GenAI prompts as runnable programs
$ ls myprompts/
summarize_200words.txt
summarize_400words.txt
summarize_400words_simplified.txt
summarize_400words_english.txt
summarize_400words_german.txt
$ summarize --help
Usage: summarize [OPTIONS]

Options:
  --text      Text to summarize. Defaults to stdin
  --lang      Output in a given language
  --words     Summary length
  --style     [possible values: simplified, eli5]
  --dry       Dry run
  -h, --help  Print help
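With arguments, one prompt replaces the whole family of near-duplicate files above. A sketch using only the flags from the help output (the file name and argument values are illustrative):

$ summarize --words 400 --lang german < article.txt
$ summarize --words 400 --style eli5 < article.txt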
Create custom CLI commands powered by AI. No plugins, no friction.
Just pure command-line magic.
Choose your installation method
curl -LsSf https://installer.promptcmd.sh | sh
For Linux and macOS
brew install tgalal/tap/promptcmd
For macOS via Homebrew
powershell -ExecutionPolicy Bypass -c "irm https://installer-ps.promptcmd.sh | iex"
For Windows via PowerShell
For Windows via MSI package
Define prompts as simple Handlebars templates, then execute them across any LLM provider. Write your prompt logic once and turn it into CLI programs with clear, usable arguments.
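Concretely, a promptfile pairs a small frontmatter block with a Handlebars body. A minimal sketch of what a summarize promptfile might look like, assuming the model: frontmatter key and {{STDIN}} placeholder used in the examples below; the {{lang}} variable standing in for a --lang argument is an assumption:

---
model: gemini-2.5-flash
---
Summarize the following text in {{lang}}:

{{STDIN}}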
Start with:
# Analyze nginx access logs and generate report
sh-5.3$ cat nginx-access-logs | \
        nginx-report | \
        render-md --style minimal > nginx-report.html

# Prepopulate git commit message based on diff
sh-5.3$ git diff --staged | \
        commitmsg --style conventional | \
        git commit -e --file -
~
Pipe command output directly into prompts, chain through bash pipelines, and compose workflows using familiar Unix patterns.
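Because prompts behave like any other executable, they also drop into loops and subshells. A sketch, assuming the summarize command defined earlier (the paths are illustrative):

sh-5.3$ for f in docs/*.md; do
            summarize --words 200 < "$f" > "summaries/$(basename "$f" .md).txt"
        done
~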
sh-5.3$ promptctl cat logs-aggregator
---
---
You will be given summarized logs of several docker containers. Your task is
to summarize their findings in a short markdown report, grouped by container
as a section. At the end of the report, make sure to highlight any problems,
recommendations, or actions to take.

## Postgres
{{prompt "docker-inspect-logs" container="postgres"}}

## Nginx
{{prompt "docker-inspect-logs" container="nginx"}}

## Redis
{{prompt "docker-inspect-logs" container="redis"}}
~
Nest prompts within prompts for true modularity. Let each prompt do one thing well, then compose them into workflows without managing intermediate states yourself. Assign each to the best-fit model based on complexity or cost, building powerful reasoning from simple, reusable building blocks.
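For instance, the nested docker-inspect-logs prompt referenced above might itself be a small promptfile along these lines; this sketch assumes that arguments passed as container="..." become Handlebars variables, and the model and body text are illustrative:

sh-5.3$ promptctl cat docker-inspect-logs
---
model: gemini-2.5-flash
---
Inspect the recent logs of the "{{container}}" docker container and report
notable errors, warnings, and anomalies.
~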
sh-5.3$ promptctl config edit
# Google models handle twice as many requests as Anthropic's
[groups.coding_group]
providers = [
{ name = "google", weight = 2 },
{ name = "anthropic", weight = 1 },
]
~
sh-5.3$ promptctl cat rust-coder
---
model: coding_group
---
Fix the following Rust code: {{STDIN}}
~
Distribute requests across multiple models with flexible load balancing strategies. Split traffic evenly or based on cost to amortize expenses across providers.
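With the weights above, roughly two of every three coding_group requests go to google and the rest to anthropic; an even split is simply equal weights. A sketch in the same syntax (the group and provider names are illustrative):

[groups.summarize_group]
providers = [
    { name = "openai", weight = 1 },
    { name = "anthropic", weight = 1 },
]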
sh-5.3$ promptctl stats
provider    model                      runs  prompt tokens  completion
anthropic   claude-opus-4-5              10            405         309
anthropic   claude-sonnet-4-5             7            925         844
google      gemini-2.5-flash             12           3035        6238
openai      gpt-5-mini-2025-08-07        11           4940       13953
openrouter  anthropic/claude-sonnet-4     3            124         103
~
Track token consumption across all your prompts and models. Get visibility into your LLM usage patterns and make data-driven decisions about model selection and optimization.
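And because the report is plain text, it composes with standard Unix tools like everything else here (a sketch; the column layout is as shown above):

sh-5.3$ promptctl stats | grep anthropic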
sh-5.3$ promptctl
Usage: promptctl <COMMAND>

Commands:
  edit     Edit an existing prompt file
  enable   Enable a prompt
  disable  Disable a prompt
  create   Create a new prompt file [aliases: new]
  list     List commands and prompts [aliases: ls]
  cat      Print promptfile contents
  run      Run promptfile
  import   Import promptfile
  stats    Print statistics
  resolve  Resolve model name
  config   Display and edit your config.toml
  help     Print this message or the help of the given subcommand(s)
Manage your entire prompt library through the intuitive promptctl CLI. List, inspect, execute, and monitor your prompts with simple commands.
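A typical round trip might look like the following, using only the subcommands listed above; the argument forms are assumptions, and the translate prompt with its --to argument is illustrative and reappears in the comparison table below:

sh-5.3$ promptctl create translate     # scaffold a new promptfile
sh-5.3$ promptctl edit translate       # write the frontmatter and template
sh-5.3$ echo "hello world" | promptctl run translate -- --to DE
sh-5.3$ promptctl stats                # review token usage afterwards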
|  | promptcmd | llm | runprompt | claude-switcher |
|---|---|---|---|---|
| Usage | `echo "hello world" \| translate --to DE`<br>`echo "hello world" \| promptcmd translate.prompt --to DE`<br>`echo "hello world" \| ./translate.prompt --to DE`<br>`echo "hello world" \| promptctl run translate -- --to DE` | `echo "hello world" \| llm -t translate -p to DE` | `echo '{"to": "DE"}' \| ./runprompt translate.prompt`<br>`echo '{"to": "DE"}' \| ./translate.prompt` | |
| Prompt Files (Templating) | Dotprompt (picoschema + handlebars) | yaml | Dotprompt (picoschema + handlebars) | markdown |
| Direct Prompt File Execution | shebang, symlink in PATH | ✗ | shebang | shebang |
| Parameters | Standard-style command-line arguments<br>Typed: strings, numbers, boolean flags, choices<br>Optional description for each argument | `-p key value`<br>Strings only<br>No description | JSON-parseable strings | ✗ |
| History/Statistics | Stats tracking (token/cost monitoring) | DB for all history | ✗ | ✗ |
| Prompt Management | ✓ (via promptctl) | ✓ | ✗ | ✗ |
| Model Selection | Global, per provider, per model, in prompt file, or automatically via load balancer | In prompt file's frontmatter or as an argument at execution | | |
| Chat mode | ✗ | ✓ | ✗ | ✗ |
| Load Balancing | ✓ | ✗ | ✗ | ✗ |
| Caching | ✓ | ✓ | Session-scoped for resumption | |
| Tool Calling | Not yet | ✓ | ✓ | ✓ |
Start building AI-powered commands in minutes
curl -LsSf https://installer.promptcmd.sh | sh
brew install tgalal/tap/promptcmd
powershell -ExecutionPolicy Bypass -c "irm https://installer-ps.promptcmd.sh | iex"