promptcmd

GenAI prompts as runnable programs

Turn this:

$ ls myprompts/
summarize_200words.txt
summarize_400words.txt
summarize_400words_simplified.txt
summarize_400words_english.txt
summarize_400words_german.txt
Into that:
$ summarize --help

Usage: summarize [OPTIONS]

Options:
--text <text>    Text to summarize. Defaults to stdin
--lang <lang>    Output in a given language
--words <words>  Summary length
--style <style>  [possible values: simplified, eli5]
--dry            Dry run
-h, --help       Print help
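
For example (the file name and language value are illustrative):

$ cat article.txt | summarize --words 200 --lang German --style simplified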

Create custom CLI commands powered by AI. No plugins, no friction.
Just pure command-line magic.

Get Started in Seconds

Choose your installation method

curl -LsSf https://installer.promptcmd.sh | sh

For Linux and macOS

brew install tgalal/tap/promptcmd

For macOS via Homebrew

powershell -ExecutionPolicy Bypass -c "irm https://installer-ps.promptcmd.sh | iex"

For Windows via PowerShell

An MSI package is also available for Windows.

See It In Action

Describe Once, Execute Anywhere

Define prompts as simple handlebars templates, then execute them across any LLM provider. Write your prompt logic once and transform it into CLI programs with clear, usable arguments.

Start with:

promptctl create docker-inspect-logs
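
promptctl create scaffolds a prompt file for you to fill in. As a sketch of
what a finished file can look like, here is how the summarize command from
the intro might be defined (Dotprompt-style picoschema and handlebars, per
the comparison table below; the exact schema keys and model name are
illustrative, not confirmed promptcmd syntax):

---
model: anthropic/claude-sonnet-4-5
input:
  schema:
    lang?: string, output in a given language
    words?: integer, summary length
    style?: string, simplified or eli5
---
Summarize the following text in about {{words}} words.
{{#if lang}}Write the summary in {{lang}}.{{/if}}
{{#if style}}Use a {{style}} style.{{/if}}

{{STDIN}}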

Built for the Command Line

Pipe command output directly into prompts, chain through bash pipelines, and compose workflows using familiar Unix patterns.

# Analyze nginx access logs and generate report
sh-5.3$ cat nginx-access-logs | \
          nginx-report | \
          render-md --style minimal > nginx-report.html


# Prepopulate git commit message based on diff
sh-5.3$ git diff --staged | \
          commitmsg --style conventional | \
          git commit -e --file -

"Promptception"

Nest prompts within prompts for true modularity. Let each prompt do one thing well, then compose prompts into workflows without managing intermediate state yourself. Assign each prompt to the best-fit model based on complexity or cost, building powerful reasoning from simple, reusable building blocks.

sh-5.3$ promptctl cat logs-aggregator

---
---
You will be given summarized logs from several docker containers. Your task
is to summarize their findings in a short markdown report, grouped by
container, with one section per container.

At the end of the report, make sure to highlight any problems, recommendations,
or actions to take.

## Postgres
{{prompt "docker-inspect-logs" container="postgres"}}

## Nginx
{{prompt "docker-inspect-logs" container="nginx"}}

## Redis
{{prompt "docker-inspect-logs" container="redis"}}

Distribute Usage

Distribute requests across multiple models with flexible load balancing strategies. Split traffic evenly or based on cost to amortize expenses across providers.

sh-5.3$ promptctl config edit

# Google models execute twice as often as Anthropic's
[groups.coding_group]
providers = [
  { name = "google",    weight = 2 },
  { name = "anthropic", weight = 1 },
]
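
An even split is just equal weights; a sketch in the same config format
(the group and provider names are illustrative):

[groups.summarize_group]
providers = [
  { name = "openai", weight = 1 },
  { name = "google", weight = 1 },
]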


sh-5.3$ promptctl cat rust-coder

---
model: coding_group
---
Fix the following Rust code: {{STDIN}}

Monitor Usage

Track token consumption across all your prompts and models. Get visibility into your LLM usage patterns and make data-driven decisions about model selection and optimization.

sh-5.3$ promptctl stats

provider       model                         runs     prompt tokens     completion
anthropic      claude-opus-4-5               10       405               309
anthropic      claude-sonnet-4-5             7        925               844
google         gemini-2.5-flash              12       3035              6238
openai         gpt-5-mini-2025-08-07         11       4940              13953
openrouter     anthropic/claude-sonnet-4     3        124               103

Command your Prompts

Manage your entire prompt library through the intuitive promptctl CLI. List, inspect, execute, and monitor your prompts with simple commands.

sh-5.3$ promptctl

Usage: promptctl <COMMAND>

Commands:
  edit     Edit an existing prompt file
  enable   Enable a prompt
  disable  Disable a prompt
  create   Create a new prompt file [aliases: new]
  list     List commands and prompts [aliases: ls]
  cat      Print promptfile contents
  run      Run promptfile
  import   Import promptfile
  stats    Print statistics
  resolve  Resolve model name
  config   Display and edit your config.toml
  help     Print this message or the help of the given subcommand(s)
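
A typical session, using prompts from the examples above (the diff file
name is illustrative):

sh-5.3$ promptctl ls
sh-5.3$ promptctl cat commitmsg
sh-5.3$ promptctl run commitmsg -- --style conventional < staged.diff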

Related Tools

How promptcmd compares to llm, runprompt, and claude-switcher:

Usage
  promptcmd:       echo "hello world" | translate --to DE
                   echo "hello world" | promptcmd translate.prompt --to DE
                   echo "hello world" | ./translate.prompt --to DE
                   echo "hello world" | promptctl run translate -- --to DE
  llm:             echo "hello world" | llm -t translate -p to DE
  runprompt:       echo '{"to": "DE"}' | ./runprompt translate.prompt
                   echo '{"to": "DE"}' | ./translate.prompt

Prompt Files (Templating)
  promptcmd:       Dotprompt (picoschema + handlebars)
  llm:             yaml
  runprompt:       Dotprompt (picoschema + handlebars)
  claude-switcher: markdown

Direct Prompt File Execution
  promptcmd:       shebang, symlink in PATH
  runprompt:       shebang
  claude-switcher: shebang

Parameters
  promptcmd:       standard-style command-line arguments; typed (strings,
                   numbers, boolean flags, choices); optional description
                   for each argument
  llm:             -p key value; strings only; no descriptions
  runprompt:       JSON-parseable strings

History/Statistics
  promptcmd:       stats tracking (token/cost monitoring)
  llm:             DB for all history

Prompt Management
  promptcmd:       via promptctl

Model Selection
  promptcmd:       global, per provider, per model, in prompt file, or
                   automatically via load balancer
  runprompt:       in prompt file's frontmatter or argument at execution

Load Balancing
  promptcmd:       weighted distribution across providers

Caching
  promptcmd:       session-scoped for resumption

Tool Calling
  promptcmd:       not yet
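
The shebang entries above mean a prompt file can run as its own executable.
A sketch, assuming promptcmd itself is the interpreter on PATH (the exact
shebang line and the prompt body are illustrative):

#!/usr/bin/env promptcmd
---
---
Translate the following text to {{to}}: {{STDIN}}

sh-5.3$ chmod +x translate.prompt
sh-5.3$ echo "hello world" | ./translate.prompt --to DE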

Ready to Transform Your Workflow?

Start building AI-powered commands in minutes

curl -LsSf https://installer.promptcmd.sh | sh
brew install tgalal/tap/promptcmd
powershell -ExecutionPolicy Bypass -c "irm https://installer-ps.promptcmd.sh | iex"