A self-evolving prompt manager that automatically optimizes your prompts: smart, easily accessible, and always improving.
Local-first, terminal-based prompt versioning & optimization with DSPy.
- DSPy-powered optimization – improve prompts with few-shot learning in one command
- Zero hassle setup – one script installs, another launches
- Model-agnostic – configure any LLM provider (OpenAI, Anthropic, etc.)
- Terminal-native – works where you work, rich TUI, no web app
- No Git required – automatic local versioning with snapshots
Run the setup script once to install Promptterfly and its dependencies:
```bash
./setup.sh
```
It will:
- Create a virtual environment (`.venv/`) if needed
- Install the package and optional extras (test/dev) of your choice
- Mark the installation as complete
After installation, start the interactive Promptterfly shell:
```bash
./start.sh
```
You can also run single commands directly without entering the REPL:
```bash
./start.sh prompt list
./start.sh version history <id>
```
Or activate the virtual environment manually and use the CLI:
```bash
source .venv/bin/activate
promptterfly --help
```
If you're new to Promptterfly, follow these steps:
1. Initialize your project (run inside your project directory):

   ```
   init
   ```

   This creates a `.promptterfly/` directory with default configuration and a `prompts/` subfolder.

2. Add an LLM model (required for optimization):

   ```
   model add mymodel --provider openai --model gpt-4 --api-key-env OPENAI_API_KEY
   model set-default mymodel
   ```

   You can use any LiteLLM-supported provider (Anthropic, Google, local, etc.).

3. Create your first prompt:

   ```
   prompt create
   ```

   You'll be prompted for a name, optional description, and the template (use `{variable}` placeholders).

4. Render with variables (optional). Create a JSON file with values for your template variables, e.g.:

   ```json
   { "name": "Alice", "count": 5 }
   ```

   Then run:

   ```
   prompt render <prompt_id> vars.json
   ```

5. Optimize your prompt:

   ```
   optimize improve <prompt_id> --strategy few_shot
   ```

   Requires a dataset (default: `.promptterfly/dataset.jsonl`). See "Optimization" below.

6. Explore version history:

   ```
   version history <prompt_id>
   version restore <prompt_id> <version_number>
   ```
```
init [--path dir]                    Initialize project
prompt list                          List prompts
prompt show <id>                     Show prompt template + metadata
prompt create                        Create prompt interactively
prompt update <id>                   Edit prompt
prompt delete <id>                   Delete prompt
prompt render <id> [vars.json]       Render with variables from JSON
version history <id>                 Show version history
version restore <id> <ver>           Restore a previous version
optimize improve <id> [--strategy few_shot] [--dataset path]
                                     Run optimization
model list                           List configured models
model add <name> [--provider ...]    Add model (interactive or flags)
model remove <name>                  Remove a model
model set-default <name>             Set default model for optimization
config show                          Show configuration
config set <key> <value>             Update configuration value
help                                 Show this help (includes all aliases)
exit / quit                          Leave REPL
```
The REPL supports short aliases for common commands:
- `ls` → `prompt list`
- `new` or `create` → `prompt create`
- `show <id>` → `prompt show <id>`
- `del` → `prompt delete`
- `run` → `prompt render`
- `hist` → `version history`
- `restore` → `version restore`
- `opt` → `optimize improve`
- `models` → `model list`
- `addmodel` → `model add`
- `setmodel` → `model set-default`
Type help inside the REPL to see the full alias mapping.
Prompt templates use Python-style {variable} placeholders. At render time, you supply a JSON file containing key-value pairs for those variables.
Example:
Template:
```
Hello {customer_name}! Thank you for contacting {company}. How can we assist you today?
```
JSON (`vars.json`):
```json
{
  "customer_name": "Alice",
  "company": "Acme Inc"
}
```
Render command:
```
prompt render <prompt_id> vars.json
```
Output:
```
Hello Alice! Thank you for contacting Acme Inc. How can we assist you today?
```
Rules:
- All variables used in the template must be present in the JSON; otherwise a `KeyError` is raised.
- Extra keys in the JSON are ignored.
- Piping the JSON via stdin (e.g., `echo '{"x":1}' | prompt render <id> -`) is not yet supported; currently a file path is required.
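Conceptually, rendering follows Python's `str.format` semantics, which match the rules above: a missing variable raises `KeyError`, and extra keyword arguments are silently ignored. A minimal sketch (illustrative only; `render_template` is not Promptterfly's actual implementation):

```python
import json


def render_template(template: str, vars_path: str) -> str:
    """Render a {variable}-style template with values from a JSON file.

    str.format raises KeyError for variables missing from the JSON and
    ignores extra keys, mirroring the documented rendering rules.
    """
    with open(vars_path, encoding="utf-8") as f:
        values = json.load(f)
    return template.format(**values)
```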
Promptterfly includes a built-in fuzzy search to quickly locate prompts by name, description, or template content.
- Run `prompt find <query>` (or aliases `search`, `f`) to search.
- The search scores matches across name, description, and template. If the best match is confident (≥80% similarity), it is displayed immediately.
- Otherwise, the top 3 results are shown, and you can choose which one to view.
- This eliminates the need for tags; use descriptive names and detailed descriptions for better recall.
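The matching behavior described above can be sketched with the standard-library `difflib`. This is illustrative only: Promptterfly's actual scorer may differ, and the names `score_prompt` and `find` are assumptions.

```python
from difflib import SequenceMatcher


def score_prompt(query: str, prompt: dict) -> float:
    # Best similarity of the query against name, description, and template.
    fields = (prompt.get("name", ""), prompt.get("description", ""),
              prompt.get("template", ""))
    return max(SequenceMatcher(None, query.lower(), field.lower()).ratio()
               for field in fields)


def find(query: str, prompts: list[dict],
         threshold: float = 0.8, top_n: int = 3) -> list[dict]:
    # A confident best match (>= 80% similarity) is returned alone;
    # otherwise the top 3 candidates are offered for selection.
    ranked = sorted(prompts, key=lambda p: score_prompt(query, p), reverse=True)
    if ranked and score_prompt(query, ranked[0]) >= threshold:
        return ranked[:1]
    return ranked[:top_n]
```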
Promptterfly uses DSPy to automatically improve your prompts.
How it works:
- You provide a dataset of input/output examples (`.promptterfly/dataset.jsonl`). Each line is a JSON object with at least an `input` field (what the user says) and a `completion` field (the ideal output).
- `optimize improve` builds a signature from your template's variables and uses `BootstrapFewShot` to select the most effective few-shot examples.
- The optimized prompt is saved as a new version.
The original prompt is automatically snapshotted before optimization, so you can always roll back.
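A minimal loader for the dataset format described above might look like this. Only the `input`/`completion` field names come from the documentation; the loader itself (and sample lines in the comments) are an illustrative sketch, not Promptterfly's code.

```python
import json


def load_dataset(path: str) -> list[dict]:
    """Load a JSONL dataset where each line is a JSON object, e.g.:

    {"input": "My order is late", "completion": "Sorry to hear that. ..."}
    {"input": "How do I reset my password?", "completion": "Use the reset link. ..."}
    """
    examples = []
    with open(path, encoding="utf-8") as f:
        for line_no, line in enumerate(f, 1):
            line = line.strip()
            if not line:
                continue  # skip blank lines
            record = json.loads(line)
            missing = {"input", "completion"} - record.keys()
            if missing:
                raise ValueError(f"line {line_no}: missing {sorted(missing)}")
            examples.append(record)
    return examples
```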
```
project-root/
├── .promptterfly/
│   ├── config.yaml
│   ├── models.yaml
│   ├── counter                 # stores last used integer ID
│   ├── prompts/
│   │   └── <prompt_id>.json    # current state (e.g., 1.json, 2.json)
│   └── versions/
│       └── <prompt_id>/
│           ├── 001.json        # snapshots (auto-version)
│           ├── 002.json
│           └── ...
```
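Given this layout, listing a prompt's snapshot numbers is a small path walk. A sketch (the helper `list_versions` is hypothetical; the paths mirror the `versions/<prompt_id>/NNN.json` convention above):

```python
from pathlib import Path


def list_versions(project_root: Path, prompt_id: int) -> list[int]:
    # Collect snapshot numbers from .promptterfly/versions/<prompt_id>/,
    # where each snapshot is a zero-padded NNN.json file.
    version_dir = project_root / ".promptterfly" / "versions" / str(prompt_id)
    if not version_dir.is_dir():
        return []
    return sorted(int(p.stem) for p in version_dir.glob("[0-9][0-9][0-9].json"))
```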
- `prompt list` – table of all prompts
- `prompt show <id>` – display template and metadata
- `prompt create` – interactive creation
- `prompt update <id>` – edit fields
- `prompt delete <id>` – remove prompt and its versions
- `prompt render <id> [vars.json]` – render with variables
- `version history <id>` – list snapshots
- `version restore <id> <version>` – rollback to a snapshot
- `optimize improve <id> [--strategy few_shot] [--dataset path]` – run DSPy optimization (creates a new version)
- `model list` – show configured models
- `model add <name>` – interactive add (provider, model, api_key_env)
- `model remove <name>` – delete a model
- `model set-default <name>` – set default for optimization
- `init [--path dir]` – initialize project
- `config show` – print configuration
- `config set <key> <value>` – update configuration
Run help inside the REPL or see the Quick Reference above for details.
- `optimize improve` takes your prompt and a dataset (default `.promptterfly/dataset.jsonl`) of input/output examples.
- DSPy builds a signature from your prompt's variables and compiles a few-shot module via `BootstrapFewShot`.
- The best demonstrations are selected and appended as an `Examples:` section.
- A new prompt version is saved automatically.
You provide quality examples; the tool selects the most effective ones.
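The final step, appending the selected demonstrations as an `Examples:` section, can be pictured as plain string assembly. This is an illustrative sketch of the output shape only; DSPy's and Promptterfly's real formatting may differ in detail.

```python
def append_examples(template: str, demos: list[dict]) -> str:
    # Append selected demonstrations under an "Examples:" heading,
    # producing the new prompt text that gets saved as a version.
    lines = [template.rstrip(), "", "Examples:"]
    for demo in demos:
        lines.append(f"Input: {demo['input']}")
        lines.append(f"Output: {demo['completion']}")
        lines.append("")  # blank line between demonstrations
    return "\n".join(lines).rstrip() + "\n"
```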
```bash
# Run test suite (after setup)
./scripts/test_all.sh

# Or manually
source .venv/bin/activate
pytest -v --cov=promptterfly --cov-report=term-missing
```
Architecture and implementation details are in docs/development.md.
- Python 3.11+
- (optional) virtual environment tooling (`venv`)
- LLM provider API key for optimization (e.g., `OPENAI_API_KEY`)
MIT