
# पवन — pawan

Self-healing CLI coding agent. Rust. 29 tools. AST + LSP powers. Runs on your hardware.
No subscription. No telemetry. No vendor lock-in.



Pawan reads, writes, and heals code. It has a tool-calling loop, streaming TUI, git integration, AST-level code rewriting, and works with any OpenAI-compatible API — NVIDIA NIM, MLX, Ollama, or your own endpoint.

Built by DIRMACS. Named after Power Star Pawan Kalyan — martial artist, Telugu cinema icon, Deputy CM of Andhra Pradesh. That energy: raw power, cult following, fearless execution.

## Install

```sh
cargo install pawan

# Or from source
git clone https://github.com/dirmacs/pawan && cd pawan
cargo install --path crates/pawan-cli
```

```sh
# NVIDIA NIM (free tier)
export NVIDIA_API_KEY=nvapi-...
pawan

# Local MLX on Mac (no key needed, $0 inference)
# Start mlx_lm.server, then:
PAWAN_PROVIDER=mlx pawan

# Local Ollama
PAWAN_PROVIDER=ollama PAWAN_MODEL=llama3.2 pawan
```

## What it does

```sh
pawan                  # interactive TUI with streaming markdown
pawan heal             # auto-fix compilation errors, warnings, test failures
pawan task "..."       # execute a coding task
pawan commit -a        # AI-generated commit messages
pawan review           # AI code review of current changes
pawan test --fix       # run tests, AI-analyze and fix failures
pawan explain src/x.rs # explain code
pawan run "prompt"     # headless single-prompt mode (for scripting)
pawan watch -i 10      # poll cargo check, auto-heal on errors
pawan tasks ready      # show actionable unblocked beads
pawan doctor           # diagnose setup issues
```

## Tools (29)

| Category | Tools |
| --- | --- |
| File | read, write, edit (anchor-mode + string-replace), insert_after, append, list_directory |
| Search | glob, grep, ripgrep (native `rg`), fd (native) |
| Code intelligence | ast_grep — AST-level structural search and rewrite via tree-sitter |
| Shell | bash, sd (find-replace), tree (erdtree), mise (runtime manager), zoxide |
| Git | status, diff, add, commit, log, blame, branch, checkout, stash |
| Agent | spawn_agent, spawn_agents (parallel sub-agents) |
| MCP | dynamic tool discovery from any MCP server |

### ast-grep — structural code manipulation

```
# Find all unwrap() calls across the codebase
ast_grep(action="search", pattern="$EXPR.unwrap()", lang="rust", path="src/")

# Replace them with the ? operator in one shot
ast_grep(action="rewrite", pattern="$EXPR.unwrap()", rewrite="$EXPR?", lang="rust", path="src/")
```

Matches by syntax-tree structure, not text. `$VAR` matches a single node; `$$$VAR` matches a variadic sequence of nodes.
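For illustration, here is the effect such a rewrite has on real code — a hand-written before/after sketch, not Pawan's actual output. One caveat worth knowing: `$EXPR?` only compiles where the enclosing function returns a compatible `Result`, so a purely structural rewrite may still need a signature change like the one below.

```rust
use std::num::ParseIntError;

// Before the rewrite: panics on bad input.
fn parse_port_before(s: &str) -> u16 {
    s.trim().parse().unwrap()
}

// After `$EXPR.unwrap()` -> `$EXPR?`: the error propagates to the caller.
// Note the return type had to become a Result for `?` to compile.
fn parse_port_after(s: &str) -> Result<u16, ParseIntError> {
    Ok(s.trim().parse()?)
}

fn main() {
    assert_eq!(parse_port_before("8080"), 8080);
    assert_eq!(parse_port_after("8080"), Ok(8080));
    assert!(parse_port_after("oops").is_err());
}
```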

## Architecture

```
pawan/
  crates/
    pawan-core/    # library — agent engine, 29 tools, config, healing
    pawan-cli/     # binary — CLI + ratatui TUI + AI workflows
    pawan-web/     # HTTP API — Axum SSE server (port 3300)
    pawan-mcp/     # MCP client (rmcp 0.12, stdio transport)
    pawan-aegis/   # aegis config resolution
  grind/           # autonomous data structure workspace
```

## Safety & intelligence features

- **Compile-gated confidence** — auto-runs `cargo check` after writing `.rs` files and injects errors back for self-correction
- **Path normalization** — detects and corrects the double-workspace-prefix bug in all file tools
- **Token budget tracking** — separates thinking tokens from action tokens per call, visible in the TUI (`think:130 act:270`)
- **Iteration budget awareness** — warns the model when 3 tool iterations remain
- **Think-token stripping** — removes `<think>...</think>` blocks from content and tool arguments
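Think-token stripping amounts to excising every `<think>...</think>` span from a string. A minimal sketch of that behavior (the function name is illustrative, not Pawan's actual API; an unterminated block is treated as trailing thinking text and dropped):

```rust
// Strip <think>...</think> spans from model output (hypothetical helper).
fn strip_think(input: &str) -> String {
    let mut out = String::with_capacity(input.len());
    let mut rest = input;
    while let Some(start) = rest.find("<think>") {
        out.push_str(&rest[..start]);
        match rest[start..].find("</think>") {
            // Skip past the closing tag and keep scanning what follows.
            Some(end) => rest = &rest[start + end + "</think>".len()..],
            // Unterminated block: drop the trailing thinking text.
            None => return out,
        }
    }
    out.push_str(rest);
    out
}

fn main() {
    let raw = "<think>plan: edit the file</think>Sure, editing now.";
    assert_eq!(strip_think(raw), "Sure, editing now.");
}
```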

## TUI (v0.3.0)

- **Welcome screen** — model, version, and workspace on first launch; press any key to dismiss
- **Command palette** (Ctrl+P) — fuzzy-searchable slash commands with model presets
- **F1 help overlay** — keyboard-shortcut reference, organized by category
- **Split layout** — activity panel slides in during processing (72/28 split)
- **Slash commands** — `/model`, `/search`, `/heal`, `/export`, `/tools`, `/clear`, `/quit`, `/help`
- **Message timestamps** — relative time (now, 5s, 2m, 1h) on each message
- **Scroll position** — `[2/5]` indicator in the messages title bar
- **Session stats** — tool calls, files edited, and message count in the status bar
- **Conversation export** — `/export [path]` saves to markdown with tool-call details
- **Dynamic input** — auto-resizes between 3 and 10 lines based on content
- **Streaming markdown** — bold, code, italic, headers, and lists rendered in real time
- **vim-like navigation** — `j`/`k`, `g`/`G`, Ctrl+U/D, `/search`, `n`/`N`
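The relative timestamps mentioned above (now, 5s, 2m, 1h) can be produced by bucketing a message's age; a minimal sketch, not Pawan's actual formatter:

```rust
use std::time::Duration;

// Format a message age the way the TUI displays it: now, 5s, 2m, 1h.
// Illustrative only; buckets are assumed from the README's examples.
fn relative(age: Duration) -> String {
    let s = age.as_secs();
    match s {
        0 => "now".to_string(),
        1..=59 => format!("{s}s"),
        60..=3599 => format!("{}m", s / 60),
        _ => format!("{}h", s / 3600),
    }
}

fn main() {
    assert_eq!(relative(Duration::from_secs(0)), "now");
    assert_eq!(relative(Duration::from_secs(5)), "5s");
    assert_eq!(relative(Duration::from_secs(120)), "2m");
    assert_eq!(relative(Duration::from_secs(3600)), "1h");
}
```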

## Intelligence (2026-03-20)

**Mistral Small 4 119B** — primary model. First to achieve a 100% autonomous score (interval tree 6/6). 11 NIM models benchmarked.

**ast-grep + LSP** — AST-level code search/rewrite plus rust-analyzer-powered intelligence. Structural refactors in one tool call.

**Token budget** — `reasoning_tokens` / `action_tokens` tracked per call. `thinking_budget` config caps thinking. TUI shows the `think:N act:N` split.

**Auto-install + tiered registry** — missing CLI tools auto-install via mise. 29 tools in 3 tiers (Core/Standard/Extended).
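A tiered registry like the one described can be modeled as an ordered enum, enabling every tool at or below a requested tier. A minimal sketch with made-up type and tool names, not Pawan's actual registry code:

```rust
// Tiers ordered by declaration: Core < Standard < Extended.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum Tier { Core, Standard, Extended }

// Hypothetical registry entry.
struct Tool { name: &'static str, tier: Tier }

// Enable every tool at or below the requested tier.
fn enabled(registry: &[Tool], max: Tier) -> Vec<&'static str> {
    registry.iter().filter(|t| t.tier <= max).map(|t| t.name).collect()
}

fn main() {
    let registry = [
        Tool { name: "bash", tier: Tier::Core },
        Tool { name: "ripgrep", tier: Tier::Standard },
        Tool { name: "zoxide", tier: Tier::Extended },
    ];
    assert_eq!(enabled(&registry, Tier::Standard), vec!["bash", "ripgrep"]);
}
```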

## Configuration

Priority: CLI flags > env vars > `pawan.toml` > `~/.config/pawan/pawan.toml` > defaults
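That resolution order is a first-`Some`-wins chain; a minimal illustration (not the actual config code) for a single setting such as the provider:

```rust
// Resolve one setting across the priority chain: the first source
// that provides a value wins, otherwise fall back to the default.
fn resolve(
    cli: Option<&str>,            // CLI flag
    env: Option<&str>,            // environment variable
    project: Option<&str>,        // ./pawan.toml
    user: Option<&str>,           // ~/.config/pawan/pawan.toml
    default: &str,
) -> String {
    cli.or(env).or(project).or(user).unwrap_or(default).to_string()
}

fn main() {
    // Env var set, no CLI flag: env wins over both config files.
    assert_eq!(resolve(None, Some("ollama"), Some("nvidia"), None, "nvidia"), "ollama");
    // Nothing set anywhere: fall back to the default.
    assert_eq!(resolve(None, None, None, None, "nvidia"), "nvidia");
}
```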

```sh
PAWAN_PROVIDER=nvidia           # nvidia | ollama | openai | mlx
PAWAN_MODEL=mistralai/mistral-small-4-119b-2603
PAWAN_TEMPERATURE=0.6
PAWAN_MAX_TOKENS=4096
NVIDIA_API_KEY=nvapi-...
```

```toml
# pawan.toml
provider = "nvidia"
model = "mistralai/mistral-small-4-119b-2603"
temperature = 0.6
max_tokens = 4096
max_tool_iterations = 20
thinking_budget = 0

[cloud]
provider = "nvidia"
model = "stepfun-ai/step-3.5-flash"

[eruka]
enabled = true
url = "http://localhost:8081"

[mcp.daedra]
command = "daedra"
args = ["serve", "--transport", "stdio", "--quiet"]
```

## Hybrid routing

Pawan supports local-first inference with cloud fallback:

1. **Local (primary)** — MLX on Mac M4 / Ollama / llama.cpp — $0/token
2. **Cloud (fallback)** — NVIDIA NIM (StepFun Flash) — automatic failover when local is down

Zero-cost local inference with cloud reliability as a safety net.
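The failover logic described above can be sketched as "try local, and on any error fall back to cloud." A minimal illustration with hypothetical closures standing in for real provider clients:

```rust
#[derive(Debug, PartialEq)]
enum Provider { Local, Cloud }

// Try the local endpoint first; on any error, fail over to the cloud
// fallback. Illustrative sketch, not Pawan's actual routing code.
fn complete(
    local: impl Fn(&str) -> Result<String, String>,
    cloud: impl Fn(&str) -> Result<String, String>,
    prompt: &str,
) -> Result<(Provider, String), String> {
    match local(prompt) {
        Ok(text) => Ok((Provider::Local, text)),
        Err(_) => cloud(prompt).map(|text| (Provider::Cloud, text)),
    }
}

fn main() {
    let cloud = |p: &str| -> Result<String, String> { Ok(format!("cloud: {p}")) };
    // Local endpoint down: the request fails over to the cloud provider.
    let down = |_: &str| -> Result<String, String> { Err("connection refused".into()) };
    assert_eq!(
        complete(down, cloud, "hi"),
        Ok((Provider::Cloud, "cloud: hi".to_string()))
    );
}
```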

## Model triage (11 models tested)

| Model | Provider | Coding | Tool calling | Notes |
| --- | --- | --- | --- | --- |
| Mistral Small 4 119B | NIM | Best (6/6 interval tree) | Good | First 100% autonomous score. Self-corrects via semantic reasoning. |
| StepFun Flash | NIM | Good | Best (98.9%) | Best for orchestration and multi-step tool chains. |
| MiniMax M2.5 | NIM | Good (4/5 B-Tree) | Good | Tied with Mistral on B-Tree, fewer tool calls. |
| Qwen3.5-9B-OptiQ | MLX | Execution only | 85% | Best local. 17-18 tok/s, $0. Can't generate complex algorithms. |

Full triage: dirmacs.github.io/pawan/triage/

## Ecosystem

| Project | What |
| --- | --- |
| ares | Agentic retrieval-enhanced server (RAG, embeddings, multi-provider LLM) |
| eruka | Context intelligence engine (knowledge graph, memory tiers, decay) |
| aegis | Config management + WireGuard mesh overlay (aegis-net) |
| doltares | Orchestration daemon (DAG workflows, council/consultant nodes) |
| doltclaw | Minimal Rust agent runtime |
| nimakai | NIM model latency benchmarker (Nim) |
| daedra | Self-contained web search MCP server (7 backends, automatic fallback) |

## License

MIT
