Local code review, powered by your machine. By default Stet talks to Ollama on your computer: no cloud API keys required for that path, and prompts stay on your machine. Uses the hardware you already have—no extra cost.
- Free: Default setup runs locally; no per-seat or per-request fees to the vendor for typical Ollama use.
- Private (default path): With Ollama on localhost, code and prompts are not sent to a remote SaaS.
- Review-only: Focused on review, not auto-fix; you decide what to change.
- Sustainable: Default setup uses your local hardware instead of datacenter APIs.
Stet is a Latin word from proofreading meaning "let it stand"—an instruction to keep existing text unchanged. Stet is review-only: it helps you approve or flag code, not rewrite it.
- Git
- Default LLM — Ollama: Install Ollama, run `ollama serve`, and pull a model (e.g. `ollama pull qwen3-coder:30b`). Override the model in `.review/config.toml` or with `STET_MODEL` (e.g. `qwen2.5-coder:32b`); see the example after this list.
- Optional — OpenAI-compatible HTTP API: See OpenAI-compatible API (optional) below; no Ollama install required for that mode.
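To override the model for a single run without editing config, the `STET_MODEL` variable can be set inline (a quick sketch; assumes the model has already been pulled):

```sh
# One-off model override for a single review (model name from the examples above)
STET_MODEL=qwen2.5-coder:32b stet start
```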
- Set up your LLM (Ollama by default; see Prerequisites).
- Install stet (see Installation).
- From your repo root: `stet doctor`, then `stet start`.
`stet doctor` checks Git and reachability of your configured LLM (Ollama or OpenAI-compat base URL). On success it may still print "Ollama OK" even when `provider = "openai"`; use exit code 0 and the `Model:` line as the signal.
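Put together, a minimal first run looks like this (a sketch; assumes stet is installed and your LLM is reachable):

```sh
cd path/to/your/repo   # any Git repository with commits to review
stet doctor            # verify Git and the configured LLM endpoint
stet start             # start a review from the default baseline
```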
A typical review cycle on a branch with new commits:
- Check environment — Run `stet doctor` to verify Git and the configured LLM endpoint (optional but recommended once).
- Start the review — Run `stet start` or `stet start HEAD~3` to review the last 3 commits; wait for the run to complete.
- Inspect findings — Use `stet status` or `stet list` to see findings and IDs. In the Cursor extension, use the findings panel and “Copy for chat.”
- Triage — Run `stet dismiss <id>` or `stet dismiss <id> <reason>` for findings you want to ignore (reasons: `false_positive`, `already_correct`, `wrong_suggestion`, `out_of_scope`); fix code as needed. For when to use each reason, see Choosing a dismissal reason.
- Re-review — Run `stet run` to re-review only changed hunks. Findings that the model no longer reports (e.g. because you fixed the code) are automatically dismissed, so the active list shrinks as issues are fixed—no need to manually dismiss each one.
- Finish — When done, run `stet finish` to persist state and remove the review worktree.
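The same cycle as a terminal session (a sketch; the finding ID is illustrative):

```sh
stet start HEAD~3                     # review the last 3 commits
stet list                             # show active findings and their IDs
stet dismiss a1b2c3 false_positive    # dismiss one finding with a reason
# ...fix the code the remaining findings point at...
stet run                              # re-review only the changed hunks
stet finish                           # persist state and remove the review worktree
```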
You can also run `start`, `run`, and `finish` from the Cursor extension.
Install Ollama, then start the server and pull the suggested model:
```sh
ollama serve
ollama pull qwen3-coder:30b   # or qwen2.5-coder:32b for a lighter option
```

**Option 1 — Install script (Mac/Linux, recommended)**
The script downloads the binary from GitHub Releases for your OS and architecture, or builds from source if you run it from the repo or set STET_REPO_URL.
```sh
curl -sSL https://raw.githubusercontent.com/stet/stet/main/install.sh | bash
```

**Option 2 — Windows (PowerShell)**
You may need `-ExecutionPolicy Bypass` for the one-liner:
```powershell
irm https://raw.githubusercontent.com/stet/stet/main/install.ps1 | iex
```

**Option 3 — From source**
Clone the repo and run the install script (which will build and install), or build manually:
```sh
git clone https://github.com/stet/stet.git
cd stet
./install.sh
```

Or build manually and copy the binary to a directory in your PATH:
```sh
make build
cp bin/stet ~/.local/bin/
```

**Option 4 — Go install**
```sh
go install github.com/stet/stet/cli/cmd/stet@latest
```

(Use the repo’s module path once the project is published.)
**PATH**
The default install directory is `~/.local/bin` (Mac/Linux) or `%USERPROFILE%\.local\bin` (Windows). Ensure it is in your `PATH`. For example, add to `~/.bashrc` or `~/.zshrc`: `export PATH="$HOME/.local/bin:$PATH"`.
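For example, on Mac/Linux the export can be appended to your shell profile (adjust the file for your shell):

```sh
# Make the default install directory visible to new shells
echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.zshrc
```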
Stet can use any server that exposes an OpenAI-compatible chat/completions HTTP API (for example LM Studio’s local server, or other gateways). Configure it in `.review/config.toml` (or your global Stet config) or with environment variables:
| Setting | TOML key | Environment variable | Notes |
|---|---|---|---|
| Provider | `provider = "openai"` | `STET_PROVIDER=openai` | Default is `ollama`. |
| Base URL | `openai_base_url = "http://127.0.0.1:1234/v1"` | `STET_OPENAI_BASE_URL` | Include the `/v1` prefix if your server uses it (LM Studio default is often port 1234). |
| Completion cap | `max_completion_tokens = 4096` | `STET_MAX_COMPLETION_TOKENS` | Sent as OpenAI `max_tokens` (how many tokens the model may generate). Default 4096. This is separate from the context window (`num_ctx`, `--context`, `STET_NUM_CTX`), which sizes the prompt; a large `--context` does not automatically raise the completion cap. |
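For example, to point Stet at an LM Studio server for one shell session, the variables above can be exported before running the CLI (a sketch; the port and token cap are illustrative):

```sh
# OpenAI-compatible mode via environment variables
export STET_PROVIDER=openai
export STET_OPENAI_BASE_URL="http://127.0.0.1:1234/v1"
export STET_MAX_COMPLETION_TOKENS=4096
stet doctor   # confirm the endpoint is reachable before starting a review
```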
Privacy and keys: Pointing at localhost keeps traffic on your machine (subject to that server’s behavior). Pointing at a remote vendor URL means prompts may leave your machine and you may need API keys as required by that server—Stet does not change those rules.
Full precedence and every key are documented in the CLI–Extension Contract.
| Command | Description |
|---|---|
| `stet doctor` | Verify Git and configured LLM reachability (Ollama or OpenAI-compat) |
| `stet skill` | Print Agent Skill Markdown for LLM integration (e.g. save as `SKILL.md` in `.claude/skills/stet-integration/`) |
| `stet benchmark` | Measure model throughput (tokens/s) for the configured model |
| `stet commitmsg` | Generate a conventional git commit message from uncommitted changes (local LLM); `--commit` to commit with it, `--commit-and-review` to commit then run review |
| `stet start [ref]` | Start review from baseline |
| `stet run` | Re-run incremental review |
| `stet rerun` | Re-run full review (all hunks) with same or overridden parameters; use `--replace` to overwrite previous findings; requires an active session |
| `stet finish` | Persist state, clean up; writes session note to `refs/notes/stet` for impact analytics |
| `stet status` | Show session status |
| `stet list` | List active findings with IDs (for use with `dismiss`) |
| `stet dismiss <id> [reason]` | Mark a finding as dismissed; optional reason: `false_positive`, `already_correct`, `wrong_suggestion`, `out_of_scope` |
| `stet cleanup` | Remove orphan stet worktrees |
| `stet optimize` | Run optional DSPy optimizer (history → optimized prompt) |
| `stet stats [volume\|quality\|energy]` | Aggregate impact metrics from notes and history |
| `stet --version` | Print installed version |
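A few commands outside the core review loop, using the subcommands and flags listed above:

```sh
stet benchmark           # measure tokens/s for the configured model
stet stats quality       # aggregate quality metrics from notes and history
stet commitmsg --commit  # generate a conventional commit message and commit with it
```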
- `--nitpicky` — Report style, typos, and grammar (config: `nitpicky = true` or `STET_NITPICKY=1`).
- `--trace` — Print internal steps to stderr for debugging (`stet start --trace` or `stet run --trace`).
- `--timeout` — Per-request timeout for long or large-context reviews (e.g. `stet start --timeout 45m`). See CLI–Extension Contract.
- `stet benchmark --model MODEL` — Benchmark a specific model instead of the configured one.
- `stet benchmark --warmup` — Run a warmup call before measuring (load model, discard metrics).
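These flags can be combined on a single invocation (a sketch; assumes your review benefits from nitpicky mode and a longer timeout):

```sh
# Traced, nitpicky review with a 45-minute per-request timeout
stet start --nitpicky --trace --timeout 45m
```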
Use Stet inside Cursor (or VSCode) to view findings in a panel, jump to locations, copy to chat, and run "Finish review." The extension runs the CLI and displays results. Load the extension folder as an extension development workspace or install from a VSIX.
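If you install from a VSIX, the editor’s CLI can load it (a sketch; the file name is illustrative, and Cursor is assumed to expose the same `--install-extension` flag as VS Code):

```sh
code --install-extension stet-0.1.0.vsix     # VS Code
cursor --install-extension stet-0.1.0.vsix   # Cursor (assuming the fork keeps the flag)
```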
- On `stet finish`, Stet records session metadata in a Git note (`refs/notes/stet`) for impact analytics and integration with tools like git-ai.

- Product Requirements Document
- Implementation Plan
- CLI–Extension Contract — Configuration and tuning (including RAG and strictness).
- Review Process Internals — For contributors modifying the CLI.
- Review Quality — Actionable findings and prompt guidance.
MIT. See LICENSE.