std::slop is a persistent, SQLite-driven C++ CLI agent. It remembers your work through per-session ledgers, providing long-term recall and structured state management, and it features built-in Git integration. Its goal is to be an agent whose context, and the use of that context, are fully transparent and configurable.
- 🎭 Personas & Skills: Define global agent instructions via `AGENTS.md` and extend capabilities using modular, on-demand `SKILL.md` files.
- 📖 Ledger-Driven: All interactions and tool calls are stored in SQLite for persistence and auditability.
- 📝 Session Scratchpad: Maintain a per-session planning buffer with `/scratchpad edit`, `/scratchpad save`, and the `read_scratchpad`/`write_scratchpad` tools.
- 🎛️ Context Control: Granular control over conversation history via SQL-backed retrieval and rolling windows. Because context is built per-session, you can create multiple sessions and even clone existing ones to go down different paths.
- 📬 Mail Model: A patch-based iteration workflow for complex features. Patches are prepared on a staging branch, reviewed as atomic units, and only finalized after approval. Use this if you want a clean, bisect-safe view of changes and want to stay in the loop. You can, of course, offload the reviews to a `code_reviewer` skill as well.
- 🤖 Multi-Model: Supports Google Gemini, OpenAI-compatible APIs (OpenRouter, etc.), and the OpenAI Responses API (with ChatGPT Plus/Pro OAuth).
- 📣 Hotwords: Quick, single-turn skill activation using `hey <skill> <query>` syntax, e.g. "hey code_reviewer review these patches".
The project ships Linux x86-64 and macOS binaries with every release, which you can use directly.
- C++17 compiler (Clang/GCC)
- Bazel (Bazelisk recommended)
- Git: Targets must be valid Git repositories. Usually, a `git add` and an initial commit are sufficient to enable all the Git-powered features.
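A minimal sketch of satisfying the Git requirement, as described above (the directory and file names are illustrative):

```sh
cd "$(mktemp -d)"                       # stand-in for your project directory
git init -q
git config user.email "you@example.com" # only needed if no global git identity is set
git config user.name "You"
echo 'int main() {}' > main.cpp
git add main.cpp
git commit -qm "initial commit"
git rev-parse --is-inside-work-tree     # prints "true"
```

With one commit in place, the repository has a valid HEAD, which is typically all the Git-driven tooling needs.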
```sh
# Build the binary
bazel build //:std_slop

# Optional: Add to your PATH
cp ./bazel-bin/std_slop /usr/local/bin/
```

std::slop works best when it can track a specific project. Initialize a git repository and run it from the root:
```sh
mkdir my-project && cd my-project
git init
std_slop
```

For quick one-off tasks, you can use Batch Mode:
```sh
std_slop --prompt "Refactor main.cpp to remove all unused includes"
```

Batch Mode also accepts `--model` to specify the model to use and `--session` to indicate the session the prompt should be executed under. Batch Mode works off an in-memory SQLite database; if you want the database persisted, point it at a DB file with the `--prompt-db` argument.
`/` commands are also supported.
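The batch-mode flags described above can be combined in one invocation. A sketch, where the session name and database path are illustrative and `std_slop` is assumed to be on your PATH:

```sh
std_slop --prompt "Refactor main.cpp to remove all unused includes" \
  --model gemini-3-flash-preview \
  --session refactor-pass \
  --prompt-db "$HOME/slop.db"
```

Pinning `--session` lets repeated batch runs accumulate context in the same ledger, and `--prompt-db` keeps that ledger on disk instead of in memory.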
Read the User Guide for a detailed understanding of how to use std_slop, or the Walkthrough to start with something simple.
- OpenAI OAuth (Responses API): run `./slop_auth.sh chatgpt-plus`, then launch with `--openai_oauth`. You have to grab this script from the repo first.
You can configure std::slop using environment variables or a configuration file.
The agent looks for a configuration file at `~/.config/slop/config.ini`. You can also specify a custom path using the `--config` flag.
It is STRONGLY RECOMMENDED that slop.db live in a central directory or outside the codebase. The agent generates two other artifact files alongside it; at a minimum, ensure your .gitignore covers them. The context ledger is stored entirely in the database, and it can inadvertently capture information from your environment if you are not careful (e.g. environment variables).
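One way to keep the ledger out of version control, as recommended above. This is a sketch that assumes the sibling artifact files share the `slop.db` prefix — check your working tree for the actual names:

```sh
cd "$(mktemp -d)" && git init -q   # demo repo; run this from your real repo root instead
echo 'slop.db*' >> .gitignore      # covers slop.db plus prefixed sibling artifacts (assumption)
git check-ignore slop.db           # prints "slop.db" when the rule matches
```

Storing the database under `~/.config/slop/` or another central directory avoids the problem entirely.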
```ini
[slop]
model = gemini-3-flash-preview
# OR
openai_api_key = sk-...
openai_base_url = https://api.openai.com/v1
# use_responses = true   # optional: use OpenAI Responses API with API key mode
# openai_oauth = true    # optional: use OpenAI OAuth token + Responses API
# openai_oauth_token_path = /custom/path/chatgpt_plus_token.json
```

See docs/example_config.ini for a full list of options.
`SLOP_DEBUG_HTTP=1`: Enable full verbose logging of all HTTP traffic (headers and bodies).
- C++ Standard: C++17.
- Style: Google C++ Style Guide.
- Exceptions: Disabled (-fno-exceptions).
- Memory: RAII and std::unique_ptr exclusively.
- Error Handling: absl::Status and absl::StatusOr.
- Sanitizers: ASan- and TSan-clean at all times.
- Personas & Skills: Understanding global context injection and modular skills.
- User Guide: Detailed commands and workflow tips.
- Architecture & Schema: Understanding the database-driven engine.
- Sessions: How context isolation and management work.
- Context Management: The history and strategy for managing model memory.
- Walkthrough: A step-by-step example of using the agent.
- Contributing: Code style, formatting, and linting guidelines.
The core logic is divided into modules:
- `database.h`: Manages the SQLite-backed ledger. Handles persistence for messages, memos, tools, and skills.
- `tool_dispatcher.h`: Implements a thread-safe execution engine. It dispatches multiple tool calls concurrently while ensuring results are returned in the proper order for the LLM.
- `cancellation.h`: Provides a mechanism for interrupting tasks. It supports registering callbacks to kill shell processes or abort HTTP requests.
- `orchestrator.h`: High-level interface for model interaction. Implementations for Gemini and OpenAI manage history windowing and response parsing.
- `shell_util.h`: Executes shell commands in a separate process group, with support for live output polling and termination on cancellation.
- `http_client.h`: A minimalist, cancellation-aware HTTP client used for all model API calls.
- `interface/`: Implements the terminal UI. The UI is minimal but clean, using readline for user input along with color and ASCII escape codes.
- `markdown/`: Uses `tree-sitter-markdown` to provide syntax highlighting (C++, Python, Go, JS, Rust, Bash) and structured rendering for agent responses. This is a standalone Markdown parser/renderer library in C++.
- `main.cpp`: The primary event loop. Coordinates between the Orchestrator, ToolDispatcher, and UI.
