The infinite machine behind the machines.
For all of recorded history, human progress has been constrained by one thing: the number of hours in a day multiplied by the number of people willing to work. We called it scarcity, and we built entire economic systems around managing it.
Then something changed.
Autonomous AI agents can now reason, plan, write code, open pull requests, and report back. The singularity isn't near. It's here. When agents can carry the full cognitive load of an org — planning, dependency modeling, implementation, review — humans get to operate at the level of ideas. We are moving from a world of scarcity into a world of superabundance.
AgentCeption is a bet on that future.
Brain dump → Structured plan → GitHub issues → Agent org tree → PRs → Merged
One input. Zero boilerplate. The work happens.
Paste anything. The LLM converts it into a PlanSpec: phases, issues, dependencies, acceptance criteria.
The YAML opens in an editor. Adjust anything. Click Create Issues to file everything on GitHub.
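As a rough sketch, a PlanSpec might look like this (the field names below are illustrative, not the exact schema — only the four concepts named above are from the docs):

```yaml
# Hypothetical PlanSpec sketch — actual field names may differ
phases:
  - name: "Phase 1: Auth"
    issues:
      - title: "Add login endpoint"
        depends_on: []
        acceptance_criteria:
          - "POST /login returns a session token for valid credentials"
      - title: "Add session middleware"
        depends_on: ["Add login endpoint"]
        acceptance_criteria:
          - "Requests without a valid token receive 401"
```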
Configure the agent hierarchy before launching. Each node is either a Coordinator (surveys scope, assembles a team) or a Worker (does the work directly). Assign roles and cognitive architecture figures to each node, save a preset, or start from the default CEO → CTO → Engineering Manager / QA Lead → developer / reviewer tree.
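The default tree could be sketched as a nested structure like the following (an illustrative shape, not the actual saved-preset format; the figure names are placeholders):

```yaml
# Hypothetical org-tree preset — node fields are illustrative
root:
  type: coordinator
  role: CEO
  children:
    - type: coordinator
      role: CTO
      children:
        - type: coordinator
          role: Engineering Manager
          children:
            - { type: worker, role: developer, figure: "Ada Lovelace" }
        - type: coordinator
          role: QA Lead
          children:
            - { type: worker, role: reviewer, figure: "Barbara Liskov" }
```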
Click Launch on an unlocked phase. The org tree springs to life: coordinators cascade work down to engineers, each operating in an isolated git worktree. PRs appear. Phases unlock. You watch.
Every agent has a cognitive architecture — a composed identity (historical thinkers + archetypes + skill domains + behavioral atoms) injected into its context. You are deploying reasoners, not LLM calls. This is the infrastructure for deploying judgment at scale.
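Conceptually, composing such an identity might look like this (class and field names here are illustrative, not the actual AgentCeption API):

```python
from dataclasses import dataclass

@dataclass
class CognitiveArchitecture:
    """Hypothetical sketch of a composed agent identity."""
    figure: str         # historical thinker the agent channels
    archetype: str      # behavioral archetype
    skills: list[str]   # skill domains
    atoms: list[str]    # behavioral atoms

    def to_system_prompt(self) -> str:
        # Flatten the composed identity into the context the LLM sees on every turn
        return (
            f"You reason like {self.figure}, acting as a {self.archetype}. "
            f"Skills: {', '.join(self.skills)}. "
            f"Behaviors: {'; '.join(self.atoms)}."
        )

reviewer = CognitiveArchitecture(
    figure="Barbara Liskov",
    archetype="meticulous reviewer",
    skills=["type systems", "API design"],
    atoms=["cite the exact line when objecting", "prefer small diffs"],
)
print(reviewer.to_system_prompt())
```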
Most AI coding tools are power tools. They make individual developers faster. AgentCeption is not a power tool. It is a force multiplier on the organizational unit itself — what would a brilliant 10-person team look like if the team had no size limit? The creative renaissance that has always been one good team away is now one brain dump away.
git clone https://github.com/cgcardona/agentception
cd agentception
cp .env.example .env
# Set ANTHROPIC_API_KEY, GITHUB_TOKEN, GH_REPO, HOST_WORKTREES_DIR
docker compose up -d
docker compose exec agentception alembic upgrade head
open http://localhost:1337

Run agents entirely on your own hardware with Ollama. No API key, no cloud, no usage bill. Works on macOS, Linux, and Windows — GPU-accelerated on Apple Silicon (Metal), NVIDIA (CUDA), and AMD (ROCm).
# 1. Install Ollama — https://ollama.com/download
# macOS: brew install ollama && brew services start ollama
# Linux: curl -fsSL https://ollama.com/install.sh | sh
# Windows: download the installer from https://ollama.com/download
# Pull a model
ollama pull qwen2.5-coder:7b # fast, good quality (~4 GB)
# ollama pull qwen2.5-coder:32b # better quality, needs 16 GB+ RAM
# 2. Clone and configure
git clone https://github.com/cgcardona/agentception
cd agentception
cp .env.example .env

Then set in .env:
LLM_PROVIDER=local
LOCAL_LLM_BASE_URL=http://host.docker.internal:11434
LOCAL_LLM_MODEL=qwen2.5-coder:7b
GITHUB_TOKEN=ghp_...
GH_REPO=owner/repo
HOST_WORKTREES_DIR=/path/to/worktrees

# 3. Start
docker compose up -d
docker compose exec agentception alembic upgrade head
# macOS: open http://localhost:1337
# Linux/Windows: navigate to http://localhost:1337Performance tip: Set
WORKTREE_INDEX_ENABLED=falsein.envto skip per-agent code indexing (saves ~2 GB RSS and significant CPU) when running on constrained hardware.
See docs/guides/local-llm.md for the full Ollama setup guide and model recommendations.
| Variable | Required | Description |
|---|---|---|
| GITHUB_TOKEN | ✅ | GitHub PAT with repo + issues scope |
| GH_REPO | ✅ | Repo this instance manages — owner/repo |
| HOST_WORKTREES_DIR | ✅ | Host path where agent worktrees are created |
| DATABASE_URL | ✅ | PostgreSQL connection string (default in docker-compose.yml) |
| LLM_PROVIDER | — | anthropic (default) or local |
| ANTHROPIC_API_KEY | Cloud only | Required when LLM_PROVIDER=anthropic |
| LOCAL_LLM_BASE_URL | Local only | Ollama base URL, e.g. http://host.docker.internal:11434 |
| LOCAL_LLM_MODEL | Local only | Model tag, e.g. qwen2.5-coder:7b |
| WORKTREE_INDEX_ENABLED | — | true/false — enable per-agent code search (default false) |
See docs/guides/setup.md for the full first-run walkthrough.
Security note: By default all /api/* endpoints are unauthenticated. If your machine is on a shared network (office LAN, cloud VM, dev box), set AC_API_KEY in .env before starting. Without it, anyone who can reach port 1337 can dispatch agents and burn your Anthropic credits. Generate a key with openssl rand -hex 32.
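If openssl isn't available, Python's standard-library secrets module produces an equivalent key:

```python
import secrets

# 32 random bytes rendered as 64 hex characters, the same shape
# of key that `openssl rand -hex 32` produces
key = secrets.token_hex(32)
print(f"AC_API_KEY={key}")
```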
AgentCeption exposes an MCP server that any MCP-compatible client can use:
{
"mcpServers": {
"agentception": {
"command": "docker",
"args": ["compose", "-f", "/path/to/agentception/docker-compose.yml",
"exec", "-T", "agentception", "python", "-m", "agentception.mcp.stdio_server"]
}
}
}

See docs/guides/integrate.md for the full tool reference.
| Guide | What it covers |
|---|---|
| Setup | First-run, Docker, environment variables |
| Local LLM / Ollama | Running agents on local hardware with Ollama (macOS, Linux, Windows) |
| Local LLM Scaling | Multi-agent concurrency and LiteLLM proxy |
| MCP Integration | MCP client tool integration |
| Dispatching Agents | How to launch, monitor, and cancel agent runs |
| Developer Workflow | Bind mounts, mypy, tests, build pipeline |
| Contributing | Branch conventions, PR process, commit style |
| Reference | What it covers |
|---|---|
| API Routes | Every HTTP endpoint — semantic URL taxonomy |
| Cognitive Architecture | Figures, archetypes, skill domains, atoms |
| Type Contracts | Pydantic models, TypedDicts, layer contracts |
Python 3.12 · FastAPI · Jinja2 · HTMX · Alpine.js · SCSS · Pydantic v2 · SQLAlchemy (async) · Alembic · PostgreSQL · Qdrant
LLM backends: Anthropic (claude-sonnet-4-6, claude-opus-4-6) or any Ollama-compatible local model. Switch with a single env var — no code changes required.
MIT