
AgentCeption


The infinite machine behind the machines.

For all of recorded history, human progress has been constrained by one thing: the number of hours in a day multiplied by the number of people willing to work. We called it scarcity, and we built entire economic systems around managing it.

Then something changed.

Autonomous AI agents can now reason, plan, write code, open pull requests, and report back. The singularity isn't near. It's here. When agents can carry the full cognitive load of an org — planning, dependency modeling, implementation, review — humans get to operate at the level of ideas. We are moving from a world of scarcity into a world of superabundance.

AgentCeption is a bet on that future.

Brain dump → Structured plan → GitHub issues → Agent org tree → PRs → Merged

One input. Zero boilerplate. The work happens.


How It Works

Step 1 — Plan

Paste anything. The LLM converts it into a PlanSpec: phases, issues, dependencies, acceptance criteria.

Phase 1A — natural language input becomes a structured plan spec

Step 2 — Review

The YAML opens in an editor. Adjust anything. Click Create Issues to file everything on GitHub.

Phase 1B — review and edit the generated plan spec before launching
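As an illustration, the generated YAML might look something like this. The field names here are hypothetical, not the actual PlanSpec schema — they only mirror the four elements named above (phases, issues, dependencies, acceptance criteria):

```yaml
# Hypothetical PlanSpec sketch — field names are illustrative, not the real schema.
phases:
  - name: "Phase 1 — Auth"
    issues:
      - title: "Add login endpoint"
        depends_on: []
        acceptance_criteria:
          - "POST /login returns a session token"
      - title: "Add logout endpoint"
        depends_on: ["Add login endpoint"]
        acceptance_criteria:
          - "Logout invalidates the session token"
```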

Step 3 — Design the org

Configure the agent hierarchy before launching. Each node is either a Coordinator (surveys scope, assembles a team) or a Worker (does the work directly). Assign roles and cognitive architecture figures to each node, save a preset, or start from the default CEO → CTO → Engineering Manager / QA Lead → developer / reviewer tree.

Step 3 — Agent Org designer showing CEO → CTO → Engineering Manager / QA Lead → developer / reviewer tree
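A saved preset for the default tree could plausibly be expressed as nested coordinator/worker nodes. This sketch is illustrative only — the node names come from the description above, but the preset format is an assumption:

```yaml
# Illustrative org-tree preset — structure assumed, not the exact preset format.
org:
  role: CEO
  type: coordinator
  reports:
    - role: CTO
      type: coordinator
      reports:
        - role: Engineering Manager
          type: coordinator
          reports:
            - { role: developer, type: worker }
        - role: QA Lead
          type: coordinator
          reports:
            - { role: reviewer, type: worker }
```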

Step 4 — Ship

Click Launch on an unlocked phase. The org tree springs to life: coordinators cascade work down to engineers, each operating in an isolated git worktree. PRs appear. Phases unlock. You watch.

Step 4 — Mission Control tracks issues across TODO → ACTIVE → PR OPEN → REVIEWING → DONE
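The status lifecycle above can be sketched as a simple linear state machine. Only the state names come from the UI; the transition logic is illustrative, not AgentCeption's internal model:

```python
# Illustrative sketch of the Mission Control issue lifecycle.
# State names are from the UI; the transition rule is an assumption.
STATES = ["TODO", "ACTIVE", "PR OPEN", "REVIEWING", "DONE"]

def advance(state: str) -> str:
    """Move an issue to the next stage; DONE is terminal."""
    i = STATES.index(state)
    return STATES[min(i + 1, len(STATES) - 1)]

state = "TODO"
while state != "DONE":
    state = advance(state)
    print(state)
```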

Every agent has a cognitive architecture — a composed identity (historical thinkers + archetypes + skill domains + behavioral atoms) injected into its context. You are deploying reasoners, not LLM calls. This is the infrastructure for deploying judgment at scale.
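A composed identity of this kind can be pictured as string composition over the four component lists. This is a minimal sketch, assuming nothing about AgentCeption's actual types or prompt format — the class and method names are invented for illustration:

```python
# Hypothetical sketch of composing a cognitive architecture into a system
# prompt. Class and field names are illustrative, not AgentCeption's API.
from dataclasses import dataclass, field

@dataclass
class CognitiveArchitecture:
    figures: list[str]        # historical thinkers to emulate
    archetypes: list[str]     # behavioral archetypes
    skill_domains: list[str]  # areas of expertise
    atoms: list[str] = field(default_factory=list)  # fine-grained behaviors

    def system_prompt(self) -> str:
        parts = [
            "Reason in the style of: " + ", ".join(self.figures),
            "Archetypes: " + ", ".join(self.archetypes),
            "Skill domains: " + ", ".join(self.skill_domains),
        ]
        if self.atoms:
            parts.append("Behaviors: " + "; ".join(self.atoms))
        return "\n".join(parts)

reviewer = CognitiveArchitecture(
    figures=["Ada Lovelace"],
    archetypes=["skeptical reviewer"],
    skill_domains=["Python", "API design"],
    atoms=["always request tests"],
)
print(reviewer.system_prompt())
```

The composed string would then be injected into the agent's context ahead of its task, which is what makes each node a distinct reasoner rather than a bare LLM call.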

Most AI coding tools are power tools. They make individual developers faster. AgentCeption is not a power tool. It is a force multiplier on the organizational unit itself — what would a brilliant 10-person team look like if the team had no size limit? The creative renaissance that has always been one good team away is now one brain dump away.


Quick Start

Option A — Cloud (Anthropic)

git clone https://github.com/cgcardona/agentception
cd agentception
cp .env.example .env
# Set ANTHROPIC_API_KEY, GITHUB_TOKEN, GH_REPO, HOST_WORKTREES_DIR
docker compose up -d
docker compose exec agentception alembic upgrade head
open http://localhost:1337

Option B — Local models (free, private)

Run agents entirely on your own hardware with Ollama. No API key, no cloud, no usage bill. Works on macOS, Linux, and Windows — GPU-accelerated on Apple Silicon (Metal), NVIDIA (CUDA), and AMD (ROCm).

# 1. Install Ollama — https://ollama.com/download
#    macOS:   brew install ollama && brew services start ollama
#    Linux:   curl -fsSL https://ollama.com/install.sh | sh
#    Windows: download the installer from https://ollama.com/download

# 2. Pull a model
ollama pull qwen2.5-coder:7b      # fast, good quality (~4 GB)
# ollama pull qwen2.5-coder:32b   # better quality, needs 16 GB+ RAM

# 3. Clone and configure
git clone https://github.com/cgcardona/agentception
cd agentception
cp .env.example .env

Then set in .env:

LLM_PROVIDER=local
LOCAL_LLM_BASE_URL=http://host.docker.internal:11434
LOCAL_LLM_MODEL=qwen2.5-coder:7b
GITHUB_TOKEN=ghp_...
GH_REPO=owner/repo
HOST_WORKTREES_DIR=/path/to/worktrees

# 4. Start
docker compose up -d
docker compose exec agentception alembic upgrade head
# macOS: open http://localhost:1337
# Linux/Windows: navigate to http://localhost:1337

Performance tip: Set WORKTREE_INDEX_ENABLED=false in .env to skip per-agent code indexing (saves ~2 GB RSS and significant CPU) when running on constrained hardware.

See docs/guides/local-llm.md for the full Ollama setup guide and model recommendations.


Environment Variables

Variable                Required    Description
GITHUB_TOKEN            Yes         GitHub PAT with repo + issues scope
GH_REPO                 Yes         Repo this instance manages — owner/repo
HOST_WORKTREES_DIR      Yes         Host path where agent worktrees are created
DATABASE_URL            No          PostgreSQL connection string (default in docker-compose.yml)
LLM_PROVIDER            No          anthropic (default) or local
ANTHROPIC_API_KEY       Cloud only  Required when LLM_PROVIDER=anthropic
LOCAL_LLM_BASE_URL      Local only  Ollama base URL, e.g. http://host.docker.internal:11434
LOCAL_LLM_MODEL         Local only  Model tag, e.g. qwen2.5-coder:7b
WORKTREE_INDEX_ENABLED  No          true/false — enable per-agent code search (default false)

See docs/guides/setup.md for the full first-run walkthrough.

Security note: By default all /api/* endpoints are unauthenticated. If your machine is on a shared network (office LAN, cloud VM, dev box), set AC_API_KEY in .env before starting. Without it, anyone who can reach port 1337 can dispatch agents and burn your Anthropic credits. Generate a key with openssl rand -hex 32.
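Both the variable name and the key-generation command come from the note above; this snippet just wires them together before first start:

```shell
# Generate a random API key and append it to .env before starting the stack
key=$(openssl rand -hex 32)
echo "AC_API_KEY=$key" >> .env
```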


MCP Integration

AgentCeption exposes an MCP server that any MCP-compatible client can use:

{
  "mcpServers": {
    "agentception": {
      "command": "docker",
      "args": ["compose", "-f", "/path/to/agentception/docker-compose.yml",
               "exec", "-T", "agentception", "python", "-m", "agentception.mcp.stdio_server"]
    }
  }
}

See docs/guides/integrate.md for the full tool reference.


Documentation

Guide                   What it covers
Setup                   First-run, Docker, environment variables
Local LLM / Ollama      Running agents on local hardware with Ollama (macOS, Linux, Windows)
Local LLM Scaling       Multi-agent concurrency and LiteLLM proxy
MCP Integration         MCP client tool integration
Dispatching Agents      How to launch, monitor, and cancel agent runs
Developer Workflow      Bind mounts, mypy, tests, build pipeline
Contributing            Branch conventions, PR process, commit style

Reference               What it covers
API Routes              Every HTTP endpoint — semantic URL taxonomy
Cognitive Architecture  Figures, archetypes, skill domains, atoms
Type Contracts          Pydantic models, TypedDicts, layer contracts

Stack

Python 3.12 · FastAPI · Jinja2 · HTMX · Alpine.js · SCSS · Pydantic v2 · SQLAlchemy (async) · Alembic · PostgreSQL · Qdrant

LLM backends: Anthropic (claude-sonnet-4-6, claude-opus-4-6) or any Ollama-compatible local model. Switch with a single env var — no code changes required.


License

MIT

About

AgentCeption — multi-agent orchestration system for AI-powered development workflows
