BrainDrive is a terminal-first proof of the Personal AI Architecture: your AI runs against your files, in your Docker container, with your model choice. The durable output is plain markdown in a library folder you own.
- Docker
Recommended path: OpenRouter for the best model quality.
```shell
docker run -it \
  -v ~/braindrive-library:/library \
  -e BRAINDRIVE_MODEL_URL=https://openrouter.ai/api/v1 \
  -e BRAINDRIVE_MODEL_NAME=anthropic/claude-sonnet-4 \
  -e BRAINDRIVE_API_KEY=sk-or-your-key \
  braindrive/braindrive
```

On first run, BrainDrive initializes the mounted library as a local git repo, greets you, and asks what area of life or work you want to organize first.
For a fully local setup, run Ollama on the host and point BrainDrive at its OpenAI-compatible endpoint:
```shell
docker run -it \
  -v ~/braindrive-library:/library \
  -e BRAINDRIVE_MODEL_URL=http://host.docker.internal:11434/v1 \
  -e BRAINDRIVE_MODEL_NAME=llama3:8b \
  braindrive/braindrive
```

If you are on Linux, add `--add-host=host.docker.internal:host-gateway` to the `docker run` command. BrainDrive does not require `BRAINDRIVE_API_KEY` for local Ollama models.
For a reusable local setup, copy `.env.example` to `.env`, edit the values, then run:

```shell
docker compose up --build
```

Use `BRAINDRIVE_LIBRARY_HOST_PATH` in `.env` to control which host folder is mounted into the container.
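A compose file along these lines ties the setup together. This is a sketch, not the project's actual `docker-compose.yml`; only the image name and the documented environment variables come from this README, and the rest of the layout is assumed:

```yaml
# Illustrative compose sketch; layout beyond the documented
# variables is an assumption, not the project's real file.
services:
  braindrive:
    image: braindrive/braindrive
    stdin_open: true   # keep stdin open for the terminal chat
    tty: true
    env_file: .env     # BRAINDRIVE_MODEL_URL, BRAINDRIVE_MODEL_NAME, etc.
    volumes:
      - ${BRAINDRIVE_LIBRARY_HOST_PATH}:/library
```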
The prompt flow lives in `docs/example-prompts`:
- Interview: discover the topic and produce structured notes.
- Spec Generation: turn those notes into a reviewable spec.
- Plan Generation: turn the approved spec into an execution plan.
The intended loop is interview -> spec -> plan, with BrainDrive showing drafts in chat first and only saving after approval.
BrainDrive reads configuration with this precedence: environment variables in the container or `.env` -> library config -> defaults.
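The precedence rule can be sketched as a simple lookup. This is a minimal illustration of the documented order, not BrainDrive's actual implementation; the function and key names are assumptions:

```python
import os

# Built-in defaults; "/library" is the documented default library path.
DEFAULTS = {"library_path": "/library"}

def resolve(key, env_var, library_config):
    """Resolve one setting: env var -> library config -> default."""
    value = os.environ.get(env_var)
    if value:                       # 1. environment (container env or .env)
        return value
    if key in library_config:       # 2. <library>/.braindrive/config.json
        return library_config[key]
    return DEFAULTS.get(key)        # 3. built-in default
```

An environment variable always wins over a value in the library config, so a one-off `-e BRAINDRIVE_MODEL_NAME=...` overrides the stored preference without editing the library.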
| Variable | Required | Purpose |
|---|---|---|
| `BRAINDRIVE_MODEL_URL` | Yes | OpenAI-compatible base URL such as OpenRouter or Ollama |
| `BRAINDRIVE_MODEL_NAME` | Yes | Model name to request from that provider |
| `BRAINDRIVE_API_KEY` | Cloud only | API key for providers that require auth |
| `BRAINDRIVE_LIBRARY_PATH` | No | In-container library path, defaults to `/library` |
| `BRAINDRIVE_LIBRARY_HOST_PATH` | Compose only | Host folder to mount into `/library` |
Library config lives at `<library>/.braindrive/config.json` and is for durable preferences like model URL and model name. API keys should stay in environment variables or `.env`, not in the mounted library.
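A library config might look like the fragment below. The exact key names are an assumption inferred from the variables above; check the project docs for the real schema. Note the deliberate absence of an API key:

```json
{
  "model_url": "http://host.docker.internal:11434/v1",
  "model_name": "llama3:8b"
}
```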
- Creates and switches between topic folders through conversation
- Writes plain markdown documents with approval before any file change
- Uses git for local version history after approved writes
- Supports paste-in workflows for interview -> spec -> plan
- Works with OpenAI-compatible providers including OpenRouter and Ollama
- Starts fresh on terminal reopen while keeping all library files on disk
The BrainDrive MVP does not yet include conversation persistence, a web UI, multi-terminal coordination, or a skill system. See `docs/mvp-spec.md` for the current MVP scope and limits.
Contribution notes are in CONTRIBUTING.md. Issues, bug reports, documentation fixes, and UX feedback are all useful.
MIT. See LICENSE.