A self-compressing s-expression memory agent. Common Lisp backend, React web frontend.
The agent's mind is an s-expression store that learns its own vocabulary through compression. Talk to it in natural language — it translates to structured memories, discovers patterns via anti-unification, and grows a formal vocabulary over time. Watch the MDL score drop as it gets smarter.
```sh
# Prerequisites: SBCL, Quicklisp, Node.js

# Install web dependencies
cd web && npm install && cd ..

# Run everything
./start.sh
```

Open http://localhost:5174. Name your agent, watch it hatch, start talking.
```
Browser (React + TypeScript)           SBCL (Common Lisp)
┌───────────────────────┐              ┌─────────────────────┐
│ Terminal              │  WebSocket   │ Memory Engine       │
│ Rule Inspector        │◄────────────►│ term, match, unify  │
│ Memory Map            │s-expressions │ lgg, normalize, mdl │
│ Agent HUD (ASCII art) │              │ store, completion   │
│ LLM Client (z.ai)     │              │ classify, retire    │
└───────────────────────┘              └─────────────────────┘
```
- CL backend owns all memory state. Pure s-expression protocol over WebSocket.
- Web frontend renders the UI and handles LLM calls (z.ai API).
- LLM translates between natural language and s-expression tool calls.
- All messages on the wire are s-expressions. No JSON.
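Because everything on the wire is s-expressions, the frontend only needs a tiny reader/writer. A minimal round-trip sketch (illustrative names; the repo's actual implementation is `web/src/sexp.ts`):

```typescript
// Minimal s-expression round-trip sketch. Atoms are plain strings;
// lists are arrays. Names here are illustrative, not the repo's API.
type Sexp = string | Sexp[];

function writeSexp(s: Sexp): string {
  return typeof s === "string" ? s : `(${s.map(writeSexp).join(" ")})`;
}

function readSexp(src: string): Sexp {
  // Pad parens with spaces, then tokenize on whitespace.
  const toks = src.replace(/[()]/g, " $& ").trim().split(/\s+/);
  let i = 0;
  function parse(): Sexp {
    const t = toks[i++];
    if (t !== "(") return t;                // atom
    const list: Sexp[] = [];
    while (toks[i] !== ")") list.push(parse());
    i++;                                    // consume ")"
    return list;
  }
  return parse();
}

// Round-trip a store command as it might appear on the wire:
const msg: Sexp = ["store", ["likes", "alice", "lisp"]];
const wire = writeSexp(msg);
console.log(wire); // (store (likes alice lisp))
```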
Self-compressing memory: Store facts as s-expressions. The MDL (Minimum Description Length) scorer finds recurring patterns via anti-unification (Plotkin's LGG algorithm), mints new constructor symbols, and rewrites memories into compressed form. The agent literally grows its own vocabulary.
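The anti-unification step can be sketched in a few lines, assuming terms are nested arrays of symbols. This is only an illustration of Plotkin's LGG over s-expression terms; the variable-naming scheme is invented here, not the backend's:

```typescript
// Sketch of Plotkin's least general generalization (anti-unification).
// Identical atoms stay; lists with the same head and arity recurse
// elementwise; any disagreement pair becomes a variable, with the SAME
// pair always mapped to the SAME variable.
type Term = string | Term[];

function lgg(a: Term, b: Term, vars = new Map<string, string>()): Term {
  if (typeof a === "string" && typeof b === "string" && a === b) return a;
  if (Array.isArray(a) && Array.isArray(b)
      && a.length === b.length && a[0] === b[0]) {
    return a.map((x, i) => lgg(x, b[i], vars));
  }
  // Disagreement pair: reuse one variable per distinct (a, b) pair.
  const key = JSON.stringify([a, b]);
  if (!vars.has(key)) vars.set(key, `?x${vars.size}`);
  return vars.get(key)!;
}

// (likes alice lisp) ⊔ (likes bob lisp) = (likes ?x0 lisp)
console.log(lgg(["likes", "alice", "lisp"], ["likes", "bob", "lisp"]));
```

The MDL scorer then asks whether minting a constructor for such a shared pattern makes the total description (rules plus rewritten memories) shorter than the memories alone.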
Critical pair analysis: When new rules are learned, the system computes critical pairs (Knuth-Bendix style) and classifies them:
- Regime 1: aliases (auto-resolved)
- Regime 2: true contradictions (escalated to human)
- Regime 3: temporal supersession (later wins)
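The three regimes above can be sketched as a classifier over a critical pair. The field names and the alias/topic checks below are assumptions for illustration only, not the backend's actual interface:

```typescript
// Illustrative classifier: two rules rewrite the same term to different
// results. How "alias" and "same topic" are detected is abstracted into
// boolean inputs here; the real system derives them from the rules.
type Rule = { lhs: string; rhs: string; timestamp: number };
type Verdict =
  | { regime: 1; note: "alias"; keep: string }         // auto-resolved
  | { regime: 2; note: "contradiction" }               // escalate to human
  | { regime: 3; note: "supersession"; keep: string }; // later wins

function classify(a: Rule, b: Rule,
                  areAliases: boolean, sameTopic: boolean): Verdict {
  if (areAliases) return { regime: 1, note: "alias", keep: a.rhs };
  if (sameTopic && a.timestamp !== b.timestamp) {
    const later = a.timestamp > b.timestamp ? a : b;
    return { regime: 3, note: "supersession", keep: later.rhs };
  }
  return { regime: 2, note: "contradiction" };
}

const v = classify(
  { lhs: "(color sky)", rhs: "blue", timestamp: 1 },
  { lhs: "(color sky)", rhs: "grey", timestamp: 2 },
  false, true);
console.log(v); // regime 3: the later rule ("grey") supersedes
```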
Animated ASCII agent: A lispy creature made of parentheses. Blinks, floats, bounces when it learns a rule, glows green on new memories, shakes red on contradictions.
LLM integration: Plain text goes to the LLM (z.ai/GLM models), s-expressions go directly to CL. The LLM uses tools (store, query, fetch, shell, read/write files) validated with Zod schemas.
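The validate-then-translate pipeline might look like the following sketch. The real frontend uses Zod schemas in `tools.ts`; here the check is hand-rolled to stay dependency-free, and the tool names are just examples:

```typescript
// Sketch: validate an LLM tool call, then translate it to the
// s-expression the CL backend expects. Hand-rolled validation stands in
// for the repo's Zod schemas.
type ToolCall = { tool: "store" | "query"; args: string[] };

function validate(raw: unknown): ToolCall {
  const o = raw as { tool?: unknown; args?: unknown };
  if (!o
      || (o.tool !== "store" && o.tool !== "query")
      || !Array.isArray(o.args)
      || !o.args.every((a) => typeof a === "string")) {
    throw new Error("invalid tool call");
  }
  return { tool: o.tool, args: o.args };
}

function toSexp(call: ToolCall): string {
  return `(${call.tool} (${call.args.join(" ")}))`;
}

console.log(toSexp(validate({ tool: "store", args: ["likes", "alice", "lisp"] })));
// (store (likes alice lisp))
```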
```
cl/                      Common Lisp backend
  lisp-agent.asd         ASDF system definition
  src/
    term.lisp            s-expr types, parser, writer, substitution
    match.lisp           one-way pattern matching
    unify.lisp           Robinson unification with occurs-check
    lgg.lisp             anti-unification (Plotkin's algorithm)
    normalize.lisp       innermost/leftmost rewriting
    mdl.lisp             MDL scorer, greedy rule learner
    store.lisp           hash-consed append-only store
    completion.lisp      critical pair computation
    classify.lisp        CP classification (4 regimes)
    stratified.lisp      time-stratified normalization
    retire.lisp          rule retirement
    protocol.lisp        WebSocket command dispatch + journal
    server.lisp          Hunchentoot + hunchensocket
  test/                  FiveAM test suite (102 checks)
web/                     React frontend
  src/
    App.tsx              root: onboarding gate, LLM orchestration
    agent.ts             ASCII art assembly, derived stats, persistence
    llm.ts               z.ai API client, system prompt
    tools.ts             Zod schemas, validation, sexp translation
    settings.ts          localStorage config
    sexp.ts              s-expression reader/writer
    ws.ts                WebSocket client with reconnect
    components/
      Terminal.tsx       input/output terminal
      RuleInspector.tsx  rules, CPs, sigma tabs
      MemoryMap.tsx      live memory listing
      AgentHud.tsx       animated ASCII agent + personality bars
      AgentCard.tsx      full stat card modal
      Onboarding.tsx     name, hatch, card screens
      Settings.tsx       LLM config modal
```
Terminal 1 — CL backend:

```sh
cd /workspace/lisp-agent
sbcl --eval '(push #p"cl/" asdf:*central-registry*)' \
     --eval '(ql:quickload :lisp-agent)' \
     --eval '(lisp-agent.server:start-server :port 8080)'
```

Terminal 2 — Web frontend:

```sh
cd web && npm run dev
```

Click the gear icon in the UI. Enter your z.ai API key, select a model (GLM-5, GLM-5-Turbo, or GLM-4.7-Flash), save.
Plain text input goes to the LLM. Input starting with `(` goes directly to the CL memory engine.
Memories persist via a journal file (store.journal). Every store command is appended. On restart, the journal is replayed to rebuild state.
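The recovery model can be sketched in a few lines. The actual journal lives in the CL backend (`protocol.lisp`); this only illustrates the append-then-replay idea:

```typescript
// Sketch of append-only journal replay: every store command is appended
// before it is applied, so replaying the journal rebuilds all state.
const journal: string[] = [];
let store: string[] = [];

function exec(cmd: string) {
  store.push(cmd);                         // apply to in-memory state
}

function storeFact(cmd: string) {
  journal.push(cmd);                       // append to the journal first...
  exec(cmd);                               // ...then apply
}

function restart() {
  store = [];                              // in-memory state is lost
  for (const cmd of journal) exec(cmd);    // replay rebuilds it
}

storeFact("(store (likes alice lisp))");
storeFact("(store (likes bob scheme))");
restart();
console.log(store.length); // 2: state fully rebuilt from the journal
```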
- Memory System Design — the original design doc
- System Architecture Spec
- LLM Integration Spec
- Agent Onboarding Spec
MIT