Lightweight single-process agent framework exposing both A2A and MCP on a single HTTP port.
Each agentling is a small, focused AI agent whose identity is defined by a YAML config file — name, description, system prompt, tools, and skills. The framework handles protocol compliance, conversation journaling, and context management. The LLM is the agent; the framework records and replays.
```bash
pip install -e ".[dev]"

# Create your agent definition
cp agent.example.yaml agent.yaml

# Run with mock LLM (no API key needed)
AGENT_CONFIG=./agent.yaml AGENT_LLM_BACKEND=mock AGENT_API_KEY=dev agentling

# Run with Anthropic
AGENT_CONFIG=./agent.yaml ANTHROPIC_API_KEY=sk-ant-... AGENT_API_KEY=your-key agentling

# See available tools
agentling --list-tools
```

The agent serves:

- `GET /.well-known/agent-card.json` — A2A Agent Card (public, no auth)
- `POST /a2a` — A2A JSON-RPC endpoint
- `POST /mcp` — MCP Streamable HTTP endpoint
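As a rough illustration, an A2A client sends a JSON-RPC request to `/a2a`. The sketch below builds such a payload in Python; the method and field names (`message/send`, `parts`, `messageId`) are assumptions based on the A2A specification and are not verified against this framework's exact wire format:

```python
import json

# Illustrative A2A JSON-RPC payload a client might POST to /a2a.
# Field names are assumptions from the A2A spec, not this framework.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "message/send",
    "params": {
        "message": {
            "role": "user",
            "parts": [{"kind": "text", "text": "List the cluster nodes"}],
            "messageId": "msg-001",
        }
    },
}
body = json.dumps(payload)
```

In practice you would use the `a2a-sdk` client rather than hand-building requests.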
Create `/etc/systemd/system/agentling.service`:

```ini
[Unit]
Description=Agentling
After=network.target

[Service]
Type=simple
User=agentling
WorkingDirectory=/opt/agentling
EnvironmentFile=/opt/agentling/.env
ExecStart=/opt/agentling/venv/bin/agentling
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

```bash
# Set up
sudo useradd -r -s /bin/false agentling
sudo mkdir -p /opt/agentling
sudo python3 -m venv /opt/agentling/venv
sudo /opt/agentling/venv/bin/pip install agentlings

# Copy your config
sudo cp agent.yaml /opt/agentling/agent.yaml
sudo cp .env /opt/agentling/.env   # ANTHROPIC_API_KEY, AGENT_API_KEY, AGENT_CONFIG=./agent.yaml

# Start
sudo systemctl daemon-reload
sudo systemctl enable --now agentling
sudo journalctl -u agentling -f
```

Create `~/Library/LaunchAgents/com.donkeywork.agentling.plist`:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.donkeywork.agentling</string>
    <key>ProgramArguments</key>
    <array>
        <string>/path/to/venv/bin/agentling</string>
    </array>
    <key>WorkingDirectory</key>
    <string>/path/to/agentling</string>
    <key>EnvironmentVariables</key>
    <dict>
        <key>AGENT_CONFIG</key>
        <string>./agent.yaml</string>
        <key>ANTHROPIC_API_KEY</key>
        <string>sk-ant-...</string>
        <key>AGENT_API_KEY</key>
        <string>your-key</string>
    </dict>
    <key>KeepAlive</key>
    <true/>
    <key>StandardErrorPath</key>
    <string>/tmp/agentling.err</string>
</dict>
</plist>
```

```bash
launchctl load ~/Library/LaunchAgents/com.donkeywork.agentling.plist
tail -f /tmp/agentling.err
```

Agent identity lives in a YAML file (`agent.yaml`):
```yaml
name: k3s-agentling
description: A k3s cluster management agent

tools:
  - bash
  - filesystem

skills:
  - id: k8s-ops
    name: Kubernetes Operations
    description: Manage cluster resources, diagnose issues, apply manifests
    tags: [kubernetes, k3s, devops]
  - id: file-management
    name: File Management
    description: Read, write, and search configuration files
    tags: [files, yaml]

system_prompt: |
  You are a DevOps engineer managing a k3s Kubernetes cluster.
  All configuration changes go through /mnt/lab/k3s as the source of truth.
  Never use kubectl patch/edit/set directly — write manifests and apply them.
  Before any destructive operation, describe the impact and ask for confirmation.
```

Point to it with `AGENT_CONFIG=./agent.yaml`.
| Group | Tools | Description |
|---|---|---|
| `bash` | `bash` | Shell command execution with timeout |
| `filesystem` | `read_file`, `write_file`, `edit_file`, `list_directory`, `search_files` | File operations with offset/limit, find-and-replace, glob search |
| `memory` | `memory_edit` | Read and write the agent's persistent long-term memory |

All tool groups are off by default; enable the ones you need in `agent.yaml`. Run `agentling --list-tools` for details.
```bash
docker build -t agentling:latest .
docker run -e AGENT_API_KEY=your-key -e AGENT_LLM_BACKEND=mock -p 8420:8420 agentling
```

Secrets and runtime settings stay in env vars (or a `.env` file):
| Variable | Default | Description |
|---|---|---|
| `AGENT_CONFIG` | — | Path to agent YAML definition |
| `ANTHROPIC_API_KEY` | — | Anthropic API key (required for real LLM) |
| `AGENT_API_KEY` | — | API key for authenticating clients |
| `AGENT_MODEL` | `claude-sonnet-4-6` | Anthropic model ID |
| `AGENT_MAX_TOKENS` | `4096` | Max tokens per LLM response |
| `AGENT_HOST` | `0.0.0.0` | Bind address |
| `AGENT_PORT` | `8420` | Bind port |
| `AGENT_DATA_DIR` | `./data` | JSONL journal storage directory |
| `AGENT_LOG_LEVEL` | `INFO` | Log level |
| `AGENT_LLM_BACKEND` | `anthropic` | `anthropic` or `mock` |
| `AGENT_EXTERNAL_URL` | — | Public URL for Agent Card (needed in Docker/k8s) |
| `AGENT_OTEL_ENDPOINT` | — | OpenTelemetry collector endpoint |
| `AGENT_OTEL_PROTOCOL` | `http` | Collector protocol (`http` or `grpc`) |
| `AGENT_OTEL_INSECURE` | `true` | Disable TLS for collector connection |
| `AGENT_OTEL_HEADERS` | — | Comma-separated key=value pairs for collector auth |
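An illustrative `.env` for a real deployment might look like this (all values are placeholders):

```bash
# Example .env (placeholder values)
AGENT_CONFIG=./agent.yaml
ANTHROPIC_API_KEY=sk-ant-...
AGENT_API_KEY=your-key
AGENT_PORT=8420
AGENT_DATA_DIR=./data
```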
Agentlings can maintain persistent long-term memory — a curated set of key-value facts that survive across conversations. Memory transforms an agent from a tool that forgets into one that learns.
Memory is a JSON file (`data/memory/memory.json`) containing entries like:

```json
{
  "entries": [
    {
      "key": "cluster-node-count",
      "value": "4 nodes: node1 (control), node2-4 (workers)",
      "recorded": "2026-04-01T10:00:00Z"
    }
  ]
}
```

The memory block is injected into the system prompt on every LLM call, between the agent's identity and the conversation history. The agent sees its accumulated knowledge as working context, not as a separate tool call.
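Conceptually, the injection is string assembly: identity first, then the rendered memory block. This sketch is illustrative only; the framework's actual block format and token accounting are internal:

```python
def render_memory_block(memory: dict) -> str:
    # Turn stored entries into a text block for system-prompt injection.
    # (Hypothetical format; the real template may differ.)
    lines = ["Long-term memory:"]
    for entry in memory.get("entries", []):
        lines.append(f"- {entry['key']}: {entry['value']}")
    return "\n".join(lines)

memory = {"entries": [{"key": "cluster-node-count",
                       "value": "4 nodes: node1 (control), node2-4 (workers)",
                       "recorded": "2026-04-01T10:00:00Z"}]}

identity = "You are a DevOps engineer managing a k3s Kubernetes cluster."
system_prompt = identity + "\n\n" + render_memory_block(memory)
```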
When the `memory` tool group is enabled, the agent gets a `memory_edit` tool with three operations:

| Operation | Description |
|---|---|
| `set` | Upsert an entry by key. Updates the timestamp. |
| `remove` | Delete an entry by key. |
| `list` | Return all current entries. |
The agent decides what to remember based on its system prompt. A DevOps agent might store cluster topology and known issues. A support agent might store escalation paths and recurring problems.
```bash
# Show current memory
agentling memory show
```

Memory behaviour is configured under the `memory` key:

```yaml
memory:
  token_budget: 2000        # max tokens for the memory block in the system prompt
  # injection_prompt: null  # override the memory/data-dir-awareness template
```

The sleep cycle is a nightly process that journals the day's activity, consolidates new knowledge into memory, prunes stale entries, and cleans up old files. It maps to biological sleep phases.
```mermaid
graph LR
    L[Light Sleep<br/>Gate check] --> D[Deep Sleep<br/>Replay & journal]
    D --> R[REM<br/>Integrate & prune]
    R --> H[Housekeeping<br/>Retention cleanup]
```
**Light sleep.** A quick gate check: were there any conversations today? If not, skip everything. No LLM calls, no cost.
**Deep sleep.** For each of today's conversations, the sleep cycle reads the JSONL journal from the last compaction marker onward and submits all summary requests as a single batch to the Anthropic Message Batches API. Batch processing costs 50% of standard pricing and runs the requests in parallel.
Each summary call receives the agent's system prompt (so the agent's persona shapes what it considers important), the current memory, and the conversation content. The LLM returns a structured `ConversationSummary` with a narrative and memory candidates.
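The structured result can be pictured as a small dataclass. The field names below are assumptions inferred from the description, not the framework's actual model:

```python
from dataclasses import dataclass, field

@dataclass
class ConversationSummary:
    """Hypothetical shape of a per-conversation summary."""
    conversation_id: str
    narrative: str                 # prose account of what happened
    memory_candidates: list[dict] = field(default_factory=list)  # facts proposed for memory

summary = ConversationSummary(
    conversation_id="abc123",
    narrative="Diagnosed a failing ingress and applied a corrected manifest.",
    memory_candidates=[{"key": "ingress-fix", "value": "manifest lives in /mnt/lab/k3s"}],
)
```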
Results are written to `data/journals/YYYY-MM-DD.md`.
**REM.** A single LLM call receives the current memory, today's journal, and all extracted memory candidates. It integrates new facts, deduplicates, reviews existing entries for staleness, and returns a `ConsolidatedMemory`, the complete updated memory store, which is written atomically to `memory.json`.
**Housekeeping.** Deletes conversation JSONL files older than `conversation_retention_days` and journal files older than `journal_retention_days`.
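The housekeeping pass amounts to an age-based file sweep. A minimal sketch, using mtime as the age signal (the real implementation may instead parse dates from filenames such as `2026-04-01.md`):

```python
import time
from pathlib import Path

def prune_old_files(directory: Path, pattern: str, retention_days: int) -> list[Path]:
    """Delete files matching the glob pattern whose mtime is past the cutoff."""
    cutoff = time.time() - retention_days * 86400
    removed = []
    for path in sorted(directory.glob(pattern)):
        if path.stat().st_mtime < cutoff:
            path.unlink()
            removed.append(path)
    return removed
```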
```yaml
sleep:
  schedule: "0 2 * * *"            # cron expression (default: 2am daily)
  journal_retention_days: 30       # keep journals for 30 days
  conversation_retention_days: 14  # keep JSONL conversations for 14 days
  memory_max_entries: 50           # hard cap after consolidation
  # model: null                    # override model for sleep calls
  # summary_prompt: null           # override per-conversation summary prompt
  # consolidation_prompt: null     # override REM consolidation prompt
```

```bash
# Trigger sleep cycle manually
agentling sleep --date 2026-04-01
```

```
data/
  abc123.jsonl          # conversation journals (flat, as before)
  def456.jsonl
  memory/
    memory.json         # persistent memory store
  journals/
    2026-04-01.md       # daily sleep journal
    2026-04-02.md
```
The agent is told about this directory structure and can use its filesystem tools to search past journals and conversation logs for context beyond what fits in memory.
The sleep cycle and memory tool emit spans and metrics to an OpenTelemetry collector when telemetry is enabled.
```yaml
telemetry:
  enabled: true
  endpoint: "http://otel-collector:4318"
  protocol: "http"          # "http" or "grpc"
  service_name: "agentling"
  insecure: true
  headers:                  # optional auth headers
    Authorization: "Bearer your-token"
```

Or via env vars: `AGENT_OTEL_ENDPOINT=http://collector:4318 AGENT_OTEL_HEADERS="Authorization=Bearer tok"`.
When telemetry is disabled (the default) or the OpenTelemetry packages are not installed, all instrumentation is a no-op.
```mermaid
graph TB
    A2A[A2A Client] -->|POST /a2a| A2ASDK[a2a-sdk Server]
    MCP[MCP Client] -->|POST /mcp| MCPSDK[mcp SDK Server]
    A2ASDK --> Executor[AgentlingExecutor]
    Executor --> Loop[MessageLoop]
    MCPSDK --> Loop
    Loop --> Store[JSONL Store]
    Loop --> LLM[LLM Client]
    Loop --> Tools[Tool Registry]
```
Both protocols feed into a single `MessageLoop.process_message()` entry point. Conversations are persisted as append-only JSONL journals, with compaction markers serving as replay cursors.
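The compaction-marker mechanic can be sketched like this. The record shapes (a `"type"` field with a `"compaction"` value) are assumptions for illustration; the point is the cursor behaviour, where only records after the last marker form the live context:

```python
import json
from pathlib import Path

def replay_since_compaction(journal: Path) -> list[dict]:
    """Replay a JSONL journal, restarting at each compaction marker."""
    entries: list[dict] = []
    for line in journal.read_text().splitlines():
        record = json.loads(line)
        if record.get("type") == "compaction":
            entries = []          # marker acts as a replay cursor: drop earlier context
        else:
            entries.append(record)
    return entries
```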
```bash
# Unit tests (no network, no LLM)
pytest tests/unit/ -v

# Integration tests (starts real server with mock LLM)
pytest tests/integration/ -v

# All tests
pytest tests/ -v
```

Integration tests use native SDK clients — `a2a-sdk` `ClientFactory` for A2A and `mcp` `ClientSession` for MCP — talking to a real server over HTTP. All LLM responses are mocked.

