Open-source AI agent framework with a visual workflow editor, self-hosted inference, and one-click deployment
Website · X (Twitter) · Telegram

Obelisk Core is an open-source framework for building, running, and deploying AI agents. Design workflows visually, connect to a self-hosted LLM, and deploy autonomous agents, all from your own hardware.

Status: Alpha (v0.2.0-alpha)
Obelisk Core uses several services that work together:
```
┌───────────────────────────────────┐
│  Visual Workflow Editor           │  ←  Browser UI (Next.js)
│  Design agent workflows with      │     Build, test, and deploy
│  drag-and-drop nodes              │     workflows visually
└────────────────┬──────────────────┘
                 │ executes
┌────────────────▼──────────────────┐
│  TypeScript Execution Engine      │  ←  Agent Runtime (Node.js)
│  Runs workflows as autonomous     │     Nodes: inference, Telegram,
│  agents in Docker containers      │     memory, scheduling, Clanker,
└────────────────┬──────────────────┘     Polymarket, etc.
                 │ calls
     ┌───────────┴────────┬─────────────────┬───────────────────┐
     ▼                    ▼                 ▼                   ▼
┌──────────────┐  ┌──────────────┐  ┌──────────────┐  ┌──────────────────┐
│  Inference   │  │  Blockchain  │  │  Polymarket  │  │  Deployment API  │
│  Service     │  │  Service     │  │  Service     │  │  (Agents)        │
│  (Python)    │  │  (Clanker)   │  │  (Orders,    │  │  Build, deploy,  │
│  Qwen3 local │  │  State, V4   │  │  Redeem,     │  │  manage agents   │
│  or Router   │  │  swaps       │  │  Snapshot)   │  │                  │
└──────────────┘  └──────────────┘  └──────────────┘  └──────────────────┘
```
Services:
- Inference Service: Python FastAPI server with self-hosted Qwen3-0.6B, or use the Router Service (https://router.theobelisk.ai) for hosted LLMs (e.g. Mistral). In the Inference Config node, set `endpoint_url` to `https://router.theobelisk.ai` (the canonical default). If your router is behind a path-based proxy or the service docs specify a `/v1` base path, use `https://router.theobelisk.ai/v1` instead. Set `agent_id` (e.g. `clawballs`) for the agent to use.
- Blockchain Service: Clanker state API, launch summary, V4 swaps (CabalSwapper); workflows read token/pool data and execute buys/sells.
- Polymarket Service: CLOB orders, redeem positions, market snapshot, probability model; used by Polymarket Sniper workflows.
- Deployment Layer: Deploy workflows as Docker agents from the UI; manage running agents at `/deployments`.
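The `endpoint_url` choice above (root URL vs. `/v1` base path) can be captured in a small helper. This is a minimal sketch, assuming only the two documented Inference Config fields (`endpoint_url`, `agent_id`); the dict layout is illustrative, not the engine's internal schema:

```python
# Sketch: pick the Router endpoint for the Inference Config node.
# `endpoint_url` and `agent_id` are the documented fields; everything
# else here is illustrative.

def inference_config(use_v1_base_path: bool = False,
                     agent_id: str = "clawballs") -> dict:
    """Return Inference Config values pointing at the hosted Router service."""
    base = "https://router.theobelisk.ai"
    return {
        # Append /v1 only when the router sits behind a path-based proxy
        # or the service docs specify a /v1 base path.
        "endpoint_url": f"{base}/v1" if use_v1_base_path else base,
        "agent_id": agent_id,
    }

print(inference_config())
# {'endpoint_url': 'https://router.theobelisk.ai', 'agent_id': 'clawballs'}
```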
The Deployment API (build, deploy, manage agents) is separate from the PM2-managed group: PM2 starts/stops only core, inference, blockchain, and polymarket. The Deployment API must be deployed and managed outside PM2. When self-hosting it, configure the service with the required settings (e.g. base URL, authentication tokens if applicable) and run it on a standalone VM, in a container (Docker), or on Kubernetes. The UI expects the deployment service at the URL configured in your environment (e.g. api.theobelisk.ai in production). See docker/README.md for agent container and deploy endpoint details.
The UI is a visual node editor (like ComfyUI). The Execution Engine is a TypeScript runtime that processes workflows node-by-node and runs agents in Docker containers.
- Visual Workflow Editor: Drag-and-drop node-based editor to design agent logic
- Self-Hosted LLM: Qwen3-0.6B with thinking mode, no external API required; or use the Router Service (https://router.theobelisk.ai) to hook up Mistral or other hosted LLMs via Inference Config (`endpoint_url`: `https://router.theobelisk.ai`, `agent_id`: e.g. `clawballs`)
- Autonomous Agents: Deploy workflows as long-running Docker containers
- Telegram Integration: Listener and sender nodes for building Telegram bots
- Conversation Memory: Persistent memory with automatic summarization
- Binary Intent: Yes/no decision nodes for conditional workflow logic
- Wallet Authentication: Privy-based wallet connect for managing deployed agents
- Clanker / Blockchain: Blockchain service (obelisk-blockchain), Blockchain Config node, Clanker Launch Summary, Wallet, Clanker Buy/Sell (V4 swaps via CabalSwapper), Action Router; onSwap trigger (last_swap.json) feeds Bag Checker (profit/stop-loss) → Clanker Sell
- Polymarket: Polymarket service (polymarket-service): CLOB orders, redeem, snapshot, probability model; Polymarket Sniper template and nodes
- Scheduling: Cron-like scheduling nodes for periodic tasks
- One-Click Deploy: Deploy agents from the UI with environment variable injection
- Node.js 20+ and npm
- Python 3.10+ (a CUDA-capable GPU is required only for local self-hosted Qwen inference; not required when using Router-hosted LLMs, e.g. https://router.theobelisk.ai)
- Docker (for running deployed agents)
```bash
git clone https://github.com/ohnodev/obelisk-core.git
cd obelisk-core
```

The inference service hosts the LLM model and serves it via a REST API. Skip this step if you use the Router service (https://router.theobelisk.ai) for hosted LLMs; a GPU is only required for local self-hosted Qwen inference.
```bash
# Create Python venv and install dependencies
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt

# Configure (optional: defaults work for local dev)
cp .env.example .env
# Edit .env if you want to set an API key or change the port

# Start the inference service
python3 -m uvicorn src.inference.server:app --host 127.0.0.1 --port 7780
```

The first run downloads the Qwen3-0.6B model (~600 MB). Once running, test it:
```bash
curl http://localhost:7780/health
```

For Clanker or Polymarket workflows you need the blockchain and polymarket services. For local dev that only uses the default Telegram/inference flow, you can skip this step.
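Because the first run blocks on the model download, it can help to poll `/health` before queueing work. A minimal sketch, assuming only that the endpoint answers with HTTP 200 once the service is ready (the helper itself is illustrative, not part of the repo):

```python
# Sketch: wait for the inference service to come up before sending requests.
# Assumes only that GET /health returns HTTP 200 once the model is loaded.
import time
import urllib.error
import urllib.request

def wait_for_health(url: str = "http://localhost:7780/health",
                    attempts: int = 30, delay: float = 2.0) -> bool:
    """Poll the health endpoint; return True once it responds with 200."""
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # service not up yet (model may still be downloading)
        time.sleep(delay)
    return False
```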
Option A (recommended, PM2): start all services, including blockchain and polymarket:

```bash
./pm2-manager.sh start
```

Option B (without PM2): start each service from its directory (see blockchain-service/README.md and polymarket-service/README.md). For example, from the repo root, build and run the blockchain service on port 8888 and the polymarket service on port 1110.
```bash
cd ts
npm install
npm run build
cd ..
```

```bash
cd ui
npm install
npm run dev
```

Open http://localhost:3000 in your browser. You should see the visual workflow editor.
- The default workflow is pre-loaded; it includes a Telegram bot setup
- Click Queue Prompt (▶) to execute the workflow
- The output appears in the output nodes on the canvas
We provide a pm2-manager.sh script that manages all services (core, inference, blockchain, polymarket):
```bash
# Start everything
./pm2-manager.sh start

# Restart services (clears logs)
./pm2-manager.sh restart

# Stop everything
./pm2-manager.sh stop

# View status
./pm2-manager.sh status

# View logs
./pm2-manager.sh logs
```

PM2 keeps the core API, inference, blockchain, and polymarket services running, auto-restarts on crashes, and manages log files.
Agents are workflows packaged into Docker containers that run autonomously.
```bash
docker build -t obelisk-agent:latest -f docker/Dockerfile .
```

- Connect your wallet in the UI toolbar
- Design your workflow (or use the default)
- Click Deploy; the UI sends the workflow to your deployment service
- The agent runs in a Docker container on your machine
- Manage running agents at `/deployments`
When running agents in Docker, the container must reach host services. Set INFERENCE_SERVICE_URL, BLOCKCHAIN_SERVICE_URL, and POLYMARKET_SERVICE_URL to point at the host (e.g. host.docker.internal with the appropriate ports). On native Linux, host.docker.internal is not defined by default β add --add-host=host.docker.internal:host-gateway so it resolves. Docker Compose users: add extra_hosts: ["host.docker.internal:host-gateway"] to the service for the same effect.
```bash
docker run -d \
  --add-host=host.docker.internal:host-gateway \
  --name my-agent \
  -e WORKFLOW_JSON='<your workflow JSON>' \
  -e AGENT_ID=agent-001 \
  -e AGENT_NAME="My Bot" \
  -e INFERENCE_SERVICE_URL=http://host.docker.internal:7780 \
  -e BLOCKCHAIN_SERVICE_URL=http://host.docker.internal:8888 \
  -e POLYMARKET_SERVICE_URL=http://host.docker.internal:1110 \
  -e TELEGRAM_BOT_TOKEN=your_token \
  obelisk-agent:latest
```

See docker/README.md for full details on environment variables, resource limits, and Docker Compose.
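If you launch many agents, the flag list above can be assembled programmatically so launches stay consistent. A sketch that mirrors the exact flags shown (service URLs and image name are copied from the example; the helper itself is illustrative):

```python
# Sketch: build the `docker run` invocation above as an argv list,
# suitable for subprocess.run(). Values mirror the README example.

def agent_run_command(agent_id: str, agent_name: str,
                      workflow_json: str, bot_token: str) -> list[str]:
    """Return the argv for launching one agent container."""
    env = {
        "WORKFLOW_JSON": workflow_json,
        "AGENT_ID": agent_id,
        "AGENT_NAME": agent_name,
        "INFERENCE_SERVICE_URL": "http://host.docker.internal:7780",
        "BLOCKCHAIN_SERVICE_URL": "http://host.docker.internal:8888",
        "POLYMARKET_SERVICE_URL": "http://host.docker.internal:1110",
        "TELEGRAM_BOT_TOKEN": bot_token,
    }
    cmd = ["docker", "run", "-d",
           # Needed on native Linux so host.docker.internal resolves
           "--add-host=host.docker.internal:host-gateway",
           "--name", agent_id]
    for key, value in env.items():
        cmd += ["-e", f"{key}={value}"]
    return cmd + ["obelisk-agent:latest"]
```

Pass the result to `subprocess.run(..., check=True)` to start the container.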
| Node | Description |
|---|---|
| Text | Static text input/output |
| Inference | Calls the LLM via the inference service |
| Inference Config | Configures model parameters (temperature, max tokens, thinking mode) |
| Binary Intent | Yes/no classification for conditional logic |
| Telegram Listener | Polls for incoming Telegram messages |
| TG Send Message | Sends messages via Telegram Bot API (supports quote-reply) |
| Memory Creator | Creates conversation summaries |
| Memory Selector | Retrieves relevant memories for context |
| Memory Storage | Persists memories to storage |
| Telegram Memory Creator | Telegram-specific memory summarization |
| Telegram Memory Selector | Telegram-specific memory retrieval |
| Scheduler | Cron-based scheduling for periodic execution |
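To make the node roles concrete, here is an illustrative echo-bot graph wiring three of the nodes above. Note this is only a sketch: the editor serializes workflows in its own (litegraph-based) format, so the `nodes`/`links` field names here are hypothetical, not the real schema.

```python
# Illustrative only: a minimal Telegram echo-bot graph using node types
# from the table above. Field names are hypothetical, not the editor's
# actual workflow schema.
import json

workflow = {
    "nodes": [
        {"id": 1, "type": "Telegram Listener"},  # poll incoming messages
        {"id": 2, "type": "Inference"},          # generate a reply via the LLM
        {"id": 3, "type": "TG Send Message"},    # send the reply back
    ],
    "links": [
        {"from": 1, "to": 2},  # message text -> prompt
        {"from": 2, "to": 3},  # completion -> outgoing message
    ],
}

print(json.dumps(workflow, indent=2))
```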
```
obelisk-core/
├── src/inference/        # Python inference service (FastAPI + PyTorch)
│   ├── server.py         # REST API server
│   ├── model.py          # LLM loading and generation
│   ├── queue.py          # Async request queue
│   └── config.py         # Inference configuration
├── ts/                   # TypeScript execution engine
│   ├── src/
│   │   ├── core/         # Workflow runner, node execution
│   │   │   └── execution/
│   │   │       ├── runner.ts
│   │   │       └── nodes/  # All node implementations
│   │   └── utils/        # JSON parsing, logging, etc.
│   └── tests/            # Vitest test suite
├── blockchain-service/   # Clanker state API, block processing, V4 swaps
├── polymarket-service/   # CLOB orders, redeem, market snapshot, probability model
├── ui/                   # Next.js visual workflow editor
│   ├── app/              # Pages (editor, deployments)
│   ├── components/       # React components (Canvas, Toolbar, nodes)
│   └── lib/              # Utilities (litegraph, wallet, API config)
├── docker/               # Dockerfile and compose for agent containers
├── pm2-manager.sh        # PM2 process manager (core, inference, blockchain, polymarket)
├── requirements.txt      # Python deps (inference service only)
└── .env.example          # Environment variable template
```
Copy .env.example to .env:

```bash
cp .env.example .env
```

Key variables:

| Variable | Description | Default |
|---|---|---|
| `INFERENCE_HOST` | Inference service bind address | `127.0.0.1` |
| `INFERENCE_PORT` | Inference service port | `7780` |
| `INFERENCE_API_KEY` | API key for inference auth (optional for local dev) | (unset) |
| `INFERENCE_DEVICE` | PyTorch device (`cuda`, `cpu`) | auto-detect |
| `INFERENCE_SERVICE_URL` | URL agents use to reach inference | `http://localhost:7780` |
| `BLOCKCHAIN_SERVICE_URL` | Blockchain service (Clanker state, etc.) | `http://localhost:8888` |
| `POLYMARKET_SERVICE_URL` | Polymarket service (orders, redeem, snapshot) | `http://localhost:1110` |
| `TELEGRAM_DEV_AGENT_BOT_TOKEN` | Default Telegram bot token for dev | (unset) |
| `TELEGRAM_CHAT_ID` | Default Telegram chat ID for dev | (unset) |
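A consumer can resolve the service URLs with the table's defaults as fallbacks. A minimal sketch (the helper name is illustrative; the variable names and defaults are taken from the table above):

```python
# Sketch: resolve service URLs with the documented defaults as fallbacks.
import os

def service_urls() -> dict:
    """Read the service-URL env vars, falling back to the README defaults."""
    return {
        "inference": os.environ.get("INFERENCE_SERVICE_URL", "http://localhost:7780"),
        "blockchain": os.environ.get("BLOCKCHAIN_SERVICE_URL", "http://localhost:8888"),
        "polymarket": os.environ.get("POLYMARKET_SERVICE_URL", "http://localhost:1110"),
    }
```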
For remote inference setup (GPU VPS), see INFERENCE_SERVER_SETUP.md.
- Quick Start Guide: Get running in 5 minutes
- Inference API: Inference service endpoints
- Inference Server Setup: Deploy inference on a GPU VPS
- Docker Agents: Build and run agent containers
- UI Guide: Visual workflow editor
- Contributing: How to contribute
- Security: Security best practices
- Changelog: Version history
This project is licensed under the MIT License; see the LICENSE file for details.
Contributions are welcome! See CONTRIBUTING.md for guidelines.
Built with ❤️ by The Obelisk
