A powerful AI-powered command-line interface with LangGraph integration for intelligent, conversational development workflows.
- 🤖 AI Agent Mode - Intelligent task planning and execution using LangGraph
- 💬 Interactive Chat - Conversational AI assistant with memory
- 🔐 Secure Authentication - Device flow OAuth with Better-Auth
- 🛠️ Tool Calling - File operations, code execution, and web search
- 💾 Session Persistence - Continue conversations across sessions
- ⚡ Streaming Responses - Real-time AI output with progress indicators
- 🧠 Smart Memory - Sliding window + automatic summarization for unlimited context
- Node.js 18+
- npm or yarn
```bash
# Clone the repository
git clone https://github.com/your-username/apex-cli.git
cd apex-cli

# Install dependencies
cd server && npm install
cd ../client && npm install
```

- Create a `.env` file in the `server` directory:
```env
DATABASE_URL="postgresql://user:password@localhost:5432/apex"
SECRET_KEY="your-secret-key"
OPENROUTER_API_KEY="your-openrouter-api-key"
GOOGLE_CLIENT_ID="your-google-client-id"
GOOGLE_CLIENT_SECRET="your-google-client-secret"
BETTER_AUTH_URL="http://localhost:3000"
```

- Set up the database:
```bash
cd server
npx prisma generate
npx prisma db push
```

Development:

```bash
# Start the server
cd server && npm run dev

# Start the client (in another terminal)
cd client && npm run dev
```

CLI Usage:
```bash
# Login
apex login

# Start a chat session
apex chat

# Start an AI agent session
apex agent

# Configure settings
apex config
```

```
Apex-cli/
├── client/              # Next.js web client
│   ├── app/             # App router pages
│   ├── components/      # React components
│   └── lib/             # Utilities
├── server/              # Express server + CLI
│   └── src/
│       ├── cli/         # CLI commands
│       ├── config/      # Configuration
│       └── lib/
│           ├── langgraph/  # LangGraph AI agent
│           └── memory/     # Sliding window & summarization
└── docs/                # Documentation
```
| Command | Description |
|---|---|
| `apex login` | Authenticate with your account |
| `apex logout` | Sign out of current session |
| `apex chat` | Start interactive AI chat |
| `apex agent` | Start AI agent with tool calling |
| `apex config` | View/set configuration |
Apex CLI uses a smart memory system to maintain context across long conversations without hitting token limits.
```mermaid
flowchart LR
    A[All Messages] --> B[Conversation Store]
    B --> C{Message Count > Threshold?}
    C -->|Yes| D[Background Summarization]
    C -->|No| E[Keep as-is]
    D --> F[Summary + Recent Messages]
    E --> G[Sliding Window]
    F --> H[LLM Context]
    G --> H
```
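The flow above can be sketched in plain TypeScript: all messages are stored, but only a summary plus the most recent window is sent to the model. The names here (`Message`, `buildContext`) are illustrative, not the actual Apex CLI API.

```typescript
interface Message {
  role: "user" | "assistant" | "system";
  content: string;
}

function buildContext(
  history: Message[],
  summary: string | null,
  windowSize: number
): Message[] {
  const recent = history.slice(-windowSize); // sliding window of recent turns
  const context: Message[] = [];
  if (summary) {
    // Older turns are represented by their summary, not sent verbatim
    context.push({ role: "system", content: `Conversation so far: ${summary}` });
  }
  return context.concat(recent);
}

// Example: 25 stored messages, window of 10 → 11 messages reach the LLM
const history: Message[] = Array.from({ length: 25 }, (_, i) => ({
  role: i % 2 === 0 ? "user" : "assistant",
  content: `message ${i + 1}`,
}));
const context = buildContext(history, "User is building a CLI tool.", 10);
console.log(context.length); // 11 (1 summary + 10 recent)
```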
The sliding window strategy allows unlimited conversation history while only sending recent messages to the LLM:
- Store All: Every message is persisted to the database
- Retrieve Recent: Only the last N messages (default: 10) are sent to the LLM
- Configurable Window Size: Adjust via the `APEX_MEMORY_WINDOW_SIZE` environment variable
```javascript
// Example: Load last 10 messages for context
const { messages, summary, totalCount } = await getRecentMessages(sessionId, {
  windowSize: 10,
  includeSummary: true
});
```

When conversations get long, older messages are automatically summarized to preserve context:
- Threshold Trigger: Summarization runs when unsummarized messages exceed threshold (default: 20)
- Background Processing: Non-blocking summarization doesn't interrupt the chat
- Context Preservation: Key decisions, user preferences, and ongoing tasks are preserved
- Incremental Updates: New summaries build on previous ones
```javascript
// Summarization happens automatically, or can be triggered manually
await summarizeConversation(sessionId, {
  threshold: 20,  // Messages before summarization
  keepRecent: 10  // Recent messages to keep unsummarized
});
```

Configure memory behavior via environment variables:
| Variable | Default | Description |
|---|---|---|
| `APEX_MEMORY_WINDOW_SIZE` | `10` | Number of recent messages for LLM context |
| `APEX_SUMMARIZATION_THRESHOLD` | `20` | Messages before auto-summarization triggers |
| `APEX_ENABLE_SUMMARIZATION` | `true` | Enable/disable automatic summarization |
| `APEX_USE_DATABASE_MEMORY` | `true` | Use PostgreSQL for persistence |
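As a hedged sketch, these settings could be read from the environment with the documented defaults like so; the variable names match the table, but the loader itself is illustrative, not the actual Apex CLI config code.

```typescript
interface MemoryConfig {
  windowSize: number;
  summarizationThreshold: number;
  enableSummarization: boolean;
  useDatabaseMemory: boolean;
}

function loadMemoryConfig(env: Record<string, string | undefined>): MemoryConfig {
  return {
    // Fall back to the documented defaults when a variable is unset
    windowSize: Number(env.APEX_MEMORY_WINDOW_SIZE ?? 10),
    summarizationThreshold: Number(env.APEX_SUMMARIZATION_THRESHOLD ?? 20),
    enableSummarization: (env.APEX_ENABLE_SUMMARIZATION ?? "true") !== "false",
    useDatabaseMemory: (env.APEX_USE_DATABASE_MEMORY ?? "true") !== "false",
  };
}

const config = loadMemoryConfig({ APEX_MEMORY_WINDOW_SIZE: "20" });
console.log(config.windowSize);             // 20 (overridden)
console.log(config.summarizationThreshold); // 20 (default)
```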
- ✅ Unlimited Conversations - Chat forever without losing context
- ✅ Token Efficient - Only recent messages count toward the token limit
- ✅ Context Aware - Summaries preserve important information
- ✅ Persistent - Continue conversations across sessions
- ✅ Non-Blocking - Summarization runs in the background
The AI agent uses LangGraph for intelligent task execution:
- Planner - Analyzes tasks and creates execution plans
- Executor - Runs plans using available tools
- Reflector - Evaluates results and iterates
```mermaid
flowchart TD
    A[START] --> B[Planner Node]
    B --> C{Plan Type?}
    C -->|Complex Task| D[Executor Node]
    C -->|Simple Query| G[Direct Response]
    D --> E[Reflector Node]
    E --> F{Task Complete?}
    F -->|No - Needs More Work| B
    F -->|Yes - Success| H[END]
    G --> H

    subgraph Tools["Available Tools"]
        T1[readFile]
        T2[writeFile]
        T3[listDirectory]
        T4[executeCode]
        T5[searchWeb]
    end

    D -.-> Tools

    subgraph State["Agent State"]
        S1[messages]
        S2[plan]
        S3[executionResult]
        S4[reflection]
    end
```
Workflow Steps:

- User Input → Enters the agent via CLI command (`apex agent`)
- Planner → Analyzes the request and creates a step-by-step plan
- Executor → Executes each step using available tools
- Reflector → Evaluates results, determines if task is complete
- Loop or Complete → Returns to Planner if more work is needed, or outputs the final response
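The planner → executor → reflector loop can be sketched as a simple state machine. The real agent wires these up as LangGraph nodes; this plain-TypeScript version, with made-up `AgentState` fields and toy planner/executor/reflector functions, only illustrates the control flow.

```typescript
interface AgentState {
  plan: string[];
  results: string[];
  complete: boolean;
}

function planner(task: string): string[] {
  // In the real agent an LLM produces the plan; here we fake two steps.
  return [`analyze: ${task}`, `implement: ${task}`];
}

function executor(step: string): string {
  // Stand-in for running a plan step with the available tools.
  return `done(${step})`;
}

function reflector(state: AgentState): boolean {
  // Task is complete once every planned step has a result.
  return state.results.length >= state.plan.length;
}

function runAgent(task: string, maxIterations = 5): AgentState {
  const state: AgentState = { plan: planner(task), results: [], complete: false };
  for (let i = 0; i < maxIterations && !state.complete; i++) {
    const next = state.plan[state.results.length];
    state.results.push(executor(next)); // Executor node
    state.complete = reflector(state);  // Reflector node decides loop vs END
  }
  return state;
}

const finalState = runAgent("add a login command");
console.log(finalState.complete);       // true
console.log(finalState.results.length); // 2
```

The `maxIterations` cap mirrors a common safeguard in agent loops: without it, a reflector that never reports completion would loop forever.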
- `readFile` - Read file contents
- `writeFile` - Create or update files
- `listDirectory` - List directory contents
- `executeCode` - Run code snippets
- `searchWeb` - Search the internet
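As a rough sketch of how tools like these might be described to the agent, each tool pairs a name and description (shown to the LLM) with a handler the executor can invoke. The `Tool` interface below is illustrative, not LangChain's actual tool API, and the file tools use an in-memory map instead of the real filesystem.

```typescript
const files = new Map<string, string>(); // in-memory stand-in for the real fs

interface Tool {
  name: string;
  description: string;
  run: (input: string) => Promise<string>;
}

// Toy implementations of two of the tools above:
const tools: Tool[] = [
  {
    name: "writeFile",
    description: "Create or update files",
    run: async (input) => {
      const [path, ...content] = input.split("\n");
      files.set(path, content.join("\n"));
      return `wrote ${path}`;
    },
  },
  {
    name: "readFile",
    description: "Read file contents",
    run: async (path) => files.get(path) ?? `error: ${path} not found`,
  },
];

// The executor looks a tool up by name before invoking it:
async function demo() {
  const write = tools.find((t) => t.name === "writeFile")!;
  const read = tools.find((t) => t.name === "readFile")!;
  await write.run("notes.txt\nhello");
  console.log(await read.run("notes.txt")); // hello
}
demo();
```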
- Backend: Node.js, Express, Prisma
- Frontend: Next.js 14, React, Tailwind CSS
- AI: LangGraph, LangChain, OpenRouter
- Auth: Better-Auth with Device Flow
- Database: PostgreSQL
See the /docs directory for detailed documentation:
ISC