This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
```bash
# Using uv (recommended)
uv venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
uv pip install -e .

# Using pip
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
pip install -e .
```

- Keep Python dependency specifiers exact where this repository already pins them, including test extras and build-system.
- If `pyproject.toml` changes affect resolved packages, verify the result through the lock/install path used by CI.
- Do not introduce floating versions in CI or release automation when exact pins are practical.
```bash
# Basic run with stdio transport
python src/codealive_mcp_server.py

# With debug mode enabled
python src/codealive_mcp_server.py --debug

# With SSE transport
python src/codealive_mcp_server.py --transport sse --host 0.0.0.0 --port 8000

# With custom API key and base URL
python src/codealive_mcp_server.py --api-key YOUR_KEY --base-url https://custom.url

# Build Docker image
docker build -t codealive-mcp .

# Run with Docker
docker run --rm -i -e CODEALIVE_API_KEY=your_key_here codealive-mcp
```

After making local changes, quickly verify everything works:
```bash
# Using make (recommended)
make smoke-test

# Or directly
python smoke_test.py

# With valid API key for full testing
CODEALIVE_API_KEY=your_key python smoke_test.py
```

The smoke test:
- ✓ Verifies server starts and connects via stdio
- ✓ Checks all tools are registered correctly
- ✓ Tests each tool responds appropriately
- ✓ Validates parameter handling
- ✓ Runs in ~5 seconds
Run comprehensive unit tests with pytest:

```bash
# Using make
make unit-test

# Or directly
pytest src/tests/ -v

# With coverage
pytest src/tests/ -v --cov=src
```

Run both smoke tests and unit tests:

```bash
make test
```

This is a Model Context Protocol (MCP) server that provides AI clients with access to CodeAlive's semantic code search and analysis capabilities.
- `codealive_mcp_server.py`: Main server implementation using the FastMCP framework
- Three main tools: `codebase_consultant`, `codebase_search`, `get_data_sources`
- `CodeAliveContext`: Manages the HTTP client and API credentials
- Async lifespan management: Handles client setup/teardown
- FastMCP Framework: Uses modern async Python MCP implementation with lifespan context management
- HTTP Client Management: Single persistent httpx.AsyncClient with proper connection pooling
- Streaming Support: Implements streaming chat completions with proper chunk parsing
- Environment Configuration: Supports both .env files and command-line arguments with precedence
- Error Handling: Comprehensive HTTP status code handling with user-friendly error messages
- N8N Middleware: Strips extra parameters (sessionId, action, chatInput, toolCallId) from n8n tool calls before validation
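The n8n middleware behavior described above can be sketched roughly as follows. This is an illustrative sketch, not the repository's actual implementation; the function name and the shape of the arguments dict are assumptions:

```python
# Hypothetical sketch of the n8n middleware: drop n8n-specific
# parameters from a tool call's arguments before validation.
N8N_EXTRA_PARAMS = {"sessionId", "action", "chatInput", "toolCallId"}

def strip_n8n_params(arguments: dict) -> dict:
    """Return tool arguments with n8n's extra keys removed."""
    return {k: v for k, v in arguments.items() if k not in N8N_EXTRA_PARAMS}

# Example: only the real tool parameters survive.
cleaned = strip_n8n_params({
    "query": "auth flow",
    "sessionId": "abc123",
    "action": "sendMessage",
})
print(cleaned)  # {'query': 'auth flow'}
```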
- AI client connects to the MCP server via stdio/SSE transport
- Client calls tools (`get_data_sources` → `codebase_search` → `codebase_consultant`)
- The MCP server translates tool calls into CodeAlive API requests
- The CodeAlive API returns semantic search results or chat completions
- The server formats and returns results to the AI client
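The streaming chat completions mentioned above are typically delivered as server-sent events with JSON payloads. The sketch below shows one way such chunks can be parsed; the `data:`/`[DONE]` framing is an assumed OpenAI-style format, not necessarily the exact wire format of the CodeAlive API:

```python
import json

def parse_sse_chunks(stream_lines):
    """Yield parsed JSON payloads from SSE-style 'data: ...' lines.

    Illustrative sketch only: assumes each event line carries a JSON
    payload and the stream terminates with 'data: [DONE]'.
    """
    for line in stream_lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines and comments
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        yield json.loads(payload)

# Example with a fabricated two-chunk stream:
chunks = list(parse_sse_chunks([
    'data: {"delta": "Hel"}',
    'data: {"delta": "lo"}',
    "data: [DONE]",
]))
print("".join(c["delta"] for c in chunks))  # Hello
```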
- `CODEALIVE_API_KEY`: Required API key for the CodeAlive service
- `CODEALIVE_BASE_URL`: API base URL (defaults to https://app.codealive.ai)
- `CODEALIVE_IGNORE_SSL`: Set to disable SSL verification (debug mode)
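Configuration precedence (command-line argument over environment variable over default) can be resolved along these lines. This is a minimal sketch, not the repository's code; the function name is hypothetical:

```python
import os

def resolve_base_url(cli_value=None):
    """Resolve the base URL: a CLI argument wins, then the
    CODEALIVE_BASE_URL environment variable, then the documented
    default. Illustrative sketch only."""
    return (
        cli_value
        or os.environ.get("CODEALIVE_BASE_URL")
        or "https://app.codealive.ai"
    )

# With neither a CLI flag nor the env var set, the default is used.
os.environ.pop("CODEALIVE_BASE_URL", None)
print(resolve_base_url())                    # https://app.codealive.ai
print(resolve_base_url("https://custom.url"))  # https://custom.url
```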
- Repository: Individual code repositories with URL and repository ID
- Workspace: Collections of repositories accessible via workspace ID
- Tool calls can target specific repositories or entire workspaces for broader context
The server is designed to integrate with:
- Claude Desktop/Code (via settings.json configuration)
- Cursor (via MCP settings panel)
- VS Code with GitHub Copilot (via settings.json)
- Continue (via config.yaml)
- n8n (via AI Agent node with MCP tools)
- Any MCP-compatible AI client
Key integration considerations:
- AI clients should use `get_data_sources` first to discover available repositories/workspaces, then use those IDs for targeted search and chat operations
- n8n integration: the server includes middleware to automatically strip n8n's extra parameters (sessionId, action, chatInput, toolCallId) from tool calls, so n8n works out of the box without any special configuration
When making significant changes, consider incrementing the version in `pyproject.toml`:

```toml
version = "0.3.0"  # Increment for new features, bug fixes, or breaking changes
```

The project uses automated publishing:

- Trigger: Push a version change to the `main` branch
- Process: Tests → Build → Docker → MCP Registry → GitHub Release
- Result: Available at `io.github.codealive-ai/codealive-mcp` in the MCP Registry
- Patch (0.2.0 → 0.2.1): Bug fixes, minor improvements
- Minor (0.2.0 → 0.3.0): New features, enhancements
- Major (0.2.0 → 1.0.0): Breaking changes, major releases
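The bump rules above can be sketched as a small helper. This is purely illustrative; the release pipeline does not use such a function:

```python
def bump_version(version, part):
    """Bump a 'major.minor.patch' version string according to the
    semantic-versioning rules listed above. Illustrative sketch."""
    major, minor, patch = (int(x) for x in version.split("."))
    if part == "major":
        return f"{major + 1}.0.0"
    if part == "minor":
        return f"{major}.{minor + 1}.0"
    if part == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown part: {part}")

print(bump_version("0.2.0", "patch"))  # 0.2.1
print(bump_version("0.2.0", "minor"))  # 0.3.0
print(bump_version("0.2.0", "major"))  # 1.0.0
```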
When implementing features or fixes, evaluate whether they warrant a version bump so users can pick up the changes through the MCP Registry.