Axiom Code

Enterprise AI Development Platform

Axiom Code is a production-grade code generation system built on a multi-agent architecture with LangGraph orchestration. The platform employs specialized AI agents that collaborate to analyze requirements, design architecture, implement code, and validate quality, automating the complete software development lifecycle.


Overview

This system demonstrates advanced AI engineering concepts including:

  • Multi-agent orchestration using LangGraph state machines
  • Real-time WebSocket communication for live progress updates
  • Automated code quality validation and scoring
  • Dynamic model selection from Google's Gemini API
  • Stateful session management with automatic cleanup
  • Type-safe configuration with Pydantic validation

The platform was built as a proof of concept for enterprise AI applications and showcases the integration of modern LLM capabilities with production software engineering practices.

Architecture

Multi-Agent Pipeline

The system implements a four-stage agent pipeline with state management:

  1. Planner Agent - Analyzes natural language requirements and generates comprehensive project specifications including file structure, technology stack, and implementation roadmap.

  2. Architect Agent - Decomposes the project plan into discrete implementation tasks with explicit dependencies and context for each component.

  3. Coder Agent - Executes implementation tasks by generating production-ready code with intelligent retry logic and error recovery mechanisms.

  4. Validator Agent - Performs automated quality assurance including syntax validation, best practice checks, and code quality scoring with improvement recommendations.
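The four-stage flow above can be sketched as a sequential state machine. This is a plain-Python illustration of the control flow only, not the actual LangGraph graph; the agent bodies and state keys are stand-ins for the real LLM-backed nodes:

```python
# Minimal sketch of the four-stage pipeline as a sequential state machine.
# In the real system each stage is a LangGraph node backed by an LLM call;
# here each agent is a stub so the control flow is visible.

def planner(state):
    state["plan"] = f"plan for: {state['requirements']}"
    return state

def architect(state):
    state["tasks"] = [f"task derived from {state['plan']}"]
    return state

def coder(state):
    state["files"] = {f"file_{i}.py": t for i, t in enumerate(state["tasks"])}
    return state

def validator(state):
    state["score"] = 100 if state["files"] else 0
    return state

PIPELINE = [planner, architect, coder, validator]

def run_pipeline(requirements: str) -> dict:
    """Thread a shared state dict through each agent in order."""
    state = {"requirements": requirements}
    for agent in PIPELINE:
        state = agent(state)
    return state
```

Each agent reads the keys written by its predecessors and adds its own, which is the same context-passing pattern LangGraph formalizes with typed state schemas.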

(Architecture diagram: Axiom Code multi-agent pipeline)

Technical Stack

Backend:

  • Python 3.11+
  • LangGraph 0.6.3 - State machine orchestration
  • FastAPI 0.116.1 - Async web framework
  • LangChain Google GenAI 2.0.8 - LLM integration
  • Pydantic 2.11.7 - Type validation
  • WebSockets - Real-time communication

Frontend:

  • Vanilla JavaScript with modern ES6+ features
  • WebSocket API for live updates
  • Prism.js for syntax highlighting
  • Local storage for credential management

AI Provider:

  • Google Gemini API (gemini-2.0-flash, gemini-1.5-pro)
  • Dynamic model discovery and selection
  • Automatic rate limiting and quota management

Key Features

Multi-Agent Orchestration

  • State-driven workflow with LangGraph
  • Parallel agent execution where applicable
  • Inter-agent communication and context passing

Code Quality Assurance

  • Automated syntax validation for multiple languages
  • Best practice enforcement
  • Quality scoring algorithm (0-100 scale)
  • Retry mechanism for error correction
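A 0-100 quality score can combine syntax validity with simple heuristics. The following is a hypothetical sketch of the idea; the actual scoring algorithm lives in `agent/tools.py` and may weigh different signals:

```python
import ast

def score_python_source(source: str) -> int:
    """Hypothetical 0-100 quality score: syntax validity plus heuristics."""
    try:
        tree = ast.parse(source)
    except SyntaxError:
        return 0  # unparseable code scores zero
    score = 60  # base score for syntactically valid code
    # Reward a module docstring and per-function docstrings.
    if ast.get_docstring(tree):
        score += 10
    funcs = [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]
    if funcs and all(ast.get_docstring(f) for f in funcs):
        score += 20
    # Reward keeping lines within a readable width.
    if all(len(line) <= 100 for line in source.splitlines()):
        score += 10
    return min(score, 100)
```

A score below some threshold would trigger the retry mechanism, feeding the findings back to the Coder Agent for another attempt.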

Production-Ready Infrastructure

  • RESTful API with OpenAPI documentation
  • WebSocket support for real-time updates
  • Session-based project management
  • Automatic session cleanup (1-hour retention)
  • Structured logging with configurable levels
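The 1-hour session retention mentioned above amounts to a periodic sweep over creation timestamps. A simplified sketch, assuming an in-memory session dict (the actual cleanup task and field names in `api/server.py` may differ):

```python
import time

RETENTION_SECONDS = 3600  # 1-hour retention, as described above

sessions: dict = {}

def create_session(session_id: str) -> dict:
    """Register a new in-memory session stamped with its creation time."""
    session = {"created_at": time.time(), "files": {}}
    sessions[session_id] = session
    return session

def cleanup_expired(now: float = None) -> list:
    """Drop sessions older than the retention window; return the IDs removed."""
    now = time.time() if now is None else now
    expired = [sid for sid, s in sessions.items()
               if now - s["created_at"] > RETENTION_SECONDS]
    for sid in expired:
        del sessions[sid]
    return expired
```

In an async server this sweep would typically run on a background task every few minutes rather than on demand.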

Enterprise Security

  • Client-side credential storage only
  • No server-side API key persistence
  • Memory-only session credentials
  • HTTPS/TLS encryption ready

Installation

Prerequisites

  • Python 3.11 or higher
  • Google Gemini API key (available from Google AI Studio)

Setup

  1. Clone the repository:

     git clone <repository-url>
     cd axiom-code

  2. Install dependencies:

     pip install -r requirements.txt

  3. Configure environment (optional):

     cp .sample_env .env
     # Edit .env with your preferred settings

  4. Start the server:

     python -m uvicorn api.server:app --reload --host 0.0.0.0 --port 8001

  5. Access the web interface at http://localhost:8001

Usage

Web Interface (Recommended)

  1. Navigate to http://localhost:8001
  2. Enter your Google Gemini API key
  3. Select your preferred model (gemini-2.0-flash recommended)
  4. Enable/disable code validation as needed
  5. Describe your project requirements
  6. Monitor real-time agent workflow
  7. Review generated code and download project

CLI Interface

python main.py

Follow the interactive prompts to:

  • Enter project description
  • Select Gemini model
  • Enable validation (optional)
  • View generation progress
  • Access generated files in generated_projects/

Command-line options:

python main.py --prompt "Create a todo app" --output-dir my_project

Example Prompts

Create a REST API with authentication and database integration
Build a responsive dashboard with real-time data visualization
Develop a microservice for data processing and analytics
Create a todo list application using HTML, CSS, and JavaScript
Build a simple calculator web application
Create a blog API with FastAPI and SQLite database

Configuration

Environment Variables

# API Configuration
GEMINI_API_KEY=your_api_key_here

# Model Settings
MODEL_NAME=gemini-2.0-flash
PROVIDER=gemini

# Server Configuration
HOST=0.0.0.0
PORT=8001

# Logging
LOG_LEVEL=INFO
LOG_FILE=axiom_code.log

# Validation
MAX_RETRIES=2
RETRY_ENABLED=true
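These variables map onto a typed settings object. The project uses Pydantic for validation, but the idea can be sketched with a stdlib dataclass; the field names follow the variables above, while the loader signature and defaults are assumptions:

```python
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class Settings:
    """Typed view over the environment variables listed above."""
    gemini_api_key: str = ""
    model_name: str = "gemini-2.0-flash"
    host: str = "0.0.0.0"
    port: int = 8001
    log_level: str = "INFO"
    max_retries: int = 2
    retry_enabled: bool = True

def load_settings(env: dict = None) -> Settings:
    """Parse settings from a mapping (defaults to os.environ)."""
    env = os.environ if env is None else env
    return Settings(
        gemini_api_key=env.get("GEMINI_API_KEY", ""),
        model_name=env.get("MODEL_NAME", "gemini-2.0-flash"),
        host=env.get("HOST", "0.0.0.0"),
        port=int(env.get("PORT", "8001")),
        log_level=env.get("LOG_LEVEL", "INFO"),
        max_retries=int(env.get("MAX_RETRIES", "2")),
        retry_enabled=env.get("RETRY_ENABLED", "true").lower() == "true",
    )
```

Pydantic's `BaseSettings` provides the same pattern with richer coercion and validation errors, which is why the actual codebase prefers it.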

Supported Models

  • gemini-2.0-flash (recommended, fastest)
  • gemini-2.0-flash-exp
  • gemini-1.5-pro (most capable)
  • gemini-1.5-flash
  • Any Gemini model with generateContent capability
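Dynamic discovery comes down to listing models and keeping those that advertise the generateContent capability. A sketch of that filter, assuming the field names of the Gemini models-list response (the sample payload below is illustrative, not live API output):

```python
def supported_models(models: list) -> list:
    """Keep model names that advertise the generateContent capability."""
    return [
        m["name"].removeprefix("models/")
        for m in models
        if "generateContent" in m.get("supportedGenerationMethods", [])
    ]

# Illustrative payload shaped like a Gemini models-list response.
sample = [
    {"name": "models/gemini-2.0-flash",
     "supportedGenerationMethods": ["generateContent"]},
    {"name": "models/text-embedding-004",
     "supportedGenerationMethods": ["embedContent"]},
]
```

Filtering on capability rather than on a hard-coded model list is what lets the UI pick up new Gemini models automatically.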

Project Structure

axiom-code/
├── agent/
│   ├── __init__.py
│   ├── config.py          # Pydantic configuration
│   ├── graph.py           # LangGraph workflow definition
│   ├── prompts.py         # Agent system prompts
│   ├── states.py          # State management schemas
│   └── tools.py           # Code validation utilities
├── api/
│   ├── __init__.py
│   └── server.py          # FastAPI application
├── web/
│   ├── index.html         # Frontend application
│   ├── app.js             # Client-side logic
│   └── styles.css         # UI styling
├── main.py                # CLI entry point
├── requirements.txt       # Python dependencies
├── pyproject.toml         # Project metadata
└── README.md

API Documentation

Once the server is running, interactive API documentation is available at:

  • Swagger UI: http://localhost:8001/docs
  • ReDoc: http://localhost:8001/redoc

Key Endpoints

  • POST /api/generate - Start new project generation
  • GET /api/session/{session_id} - Get session status
  • GET /api/session/{session_id}/files - List generated files
  • GET /api/session/{session_id}/download - Download project ZIP
  • POST /api/models - Get available Gemini models
  • WebSocket /ws/{session_id} - Real-time updates
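For scripting against the API, the per-session endpoints follow a simple pattern. A hypothetical helper that builds the URLs listed above, assuming the default local server address:

```python
BASE_URL = "http://localhost:8001"

def session_endpoints(session_id: str) -> dict:
    """Build the per-session URLs for the endpoints listed above."""
    base = f"{BASE_URL}/api/session/{session_id}"
    return {
        "status": base,
        "files": f"{base}/files",
        "download": f"{base}/download",
        "websocket": f"ws://localhost:8001/ws/{session_id}",
    }
```

A client would poll `status`, or subscribe to `websocket` for push updates, then fetch `download` once generation completes.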

Development

Running Tests

# Install dev dependencies
pip install -r requirements-dev.txt

# Run tests
pytest tests/

Code Quality

# Type checking
mypy agent/ api/

# Linting
ruff check .

# Formatting
black agent/ api/ main.py

Deployment

Docker

docker build -t axiom-code .
docker run -p 8001:8001 -e GEMINI_API_KEY=your_key axiom-code

Cloud Platforms

Compatible with:

  • Railway - One-click deploy, auto-detects Python
  • Render - Free tier available, supports WebSockets
  • Google Cloud Run
  • AWS ECS
  • Azure Container Apps

Note: No server-side environment variables are required for deployment; users provide their own API keys through the UI.


Technical Highlights

For Technical Interviews

  1. State Machine Design - LangGraph implementation demonstrates understanding of workflow orchestration and state management patterns.

  2. Async Architecture - FastAPI backend with WebSocket support shows proficiency in asynchronous programming and real-time systems.

  3. Type Safety - Comprehensive Pydantic models throughout the codebase ensure runtime type validation.

  4. Error Handling - Multi-level error recovery with exponential backoff and intelligent retry mechanisms.

  5. API Integration - Dynamic model discovery from Google's API demonstrates third-party service integration skills.

  6. Code Quality - Automated validation system including custom scoring algorithms and best practice checks.


Limitations & Future Work

Current Limitations:

  • No persistent storage (sessions are memory-only)
  • Single-user architecture (no authentication)
  • Limited to text-based code generation
  • No version control integration

Planned Enhancements:

  • Database integration for project persistence
  • User authentication and project history
  • Git integration for version control
  • Support for additional LLM providers (OpenAI, Anthropic)
  • Real-time collaboration features
  • CI/CD pipeline generation

License

This project is licensed under the MIT License. See the LICENSE file for details.


Built with: Python | LangGraph | FastAPI | Google Gemini | WebSockets

Demonstrates: Multi-agent AI Systems | LLM Orchestration | Real-time Web Applications | Production Software Engineering
