
# 🧠 Adaptive Diagnostic Engine

A one-dimensional adaptive testing system that dynamically selects GRE-style questions based on a student's real-time performance using Item Response Theory (IRT), then generates a personalized, AI-powered study plan via Gemini 1.5 Flash.


## 🚀 Live Demo

- Deployed on Render: https://adaptive-engine.onrender.com
- Interactive API docs: https://adaptive-engine.onrender.com/docs


## ⚙️ Tech Stack

| Layer | Technology |
| --- | --- |
| Backend | FastAPI (Python) |
| Database | MongoDB Atlas |
| Adaptive algorithm | 1PL Item Response Theory |
| AI study plan | Google Gemini 1.5 Flash |
| Deployment | Render (free tier) |

๐Ÿ“ Project Structure

```
adaptive-engine/
├── app/
│   ├── main.py              # FastAPI app + CORS
│   ├── routes/
│   │   └── adaptive.py      # All API endpoints
│   ├── models/
│   │   └── schemas.py       # Pydantic models
│   ├── db/
│   │   └── connection.py    # MongoDB client
│   ├── algorithm/
│   │   └── irt.py           # IRT math (pure Python, no side effects)
│   └── ai/
│       └── study_plan.py    # Gemini 1.5 Flash integration
├── seed/
│   └── seed_questions.py    # Seeds 25 GRE questions
├── .env.example
├── requirements.txt
└── README.md
```

## 🛠️ Local Setup

### 1. Clone the repo

```bash
git clone https://github.com/YOUR_USERNAME/adaptive-engine.git
cd adaptive-engine
```

### 2. Create a virtual environment

```bash
python -m venv venv
source venv/bin/activate        # Windows: venv\Scripts\activate
```

### 3. Install dependencies

```bash
pip install -r requirements.txt
```

### 4. Configure environment variables

```bash
cp .env.example .env
```

Edit `.env`:

```
MONGODB_URI=mongodb+srv://user:password@cluster0.xxxxx.mongodb.net/adaptive_engine
GEMINI_API_KEY=AIzaSy...your_key_here
```

### 5. Seed the database

```bash
python seed/seed_questions.py
```

### 6. Run the server

```bash
uvicorn app.main:app --reload
```

Visit http://localhost:8000/docs for the interactive Swagger UI.


## 📡 API Documentation

### `POST /api/start-session`

Creates a new student session at baseline ability 0.5.

**Response:**

```json
{
  "session_id": "uuid-here",
  "message": "Session started. Call GET /api/next-question?session_id=<id> to begin."
}
```

### `GET /api/next-question?session_id=<id>`

Returns the next adaptive question, selected by IRT to be the one closest in difficulty to the student's current ability score.

**Response:**

```json
{
  "session_id": "...",
  "question_number": 1,
  "question_id": "ALG001",
  "text": "If 2x + 4 = 10, what is x?",
  "options": {"A": "2", "B": "3", "C": "4", "D": "5"},
  "topic": "Algebra",
  "difficulty": 0.1
}
```

### `POST /api/submit-answer`

Submits the student's answer and updates the ability score using IRT.

**Request body:**

```json
{
  "session_id": "uuid-here",
  "question_id": "ALG001",
  "selected_answer": "B"
}
```

**Response:**

```json
{
  "is_correct": true,
  "correct_answer": "B",
  "updated_ability_score": 0.55,
  "questions_answered": 1,
  "is_complete": false,
  "message": "Correct! Next question will be slightly harder."
}
```

### `GET /api/study-plan?session_id=<id>`

Available after all 10 questions are answered. Calls Gemini 1.5 Flash to generate a personalized 3-step study plan.

**Response:**

```json
{
  "session_id": "...",
  "ability_score": 0.63,
  "total_questions": 10,
  "correct_count": 6,
  "weak_topics": ["Vocabulary", "Data Analysis"],
  "study_plan": "## Your 3-Step Study Plan\n\n**Step 1: ...**"
}
```
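Internally, this endpoint has to turn the session statistics into a prompt for Gemini. A hypothetical sketch of that assembly step (the actual prompt in `app/ai/study_plan.py` may differ; the function name here is illustrative):

```python
def build_study_plan_prompt(ability: float, correct: int, total: int,
                            weak_topics: list[str]) -> str:
    """Assemble a data-driven prompt asking Gemini for a 3-step plan."""
    topics = ", ".join(weak_topics) or "none identified"
    return (
        f"A GRE student finished a {total}-question adaptive diagnostic.\n"
        f"Final ability score: {ability:.2f} (scale 0.1-1.0). "
        f"Correct answers: {correct}/{total}. Weak topics: {topics}.\n"
        "Write a personalized 3-step study plan in Markdown, "
        "referencing the weak topics explicitly."
    )

# Using the numbers from the example response above:
prompt = build_study_plan_prompt(0.63, 6, 10, ["Vocabulary", "Data Analysis"])
```

Grounding the prompt in concrete per-topic results is what lets the model produce a plan that addresses the weak topics rather than generic advice.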

### `GET /api/session-status?session_id=<id>`

Debug endpoint to inspect the full session state, including all answers.


## 🧮 Adaptive Algorithm Logic

The system uses the 1-Parameter Logistic (1PL) IRT model.

### Probability of Correct Response

```
P(correct | θ, b) = 1 / (1 + e^(-(θ - b)))
```

- **θ (theta):** student ability score in [0.1, 1.0], starting at 0.5
- **b:** question difficulty in [0.1, 1.0]
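In Python, the formula above amounts to the following (a minimal sketch; the repo's `app/algorithm/irt.py` may name things differently):

```python
import math

def p_correct(theta: float, b: float) -> float:
    """1PL (Rasch) probability that a student with ability `theta`
    answers a question of difficulty `b` correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# A baseline student (theta = 0.5) facing a question of equal difficulty
# is at exactly a coin flip:
p_correct(0.5, 0.5)  # -> 0.5
```

When ability exceeds difficulty the probability rises above 0.5, and falls below it in the opposite case.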

### Ability Score Update

After each answer, ability is updated using a gradient-ascent step on the log-likelihood:

```
θ_new = θ_old + α × (actual - P)
```

- **α = 0.1:** learning rate, kept conservative for stability
- **actual:** 1 if correct, 0 if incorrect
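A minimal sketch of this update rule (the clamp to the [0.1, 1.0] ability range is an assumption based on the bounds stated above):

```python
import math

ALPHA = 0.1  # learning rate

def p_correct(theta: float, b: float) -> float:
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def update_ability(theta: float, b: float, correct: bool) -> float:
    """One gradient-ascent step on the 1PL log-likelihood,
    clamped to the [0.1, 1.0] ability range."""
    actual = 1.0 if correct else 0.0
    theta_new = theta + ALPHA * (actual - p_correct(theta, b))
    return min(1.0, max(0.1, theta_new))

# A baseline student answers a medium question correctly:
update_ability(0.5, 0.5, True)  # 0.5 + 0.1 * (1 - 0.5) = 0.55
```

Note this reproduces the `updated_ability_score` of 0.55 shown in the `/api/submit-answer` example response.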

### Why This Works

- When a student answers correctly, actual (1) > P, so θ increases.
- When incorrect, actual (0) < P, so θ decreases.
- The magnitude of the change is naturally smaller when the result is "expected" (e.g., a strong student getting an easy question right barely raises their score).
- This is better grounded mathematically than naive fixed +0.1/-0.1 jumps, because the step size reflects how surprising each answer is.

### Question Selection

The next question chosen is the unanswered one whose difficulty b is closest to the student's current θ. Selecting items near the ability estimate maximizes the measurement information gained per question.
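The selection rule can be sketched as a nearest-difficulty search over the unanswered pool (the question IDs other than `ALG001` are hypothetical):

```python
def select_next_question(questions: list[dict], theta: float,
                         answered_ids: set[str]) -> dict:
    """Pick the unanswered question whose difficulty is closest to theta."""
    candidates = [q for q in questions if q["question_id"] not in answered_ids]
    return min(candidates, key=lambda q: abs(q["difficulty"] - theta))

pool = [
    {"question_id": "ALG001", "difficulty": 0.1},
    {"question_id": "GEO004", "difficulty": 0.5},
    {"question_id": "VOC009", "difficulty": 0.9},
]
# theta = 0.55, ALG001 already answered:
select_next_question(pool, 0.55, {"ALG001"})  # -> the 0.5-difficulty question
```

Excluding already-answered questions keeps the 10-question session from repeating items even when one difficulty value dominates the pool.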


## 🤖 AI Log: How AI Tools Were Used

**Claude (Anthropic)** was used extensively for:

- Architecting the clean, modular project structure
- Implementing the IRT algorithm with proper mathematical grounding
- Writing the Gemini prompt to elicit structured, data-driven study plans
- Generating all 25 GRE-style seed questions with calibrated difficulty scores
- Writing this README

**Challenges AI couldn't solve:**

- Calibrating question difficulty scores required domain judgment: AI suggested values that needed manual review to ensure they felt authentically GRE-appropriate.
- MongoDB Atlas network access configuration (IP whitelisting for Render) required manual setup in the Atlas dashboard: AI could explain the steps but couldn't perform them.

๐ŸŒ Deployment on Render

1. Push the repo to GitHub
2. Go to render.com → **New Web Service**
3. Connect your GitHub repo
4. **Build command:** `pip install -r requirements.txt`
5. **Start command:** `uvicorn app.main:app --host 0.0.0.0 --port $PORT`
6. Add environment variables: `MONGODB_URI`, `GEMINI_API_KEY`
7. Deploy → get your `*.onrender.com` URL

**Note:** the free tier spins down after 15 minutes of inactivity. The first request after idle has a ~30 s cold start.
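The dashboard steps above can also be captured in a `render.yaml` blueprint checked into the repo. This is a sketch under the assumption that the standard Render blueprint fields apply; verify against Render's blueprint documentation before relying on it:

```yaml
services:
  - type: web
    name: adaptive-engine
    env: python
    plan: free
    buildCommand: pip install -r requirements.txt
    startCommand: uvicorn app.main:app --host 0.0.0.0 --port $PORT
    envVars:
      - key: MONGODB_URI
        sync: false   # set the value in the Render dashboard, not in git
      - key: GEMINI_API_KEY
        sync: false
```

Keeping `sync: false` on the secrets ensures the MongoDB URI and Gemini key stay out of version control.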
