A 1-dimensional adaptive testing system that dynamically selects GRE-style questions based on a student's real-time performance using Item Response Theory (IRT), then generates a personalized AI-powered study plan via Gemini 1.5 Flash.
Deployed on Render:
https://adaptive-engine.onrender.com
Interactive API Docs: https://adaptive-engine.onrender.com/docs
| Layer | Technology |
|---|---|
| Backend | FastAPI (Python) |
| Database | MongoDB Atlas |
| Adaptive Algorithm | 1PL Item Response Theory |
| AI Study Plan | Google Gemini 1.5 Flash |
| Deployment | Render (free tier) |
```
adaptive-engine/
├── app/
│   ├── main.py              # FastAPI app + CORS
│   ├── routes/
│   │   └── adaptive.py      # All API endpoints
│   ├── models/
│   │   └── schemas.py       # Pydantic models
│   ├── db/
│   │   └── connection.py    # MongoDB client
│   ├── algorithm/
│   │   └── irt.py           # IRT math (pure Python, no side effects)
│   └── ai/
│       └── study_plan.py    # Gemini 1.5 Flash integration
├── seed/
│   └── seed_questions.py    # Seeds 25 GRE questions
├── .env.example
├── requirements.txt
└── README.md
```
```bash
git clone https://github.com/YOUR_USERNAME/adaptive-engine.git
cd adaptive-engine
python -m venv venv
source venv/bin/activate   # Windows: venv\Scripts\activate
pip install -r requirements.txt
cp .env.example .env
```

Edit `.env`:

```
MONGODB_URI=mongodb+srv://user:password@cluster0.xxxxx.mongodb.net/adaptive_engine
GEMINI_API_KEY=AIzaSy...your_key_here
```

Seed the database, then start the server:

```bash
python seed/seed_questions.py
uvicorn app.main:app --reload
```

Visit http://localhost:8000/docs for the interactive Swagger UI.
Creates a new student session at baseline ability 0.5.
Response:
```json
{
  "session_id": "uuid-here",
  "message": "Session started. Call GET /api/next-question?session_id=<id> to begin."
}
```

Returns the next adaptive question, selected by IRT to be closest in difficulty to the student's current ability score.
Response:
```json
{
  "session_id": "...",
  "question_number": 1,
  "question_id": "ALG001",
  "text": "If 2x + 4 = 10, what is x?",
  "options": {"A": "2", "B": "3", "C": "4", "D": "5"},
  "topic": "Algebra",
  "difficulty": 0.1
}
```

Submits the student's answer. Updates the ability score using IRT.
Request body:
```json
{
  "session_id": "uuid-here",
  "question_id": "ALG001",
  "selected_answer": "B"
}
```

Response:
```json
{
  "is_correct": true,
  "correct_answer": "B",
  "updated_ability_score": 0.55,
  "questions_answered": 1,
  "is_complete": false,
  "message": "Correct! Next question will be slightly harder."
}
```

Available after all 10 questions are answered. Calls Gemini 1.5 Flash to generate a personalized 3-step study plan.
Response:
```json
{
  "session_id": "...",
  "ability_score": 0.63,
  "total_questions": 10,
  "correct_count": 6,
  "weak_topics": ["Vocabulary", "Data Analysis"],
  "study_plan": "## Your 3-Step Study Plan\n\n**Step 1: ...**"
}
```

Debug endpoint to inspect the full session state, including all answers.
The system uses the 1-Parameter Logistic (1PL) IRT model.
```
P(correct | θ, b) = 1 / (1 + e^(-(θ - b)))
```

- θ (theta): student ability score in [0.1, 1.0], starts at 0.5
- b: question difficulty in [0.1, 1.0]
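The 1PL probability above can be sketched in pure Python. This is an illustrative version, not necessarily the exact code in `app/algorithm/irt.py` (the function name `probability_correct` is assumed):

```python
import math

def probability_correct(theta: float, b: float) -> float:
    """1PL IRT: probability that a student with ability theta
    answers a question of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# When ability equals difficulty, the model predicts a 50% chance:
probability_correct(0.5, 0.5)  # 0.5
```

Note that because θ and b share the same [0.1, 1.0] scale, the exponent θ − b stays small, so predicted probabilities never stray far from 0.5.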
After each answer, ability is updated using gradient ascent on the log-likelihood:
```
θ_new = θ_old + α × (actual - P)
```

- α = 0.1 (learning rate; conservative for stability)
- actual = 1 if correct, 0 if incorrect
- When a student answers correctly, actual (1) > P, so θ increases
- When incorrect, actual (0) < P, so θ decreases
- The magnitude of change is naturally smaller when the result is "expected" (e.g., a strong student getting an easy question right barely increases their score)
- This is more principled than naive ±0.1 jumps
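A minimal sketch of this update step (names are illustrative, not taken from `app/algorithm/irt.py`; clamping θ to the documented [0.1, 1.0] range is an assumption consistent with the stated score bounds):

```python
import math

ALPHA = 0.1  # learning rate from the formula above

def update_ability(theta: float, b: float, correct: bool) -> float:
    """Gradient-ascent step on the 1PL log-likelihood."""
    p = 1.0 / (1.0 + math.exp(-(theta - b)))  # predicted P(correct)
    actual = 1.0 if correct else 0.0
    theta_new = theta + ALPHA * (actual - p)
    # Keep the score inside the documented [0.1, 1.0] range (assumed clamp).
    return max(0.1, min(1.0, theta_new))

# A correct answer at matched difficulty nudges ability up by exactly 0.05,
# matching the 0.5 -> 0.55 example in the answer response above:
update_ability(0.5, 0.5, True)  # 0.5 + 0.1 * (1 - 0.5) = 0.55
```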
The next question selected is the one whose difficulty b is closest to the student's current θ. This maximises measurement information per question.
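Selection can be sketched as a minimum-distance search over unseen questions. The helper and the `GEO007`/`VOC012` IDs below are hypothetical (only `ALG001` appears in the API examples); field names mirror the API responses:

```python
def select_next_question(theta, questions, answered_ids):
    """Pick the unanswered question whose difficulty is closest to theta."""
    candidates = [q for q in questions if q["question_id"] not in answered_ids]
    return min(candidates, key=lambda q: abs(q["difficulty"] - theta))

# Hypothetical mini question bank:
bank = [
    {"question_id": "ALG001", "difficulty": 0.1},
    {"question_id": "GEO007", "difficulty": 0.5},
    {"question_id": "VOC012", "difficulty": 0.9},
]
select_next_question(0.55, bank, {"ALG001"})  # -> GEO007 (|0.5 - 0.55| is smallest)
```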
Claude (Anthropic): Used extensively for:
- Architecting the clean modular project structure
- Implementing the IRT algorithm with proper mathematical grounding
- Writing the Gemini prompt to elicit structured, data-driven study plans
- Generating all 25 GRE-style seed questions with calibrated difficulty scores
- Writing the comprehensive README
Challenges AI couldn't solve:
- Calibrating question difficulty scores required domain judgment: AI suggested values that needed manual review to ensure they felt authentically GRE-appropriate
- MongoDB Atlas network access configuration (IP whitelisting for Render) required manual setup in the Atlas dashboard; AI could explain the steps but couldn't perform them
- Push the repo to GitHub
- Go to render.com → New Web Service
- Connect your GitHub repo
- Build Command: `pip install -r requirements.txt`
- Start Command: `uvicorn app.main:app --host 0.0.0.0 --port $PORT`
- Add environment variables: `MONGODB_URI`, `GEMINI_API_KEY`
- Deploy → get your `*.onrender.com` URL
Note: Free tier spins down after 15 min inactivity. First request after idle has a ~30s cold start.