Demo video: https://youtu.be/rBkxjN2Escw
Void is a full-stack application designed to automate the evaluation of handwritten answer sheets. It leverages a powerful AI agent pipeline built with crewAI to handle image alignment, handwriting recognition (OCR), answer evaluation, and the generation of insightful student feedback.
- Automated Image Processing: Corrects distortions like skew and perspective in scanned answer sheets using feature matching.
- AI-Powered OCR: Utilizes Azure Document Intelligence to accurately extract handwritten answers from images.
- Intelligent Evaluation: Compares extracted student answers against a provided teacher's key to grade the sheet.
- Insight Generation: An AI agent analyzes the evaluation results to provide qualitative feedback, identify strengths, and suggest areas for improvement.
- Role-Based Access Control: Supports `teacher`, `student`, and `parent` roles with a secure JWT-based authentication system.
- Modern Web Interface: A responsive frontend built with React, Vite, and shadcn/ui for a seamless user experience.
- Scalable Backend: Built with Node.js, Express, and MongoDB for robust data management and API services, with file storage managed by Cloudinary.
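The alignment feature above rests on estimating a homography between the scanned sheet and the reference template. The project does this with OpenCV feature matching; purely as an illustration of the underlying math, here is a dependency-light direct linear transform (DLT) that recovers the 3x3 homography from four point correspondences (the function names are ours, not the project's):

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 homography H mapping src -> dst from four
    point correspondences via the direct linear transform (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # H is the null vector of A: the right singular vector with the
    # smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalise so H[2, 2] == 1

def apply_homography(H, pts):
    """Apply H to an Nx2 array of points (with homogeneous divide)."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```

In practice OpenCV's `cv2.findHomography` plus `cv2.warpPerspective` do this robustly over many noisy feature matches; the DLT above only shows why four corner correspondences are enough to undo skew and perspective.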
The application is composed of three main services that work together:
- Frontend (React): The user-facing application where teachers upload documents and view evaluation results.
- Backend (Node.js): Manages API requests, user authentication, file uploads to Cloudinary, and data persistence in MongoDB. It acts as an orchestrator, calling the AI services when a new submission is made.
- AI Services (Python): A FastAPI server that exposes the crewAI agent pipeline. This service performs the heavy lifting of image processing, OCR, evaluation, and insight generation.
The workflow is as follows:
- A teacher uploads the question paper (template), their answer key, and the student's answer sheet via the React frontend.
- The frontend sends the files to the Node.js backend.
- The backend uploads the images to Cloudinary and saves the submission details (including image URLs) to MongoDB.
- The backend then sends a request to the Python FastAPI server to begin the evaluation.
- The FastAPI service runs the crewAI pipeline:
  - Alignment Agent: Downloads the images and corrects any distortions.
  - OCR Agent: Extracts handwritten text from the teacher's key and the student's sheet.
  - Evaluation Agent: Compares the student's answers to the key and generates a graded report.
  - Insight Agent: Analyzes the report to produce qualitative feedback.
- The backend receives the final evaluation data and stores it in MongoDB.
- The frontend fetches and displays the comprehensive results and insights to the user.
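The agent hand-offs in the workflow above can be sketched as a plain function chain; the stage names here are ours, and the callables stand in for the real crewAI agents:

```python
def run_pipeline(template_img, key_img, student_img, stages):
    """Run the four-stage evaluation flow. `stages` maps stage names to
    callables so the heavy agents (OpenCV alignment, Azure OCR, Gemini
    evaluation/insight) can be swapped for stubs in tests."""
    aligned = stages["align"](template_img, student_img)  # Alignment Agent
    key_text = stages["ocr"](key_img)                     # OCR Agent (key)
    student_text = stages["ocr"](aligned)                 # OCR Agent (student)
    report = stages["evaluate"](key_text, student_text)   # Evaluation Agent
    insights = stages["insight"](report)                  # Insight Agent
    return {"report": report, "insights": insights}
```

Wiring the stages through a dict like this keeps the orchestration testable without any of the external services.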
| Frontend | Backend | AI Services |
|---|---|---|
| React | Node.js | Python |
| Vite | Express.js | FastAPI |
| Tailwind CSS | MongoDB | CrewAI |
| shadcn/ui | Mongoose | Google Gemini |
| Axios | JWT (JSON Web Tokens) | Azure Document Intelligence |
| Framer Motion | Cloudinary | OpenCV |
Follow these instructions to set up and run the project locally.
- Node.js (v18 or later)
- Python (v3.9 or later) & pip
- MongoDB instance
- Cloudinary Account
- Google API Key (for Gemini)
- Azure AI Vision Account
The backend server handles API requests, authentication, and orchestration.
```bash
# Navigate to the backend directory
cd backend

# Install dependencies
npm install

# Create a .env file and add the following variables:
touch .env
```

Add these variables to `backend/.env`:

```env
PORT=5000
MONGO_URI=your_mongodb_connection_string
JWT_SECRET=your_super_secret_key

# Cloudinary Credentials
CLOUDINARY_CLOUD_NAME=your_cloudinary_cloud_name
CLOUDINARY_API_KEY=your_cloudinary_api_key
CLOUDINARY_API_SECRET=your_cloudinary_api_secret
```

```bash
# Run the backend server
npm run dev
```

The backend will be running at http://localhost:5000.
The AI service contains the FastAPI server and the crewAI agents.
```bash
# Navigate to the services directory
cd services

# Create a virtual environment (optional but recommended)
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install Python dependencies
pip install -r requirements.txt

# Create a .env file and add your API keys:
touch .env
```

Add these variables to `services/.env`:

```env
# Google Gemini API Key
GOOGLE_API_KEY=your_google_api_key

# Azure AI Vision Credentials
AZURE_VISION_ENDPOINT=your_azure_vision_endpoint
AZURE_VISION_KEY=your_azure_vision_key
```

```bash
# Run the FastAPI server
uvicorn api:app --reload
```

The AI services will be available at http://localhost:8000.
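The service reads these keys from the environment at startup. The project may well use python-dotenv for this; if you prefer zero dependencies, a minimal loader for simple `KEY=value` files like the one above could look like:

```python
import os

def load_env(path=".env"):
    """Load simple KEY=value lines into os.environ.
    Skips blank lines and # comments; does not handle quoting, export
    keywords, or multi-line values. Existing variables win."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```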
The frontend is the user interface for interacting with the application.
```bash
# Navigate to the frontend directory
cd frontend

# Install dependencies
npm install

# Run the development server
npm run dev
```

The frontend will be running at http://localhost:5173. You can now access the application in your browser.
- Register/Login: Open http://localhost:5173 and navigate to the `/auth` page. Create a `teacher` account and log in.
- Create an Exam: Go to the Teacher Portal (`/teacher`). Enter a name for your exam (e.g., "Physics Midterm") and click "Add Test".
- Upload Documents: Once the exam is created, three upload boxes will appear:
- Upload the Student Answer Sheet (image).
- Upload the Question Paper (image to be used as a template for alignment).
- Upload the Answer Key (a completed sheet by the teacher).
- Submit for Evaluation: Click the "Upload Submission" button. The system will process the files and automatically trigger the AI evaluation pipeline.
- View Results: The dashboard will update with evaluation results. You can view a summary, detailed question-by-question analysis, and AI-generated insights. You can also download a PDF report of the evaluation.
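The question-by-question analysis boils down to comparing each extracted student answer against the corresponding key answer. The real Evaluation Agent delegates that judgment to Gemini; a deliberately naive keyword-overlap grader illustrates the shape of the comparison (the function and scoring scheme are ours, purely for illustration):

```python
def naive_grade(key_answer: str, student_answer: str, max_marks: int) -> float:
    """Score a student answer by the fraction of key terms it mentions.
    Purely illustrative -- an LLM grader also handles synonyms,
    paraphrasing, and partial credit for reasoning."""
    key_terms = set(key_answer.lower().split())
    if not key_terms:
        return 0.0
    hits = key_terms & set(student_answer.lower().split())
    return round(max_marks * len(hits) / len(key_terms), 1)
```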