AI-powered meeting productivity platform with real-time transcription and intelligent chat assistance
Built for Qualcomm Hackathon NYU 2025
ScrumAI transforms meeting productivity by combining real-time speech recognition with conversational AI assistance. The platform pairs OpenAI Whisper for accurate transcription with AnythingLLM for contextual chat, creating a comprehensive meeting-intelligence solution.
- Live Audio Transcription: OpenAI Whisper integration for industry-leading accuracy
- Instant Keyword Extraction: AI-powered identification of key discussion points
- Audio Visualization: Dynamic waveform display and level indicators
- Meeting Timer: Automatic session tracking and duration monitoring
- AnythingLLM Assistant: Conversational AI with meeting context awareness
- RAG-Powered Queries: Ask complex questions about your meeting content
- Live Context Access: Chat assistant has real-time access to transcript data
- Intelligent Query Routing: Automatic detection of complex vs simple queries
- Streaming Responses: Real-time AI response generation
- Document Indexing: Automatic transcript upload for enhanced retrieval
- Notion Integration: Direct export of meeting notes and summaries
- GitHub Integration: Automatic issue creation from action items
- Local Storage: Timestamped transcript files
- Multi-Format Output: Text, JSON, and structured data exports
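The "Local Storage" and "Multi-Format Output" features above could look something like the following sketch. The function name `save_transcript` and the segment shape are hypothetical, not the project's actual API:

```python
# Hypothetical sketch: persist a meeting transcript as timestamped
# .txt and .json files (the "Local Storage" / "Multi-Format Output"
# features). Names and the segment schema are illustrative only.
import json
import time
from pathlib import Path

def save_transcript(segments, out_dir="transcripts"):
    """Write one timestamped .txt and .json file for a meeting.

    `segments` is a list of {"time": float, "text": str} dicts.
    Returns the two paths that were written.
    """
    stamp = time.strftime("%Y%m%d_%H%M%S")
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)

    # Plain-text export: one line per spoken segment.
    txt_path = out / f"meeting_{stamp}.txt"
    txt_path.write_text("\n".join(s["text"] for s in segments),
                        encoding="utf-8")

    # Structured export: full segment data, including timing.
    json_path = out / f"meeting_{stamp}.json"
    json_path.write_text(json.dumps(segments, indent=2), encoding="utf-8")
    return txt_path, json_path
```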
```
User Interface (Electron Renderer)
                ↓
Main Process (Node.js + IPC)
                ↓
AI Processing Layer (Python)
├── Whisper Engine (Real-time STT)
├── AnythingLLM (RAG System + Chat API)
└── Meeting Analysis (NLP Processing)
                ↓
Data Storage (Local Files + External APIs)
```
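One common way to bridge the Python AI layer and the Electron main process is newline-delimited JSON over the child process's stdout, which Node can split and parse line by line. This is an illustrative sketch of that pattern, not necessarily the project's actual IPC protocol:

```python
# Illustrative IPC sketch (assumed protocol, not the project's actual
# one): the Python AI layer streams events to the Electron main
# process as one JSON object per stdout line.
import json
import sys

def make_event(event_type, payload):
    """Serialize one event, e.g. a transcript segment or keyword list."""
    return json.dumps({"type": event_type, "data": payload})

def emit(event_type, payload):
    """Write the event to stdout and flush so Node sees it immediately."""
    sys.stdout.write(make_event(event_type, payload) + "\n")
    sys.stdout.flush()
```

On the Node side, the main process would listen on the child's `stdout`, split on newlines, and `JSON.parse` each line before forwarding it to the renderer over IPC.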
- **Frontend:** Electron 27.0.0, Vanilla JavaScript, Web Audio API
- **AI/ML:** OpenAI Whisper (ONNX), AnythingLLM, PyTorch, Transformers
- **Backend:** Node.js, Python 3.10+, IPC Communication
- Node.js 18+ (LTS recommended)
- Python 3.10+ with pip
- Git for version control
- Microphone for audio capture
1. Clone the repository

   ```bash
   git clone https://github.com/anshul-kumar1/scrumAI.git
   cd scrumAI
   ```

2. Install dependencies

   ```bash
   npm install
   npm run setup-python
   ```

3. Configure environment variables

   ```bash
   cp config.env.example config.env
   # Edit config.env with your API keys
   ```

4. Start the application

   ```bash
   npm start
   npm run dev   # Starts with DevTools open
   ```

Create a `config.env` file in the project root:
```env
# Notion API Configuration
NOTION_API_KEY=your_notion_api_key_here
NOTION_PARENT_PAGE_ID=your_notion_parent_page_id_here

# GitHub Configuration
GITHUB_TOKEN=your_github_token_here
GITHUB_OWNER=your_github_username_or_org
GITHUB_REPO=your_repository_name
```

Configure the AnythingLLM integration in `whisper/anythingLLM/config.yaml`:
```yaml
# AnythingLLM Configuration
api_key: "your_anythingllm_api_key"
model_server_base_url: "http://localhost:3001/api"
workspace_slug: "your_workspace_name"
stream: true
stream_timeout: 30
```

**Notion Integration**
- Create a Notion integration
- Copy the API key to `NOTION_API_KEY`
- Share a page with your integration and copy the page ID
**GitHub Integration**
- Create a GitHub personal access token
- Grant `repo` permissions for issue creation
**AnythingLLM Setup**
- Install and run AnythingLLM
- Create a workspace and obtain API key
- Update the config file with your local AnythingLLM instance details
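Once the workspace exists, a chat call from the Python side can be sketched as below. The `/v1/workspace/{slug}/chat` path follows AnythingLLM's developer API; verify it against your local instance, since the exact route and response shape may differ by version:

```python
# Minimal sketch of calling an AnythingLLM workspace chat endpoint
# using values from whisper/anythingLLM/config.yaml. The endpoint
# path is assumed from AnythingLLM's developer API docs; confirm it
# against your local instance before relying on it.
import json
import urllib.request

def build_chat_request(cfg, message):
    """Return (url, headers, body) for a workspace chat call."""
    url = (f"{cfg['model_server_base_url']}"
           f"/v1/workspace/{cfg['workspace_slug']}/chat")
    headers = {
        "Authorization": f"Bearer {cfg['api_key']}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"message": message, "mode": "chat"}).encode("utf-8")
    return url, headers, body

def chat(cfg, message):
    """Send one chat message and return the parsed JSON response."""
    url, headers, body = build_chat_request(cfg, message)
    req = urllib.request.Request(url, data=body, headers=headers,
                                 method="POST")
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read().decode("utf-8"))
```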
- Launch ScrumAI and grant microphone permissions
- Click the "Start Meeting" button
- Begin speaking - real-time transcription starts immediately
- Monitor live transcript, keywords, and audio visualization
- Use the Chat tab to ask questions about the meeting
**Simple Queries (Live Context)**
- "What did we just discuss about the authentication system?"
- "Who is taking lead on the API development?"
- "What was the last action item mentioned?"
**Complex Analysis (RAG-Powered)**
- "Summarize the key decisions made in this meeting"
- "What are the main technical challenges identified?"
- "Create a list of all action items with assigned owners"
- "Analyze the sentiment of the discussion about the Q2 release"
The system automatically determines whether to use live context or RAG based on query complexity.
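One plausible heuristic for that routing decision is sketched below. This is an assumption for illustration; the project's actual classifier may use different signals:

```python
# Hypothetical routing heuristic (the project's actual classifier may
# differ): long or analytical queries go to the RAG pipeline, short
# recency-oriented questions stay on the live transcript context.
COMPLEX_MARKERS = {"summarize", "analyze", "list", "compare",
                   "sentiment", "create", "overall", "all"}

def route_query(query):
    """Return 'rag' for complex analytical queries, else 'live'."""
    words = query.lower().split()
    if len(words) > 15:
        return "rag"
    if any(w.strip("?.,") in COMPLEX_MARKERS for w in words):
        return "rag"
    return "live"
```

Against the examples above, "Summarize the key decisions made in this meeting" would route to RAG, while "What was the last action item mentioned?" would stay on live context.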
- Review content using transcript and keywords tabs
- Ask the chat assistant for summaries and insights
- Edit and refine transcript or AI-generated content
- Export to Notion for documentation or GitHub for issue tracking
- Automatic local backup with timestamp
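The GitHub export step can be sketched with GitHub's REST API, which creates an issue via `POST /repos/{owner}/{repo}/issues`. The helper names are hypothetical; the endpoint and headers follow GitHub's documented API:

```python
# Hedged sketch of the GitHub export path: turn one action item into
# an issue using GITHUB_TOKEN / GITHUB_OWNER / GITHUB_REPO from
# config.env. Function names are illustrative, not the project's API.
import json
import os
import urllib.request

def build_issue_request(owner, repo, token, title, body=""):
    """Return (url, headers, payload) for the create-issue call."""
    url = f"https://api.github.com/repos/{owner}/{repo}/issues"
    headers = {
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.github+json",
        "Content-Type": "application/json",
    }
    payload = json.dumps({"title": title, "body": body}).encode("utf-8")
    return url, headers, payload

def create_issue(title, body=""):
    """POST one issue and return GitHub's JSON response."""
    url, headers, payload = build_issue_request(
        os.environ["GITHUB_OWNER"], os.environ["GITHUB_REPO"],
        os.environ["GITHUB_TOKEN"], title, body)
    req = urllib.request.Request(url, data=payload, headers=headers,
                                 method="POST")
    with urllib.request.urlopen(req, timeout=15) as resp:
        return json.loads(resp.read().decode("utf-8"))
```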
```bash
npm run build        # Development build
npm run build:win    # Windows (NSIS installer)
npm run build:mac    # macOS (DMG)
npm run build:linux  # Linux (AppImage)
```

```
scrumAI/
├── src/
│   ├── electron/                    # Main Electron process
│   ├── renderer/                    # Frontend application
│   │   ├── js/                      # JavaScript modules
│   │   │   ├── chat-controller.js   # AnythingLLM chat interface
│   │   │   └── audio-manager.js     # Web Audio API integration
│   │   └── index.html               # Main UI structure
│   └── services/                    # Backend services
│       ├── whisperService.js        # Whisper transcription service
│       └── chatbotService.js        # AnythingLLM chatbot service
├── whisper/                         # AI processing layer
│   ├── anythingLLM/                 # AnythingLLM integration
│   │   ├── chatbot_client.py        # Python AnythingLLM client
│   │   └── config.yaml              # LLM configuration
│   ├── meeting_transcriber.py       # Enhanced transcriber with LLM
│   └── models/                      # ONNX model files
└── config.env                       # Environment configuration
```
**Audio not working:**
- Verify microphone permissions in system settings
- Ensure no other applications are using the microphone

**Python AI processing fails:**
- Confirm Python 3.10+ installation: `python --version`
- Reinstall dependencies: `npm run setup-python`

**AnythingLLM chat not responding:**
- Verify the AnythingLLM service is running on localhost:3001
- Check the API key configuration in `whisper/anythingLLM/config.yaml`
- Ensure the workspace exists and is accessible

**Export features not working:**
- Validate API keys in `config.env`
- Test internet connectivity
- Verify service permissions (Notion, GitHub)
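A few of these checks can be automated. The sketch below (hypothetical helper names) flags placeholder API keys and probes whether anything is listening on the AnythingLLM port:

```python
# Diagnostic sketch for the troubleshooting steps above: report
# config.env keys still at their placeholder values, and check that
# the AnythingLLM server answers at all. Helper names are illustrative.
import urllib.error
import urllib.request

REQUIRED_KEYS = ("NOTION_API_KEY", "GITHUB_TOKEN",
                 "GITHUB_OWNER", "GITHUB_REPO")

def missing_keys(env):
    """Return keys that are absent or left at a 'your_...' placeholder."""
    return [k for k in REQUIRED_KEYS
            if not env.get(k) or env[k].startswith("your_")]

def anythingllm_reachable(base="http://localhost:3001", timeout=3):
    """True if the AnythingLLM port answers, even with an error status."""
    try:
        urllib.request.urlopen(base, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True   # server responded, so it is up
    except OSError:
        return False  # connection refused / timed out
```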
```bash
npm test          # Run test suite
npm run lint      # Code quality check
```

- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- Qualcomm for the hackathon opportunity
- NYU for hosting the event
- OpenAI Whisper for speech recognition capabilities
- AnythingLLM for conversational AI platform
- Notion and GitHub for integration APIs
Built by the ScrumAI Team for Qualcomm Hackathon NYU 2025