What if you had chosen differently? A multi-modal life simulation that reads your emotional state via EEG and generates branching alternate-life scenarios with AI-written narrative, generated imagery, and a matching soundtrack.
Built with the CAMEL AI agent framework. Each turn, the system presents 2-3 diverging choices (90% unexpected, 9.9% butterfly-effect, 0.1% ideal ending). Your EEG emotion reading feeds continuously into the music generation layer, keeping the soundtrack in sync with how you actually feel.
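The 90% / 9.9% / 0.1% split above amounts to a weighted draw per choice slot. A minimal sketch of that sampling; the function and constant names are hypothetical, and the real branching logic lives in the CAMEL agents:

```python
import random

# Branch categories and their odds, mirroring the split described above.
BRANCH_WEIGHTS = {
    "unexpected": 0.90,
    "butterfly_effect": 0.099,
    "ideal_ending": 0.001,
}

def sample_branch_types(n_choices: int) -> list[str]:
    """Draw a branch category for each of the 2-3 choices shown this turn."""
    categories = list(BRANCH_WEIGHTS)
    weights = list(BRANCH_WEIGHTS.values())
    return random.choices(categories, weights=weights, k=n_choices)

print(sample_branch_types(3))  # e.g. mostly 'unexpected' picks
```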
- Narrative engine: CAMEL multi-agent system generates branching life scenarios from your memories. 8-10 turns per session, with memory anchors stored across sessions
- Image generation: Each scene is illustrated in real time (Volcengine / Doubao seedream-3-0), dreamcore + pixel art aesthetic
- EEG music: Emotiv headset → emotion state → Google Lyria generates a matching ambient track per scene (optional; runs without a headset using manual emotion input)
- Frontend: React interface with scene display, choice selection, and live image/music progress
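As an illustration of the EEG-to-music link, a continuous emotion reading might be reduced to a text prompt for the music layer. This is a hedged sketch assuming a valence/arousal emotion model; `EmotionState` and `music_prompt` are invented names, and Lyria's actual interface differs:

```python
from dataclasses import dataclass

@dataclass
class EmotionState:
    # Valence/arousal is a common two-axis emotion model; the actual
    # Emotiv-derived fields in this project may differ.
    valence: float  # -1.0 (negative) .. 1.0 (positive)
    arousal: float  #  0.0 (calm)     .. 1.0 (excited)

def music_prompt(state: EmotionState, scene_mood: str) -> str:
    """Turn the current emotion reading into a text prompt for the
    music generator (hypothetical helper, not Lyria's real API)."""
    tone = "warm, hopeful" if state.valence >= 0 else "melancholic, tense"
    energy = "driving, energetic" if state.arousal > 0.5 else "slow, ambient"
    return f"{tone}, {energy} instrumental, matching a {scene_mood} scene"

print(music_prompt(EmotionState(valence=0.4, arousal=0.2), "dreamcore"))
```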
    Emotiv Headset (optional)
             │
             ▼
    brain_processor.py ──▶ emotion state
             │
          ┌──┴──────────────┬─────────────────┐
          ▼                 ▼                 ▼
    story_agent.py    imgen_tool.py    musicgen_tool.py
    (CAMEL + GPT)     (Doubao API)     (Google Lyria)
          │                 │                 │
          └─────────────────┴─────────────────┘
                            │
                      React frontend
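The fan-out in the diagram, where one emotion reading drives story, image, and music generation in parallel, can be sketched with a thread pool. The stub functions below merely stand in for `story_agent.py`, `imgen_tool.py`, and `musicgen_tool.py`; they are not the project's real implementations:

```python
from concurrent.futures import ThreadPoolExecutor

def generate_story(emotion):  # stand-in for story_agent.py (CAMEL + GPT)
    return f"scene written for a {emotion} mood"

def generate_image(emotion):  # stand-in for imgen_tool.py (Doubao API)
    return f"dreamcore pixel-art image, {emotion} palette"

def generate_music(emotion):  # stand-in for musicgen_tool.py (Google Lyria)
    return f"ambient track, {emotion} tempo"

def run_turn(emotion: str) -> dict:
    """Run all three generators concurrently for one turn."""
    with ThreadPoolExecutor(max_workers=3) as pool:
        futures = {
            "story": pool.submit(generate_story, emotion),
            "image": pool.submit(generate_image, emotion),
            "music": pool.submit(generate_music, emotion),
        }
        return {name: f.result() for name, f in futures.items()}

print(run_turn("calm"))
```

The real backend would replace each stub with a network call, which is exactly the case where running them concurrently pays off.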
- Agent framework: CAMEL AI
- LLM: GPT-4o mini (memory management)
- Image generation: Volcengine / Doubao doubao-seedream-3-0-t2i
- Music generation: Google Lyria real-time
- EEG (optional): Emotiv Cortex SDK
- Frontend: React + Vite
- Backend: Flask
    # Backend
    pip install -r camel_agent/requirements.txt
    # set OPENAI_API_KEY, ARK_API_KEY, GOOGLE_API_KEY in .env
    python camel_agent/main.py

    # Frontend
    cd frontend && npm install && npm run dev

The EEG headset is optional: the system runs with manual emotion input if no device is connected.
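The keys named in the setup comment can be kept in a `.env` file at the repo root. A placeholder sketch (the variable names come from the setup comment; the values are dummies):

```
# .env -- placeholder values, do not commit real keys
OPENAI_API_KEY=...
ARK_API_KEY=...
GOOGLE_API_KEY=...
```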