See how any media lights up the brain.
Neuroscope is an open-source web app that lets you upload any video, audio, or text and visualize predicted brain activity in an interactive 3D viewer — powered by Meta's TRIBE v2 foundation model.
https://neuroscanner.vercel.app
Try it now — no install required:
- Open the link above
- Click "Load Demo Visualization" to see pre-computed brain activity instantly
- Click "Text", type any sentence, and click "Analyze Text" to run real TRIBE v2 inference
- Use "Open / Closed" buttons to split the brain hemispheres like a book
- Drag to rotate, scroll to zoom the 3D brain
First-request caveat: the GPU backend runs on Modal and scales to zero when idle. The first request after a period of inactivity triggers a cold start (~2-3 minutes) while the container boots and loads ~10 GB of model weights. If that request fails or times out, simply try again — the retry should succeed, and subsequent requests stay fast while the container remains warm (5-minute idle window).
- Neuroscience researchers — explore how TRIBE v2 encodes different stimuli without writing code; quickly prototype experiments or validate hypotheses about brain responses to language, audio, or video
- Educators and students — use the interactive 3D brain as a teaching tool to show how different regions activate for different inputs
- AI/ML engineers — a ready-made inference pipeline and UI for Meta's multimodal brain encoding model; fork it, swap in your own model, or extend the API
- Content creators and marketers — visualize how your video, podcast, or copy might engage different brain regions; a novel way to think about content impact
- Accessibility and BCI researchers — a starting point for brain-computer interface prototyping with a real multimodal encoder
- Curious people — type any sentence and watch the brain light up; no background in neuroscience required
- Upload a video, audio file, or type text
- TRIBE v2 predicts fMRI brain activity across ~20,000 cortical vertices
- A 3D brain viewer shows activation patterns in real-time with a timeline scrubber
- A region breakdown panel shows which brain networks are most active
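Concretely, the viewer consumes a predictions matrix of `n_timesteps × n_vertices` values plus an `fps` field (the same shape the mock-data script later in this guide writes). A minimal sketch, using a hypothetical 2-frame, 3-vertex payload, of how a client could reduce that matrix to a per-frame summary for the timeline:

```python
# Hypothetical miniature payload in the same shape as mock_predictions.json:
# "predictions" is n_timesteps x n_vertices, plus fps for the timeline scrubber.
payload = {
    "predictions": [[0.1, 0.9, 0.4], [0.8, 0.2, 0.6]],
    "n_timesteps": 2,
    "n_vertices": 3,
    "fps": 1,
}

def mean_activation_per_frame(data):
    """Average activation across all vertices for each timestep."""
    return [round(sum(frame) / len(frame), 4) for frame in data["predictions"]]

def peak_frame(data):
    """Index of the timestep with the highest mean activation."""
    means = mean_activation_per_frame(data)
    return means.index(max(means))

print(mean_activation_per_frame(payload))  # [0.4667, 0.5333]
print(peak_frame(payload))                 # 1
```

The region breakdown panel presumably performs an analogous reduction over subsets of vertices rather than over the whole cortex.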
```
Neuroscope/
├── web/        # Next.js frontend (Three.js 3D brain viewer)
├── api/        # FastAPI backend (TRIBE v2 inference)
└── README.md
```
- Frontend: Next.js 16, Three.js via @react-three/fiber, TailwindCSS
- Backend: FastAPI serving TRIBE v2 on GPU
- Brain mesh: fsaverage5 cortical surface (~20k vertices, exported from nilearn)
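The mesh JSON stores flat `vertices` and `faces` arrays with an `n_left` hemisphere offset (per the generation script later in this guide). A sketch, on a hypothetical 4-vertex, 2-triangle mesh, of how those flat arrays map back to coordinates and hemispheres:

```python
# Hypothetical tiny mesh in the flat layout described above:
# "vertices" is x,y,z triples flattened; "faces" is vertex-index triples.
mesh = {
    "vertices": [0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0],
    "faces": [0, 1, 2, 1, 3, 2],
    "n_vertices": 4,
    "n_left": 2,  # first two vertices belong to the left hemisphere
}

def unflatten(flat, width=3):
    """Group a flat list into rows of `width` (xyz triples or face triples)."""
    return [flat[i:i + width] for i in range(0, len(flat), width)]

coords = unflatten(mesh["vertices"])
faces = unflatten(mesh["faces"])
left = coords[:mesh["n_left"]]   # left-hemisphere vertices
right = coords[mesh["n_left"]:]  # right-hemisphere vertices

assert len(coords) == mesh["n_vertices"]
print(len(left), len(right), faces[0])  # 2 2 [0, 1, 2]
```

The hemisphere split is what lets the viewer separate the two halves for the "Open / Closed" book view.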
This guide covers setting up Neuroscope from scratch on a fresh Ubuntu VM with an NVIDIA GPU.
| Requirement | Minimum |
|---|---|
| OS | Ubuntu 22.04 or 24.04 LTS |
| GPU | NVIDIA with >= 16 GB VRAM (e.g. A10, A100, RTX 4090) |
| CUDA | Drivers installed and nvidia-smi working |
| RAM | 16 GB system RAM |
| Disk | 30 GB free (model weights are ~10 GB) |
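A quick way to sanity-check a VM against this table from Python; a hedged sketch using only the standard library, treating `nvidia-smi` on the PATH as a proxy for working NVIDIA drivers:

```python
import shutil

def check_prereqs(min_disk_gb=30):
    """Check free disk space and driver presence against the table above.

    The 30 GB default mirrors the Disk row; nvidia-smi on the PATH is only
    a proxy for a working driver, not a guarantee.
    """
    free_gb = shutil.disk_usage("/").free / 1e9
    has_nvidia_smi = shutil.which("nvidia-smi") is not None
    return {"disk_ok": free_gb >= min_disk_gb, "nvidia_smi": has_nvidia_smi}

print(check_prereqs())
```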
```bash
sudo apt update && sudo apt install -y \
    git curl wget build-essential \
    ffmpeg \
    python3 python3-pip python3-venv
```

Install Node.js 20+ (via NodeSource):

```bash
curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
sudo apt install -y nodejs
```

Install uv (needed for WhisperX audio transcription):

```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
# Ensure ~/.local/bin is in your PATH
echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc
```

Clone the repository:

```bash
cd ~
git clone <your-repo-url> neuroscope
cd neuroscope
```

Create the backend virtual environment and install the Python dependencies:

```bash
cd ~/neuroscope/api
python3 -m venv venv
source venv/bin/activate
pip install --upgrade pip
pip install -r requirements.txt
```

Install PyTorch with CUDA support. Choose the correct command for your CUDA version from https://pytorch.org/get-started/locally/. Example for CUDA 12.x:

```bash
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
```

Install TRIBE v2:

```bash
cd ~/neuroscope
git clone https://github.com/facebookresearch/tribev2.git
cd tribev2
pip install -e ".[plotting]"
cd ..
```

Install the packages required by TRIBE v2's text/audio processing pipeline:

```bash
pip install gTTS langdetect spacy numpy
python -m spacy download en_core_web_sm
```

TRIBE v2 uses LLaMA 3.2-3B (a gated model), so you need a HuggingFace token with access granted:

- Go to https://huggingface.co/meta-llama/Llama-3.2-3B and request access
- Create a token at https://huggingface.co/settings/tokens
- Log in on the VM:

```bash
pip install huggingface-hub
huggingface-cli login
# Paste your token when prompted
```

Or set the environment variable:

```bash
export HF_TOKEN="hf_your_token_here"
```

Verify that PyTorch can see the GPU:

```bash
python -c "import torch; print(f'CUDA available: {torch.cuda.is_available()}, Devices: {torch.cuda.device_count()}')"
```

Install the frontend dependencies:

```bash
cd ~/neuroscope/web
npm install
```

The frontend needs web/public/data/brain_mesh.json (the fsaverage5 cortical surface). Generate it with:
```bash
cd ~/neuroscope
source ~/neuroscope/api/venv/bin/activate
python3 - << 'MESHSCRIPT'
import json
import os

import numpy as np
from nilearn import datasets, surface

# Fetch the fsaverage5 template and load both hemisphere surfaces
fsaverage = datasets.fetch_surf_fsaverage("fsaverage5")
lh_coords, lh_faces = surface.load_surf_mesh(fsaverage["pial_left"])
rh_coords, rh_faces = surface.load_surf_mesh(fsaverage["pial_right"])

# Concatenate hemispheres; right-hemisphere face indices must be offset
# by the number of left-hemisphere vertices
n_left = len(lh_coords)
vertices = np.vstack([lh_coords, rh_coords])
faces = np.vstack([lh_faces, rh_faces + n_left])

mesh = {
    "vertices": vertices.flatten().tolist(),
    "faces": faces.flatten().tolist(),
    "n_vertices": len(vertices),
    "n_left": n_left,
}
os.makedirs("web/public/data", exist_ok=True)
with open("web/public/data/brain_mesh.json", "w") as f:
    json.dump(mesh, f)
print(f"Wrote brain_mesh.json: {len(vertices)} vertices, {len(faces)} faces, n_left={n_left}")
MESHSCRIPT
```

If nilearn is not installed: pip install nilearn.
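Before wiring the mesh into the frontend, it can help to validate the invariants the script above guarantees. A small checker, demonstrated on a stand-in mesh dict rather than the real ~20k-vertex file:

```python
def validate_mesh(mesh):
    """Check the structural invariants of the brain_mesh.json layout."""
    assert len(mesh["vertices"]) == 3 * mesh["n_vertices"], "one xyz triple per vertex"
    assert len(mesh["faces"]) % 3 == 0, "faces are vertex-index triples"
    assert 0 < mesh["n_left"] < mesh["n_vertices"], "both hemispheres present"
    assert max(mesh["faces"]) < mesh["n_vertices"], "face indices in range"
    return True

# Stand-in mesh (two triangles); the real file has ~20,484 vertices.
tiny = {
    "vertices": [0.0] * 12,
    "faces": [0, 1, 2, 1, 3, 2],
    "n_vertices": 4,
    "n_left": 2,
}
print(validate_mesh(tiny))  # True
```

Running the same checks on the generated file (after `json.load`) would catch a truncated export before it produces a garbled 3D render.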
This creates mock data so the "Load Demo Visualization" button works without the GPU model:
```bash
cd ~/neuroscope
python3 - << 'DEMOSCRIPT'
import json
import os
import random

# 20,484 vertices matches the fsaverage5 mesh; 36 timesteps of random values
n_vertices = 20484
n_timesteps = 36
preds = [[round(random.random(), 4) for _ in range(n_vertices)] for _ in range(n_timesteps)]

os.makedirs("web/public/data", exist_ok=True)
with open("web/public/data/mock_predictions.json", "w") as f:
    json.dump({"predictions": preds, "n_timesteps": n_timesteps, "n_vertices": n_vertices, "fps": 1}, f)
print(f"Wrote mock_predictions.json: {n_timesteps} frames x {n_vertices} vertices")
DEMOSCRIPT
```

Build the frontend:

```bash
cd ~/neuroscope/web
npx next build
```

The frontend runs on port 80 and the API on port 8000.
Open the ports with iptables:

```bash
sudo iptables -I INPUT 1 -p tcp --dport 80 -j ACCEPT
sudo iptables -I INPUT 1 -p tcp --dport 8000 -j ACCEPT
# Persist across reboots
sudo apt install -y iptables-persistent
sudo netfilter-persistent save
```

Also open TCP ports 80 and 8000 in your cloud provider's security list / firewall rules (e.g. OCI Security Lists, AWS Security Groups, GCP Firewall Rules).
Start the backend:

```bash
cd ~/neuroscope/api
source venv/bin/activate
export HF_TOKEN="hf_your_token_here"
# Run in background
nohup python server.py > ~/neuroscope/api.log 2>&1 &
```

The first request takes ~60 seconds while the TRIBE v2 model loads into GPU memory. Subsequent requests are fast.
Start the frontend:

```bash
cd ~/neuroscope/web
# Production (port 80, requires sudo)
sudo npx next start -p 80 > ~/neuroscope/web.log 2>&1 &
# Or development mode (port 3000, no sudo)
npm run dev
```

Verify both services:

```bash
# Backend health check
curl http://localhost:8000/api/health
# Frontend
curl -s -o /dev/null -w '%{http_code}' http://localhost:80
```

Visit http://<your-vm-ip> in a browser.
Frontend only (no GPU required):

```bash
cd web
npm install
npm run dev
```

The 3D brain viewer and demo visualization work without the backend. Open http://localhost:3000.
Backend:

```bash
cd api
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
# ... install torch, tribev2, etc. (see the setup steps above)
python server.py
```

The API starts on http://localhost:8000. The frontend proxies /api/* requests to it via next.config.ts rewrites.
| Setting | Location | Description |
|---|---|---|
| API URL | `NEXT_PUBLIC_API_URL` env var | Set to the Modal URL for cloud deploys; omit for on-prem (defaults to `hostname:8000`) |
| API proxy | `web/next.config.ts` | `/api/*` → `localhost:8000` (dev mode only) |
| GPU device | `api/server.py` | Auto-selects CUDA; uses `cuda:1` when multiple GPUs are present |
| Model cache | `api/server.py` | Weights cached in `../cache/` relative to `api/` |
| CORS | `api/server.py` | Allows all origins by default |
- Gated-model or authentication errors: you need a HuggingFace token with LLaMA 3.2-3B access. Run huggingface-cli login or set HF_TOKEN.
- Text input detected as the wrong language: the langdetect library can misidentify short English text, so the server monkey-patches it to always return "en". Ensure you're running the latest server.py.
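The monkey-patch described above can be sketched as follows. This demo substitutes a `types.SimpleNamespace` stand-in so it runs without langdetect installed; server.py applies the same attribute replacement to the real `langdetect.detect`:

```python
import types

# Stand-in for the langdetect module (so this sketch runs without it).
langdetect = types.SimpleNamespace(detect=lambda text: "unreliable-guess")

def _always_english(text):
    """Short English snippets trip up language detection; pin to English."""
    return "en"

# The monkey-patch: rebind the detector at the module attribute level,
# so every later call site picks up the patched function.
langdetect.detect = _always_english

print(langdetect.detect("hi"))  # "en"
```

Patching at the module attribute level (rather than wrapping individual call sites) is what makes the fix apply across the whole text pipeline.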
- Uploads fail or time out: check that port 8000 is open in both iptables and your cloud provider's security rules. For large files, the upload uses XHR with progress reporting; the proxy timeout is 5 minutes.
- TRIBE v2 import errors: the correct import path is from tribev2.demo_utils import TribeModel. Ensure tribev2 is installed in editable mode (pip install -e ".[plotting]").
- CUDA out-of-memory errors: TRIBE v2 needs ~14-16 GB of VRAM. Close other GPU processes or use a larger GPU; check usage with nvidia-smi.
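To check headroom against that ~14-16 GB requirement programmatically, one option is parsing nvidia-smi's CSV query output. A sketch shown on a canned sample line so it runs without a GPU:

```python
def parse_memory_csv(line):
    """Parse one line of `nvidia-smi --query-gpu=memory.used,memory.total
    --format=csv,noheader,nounits` into (used_mib, total_mib)."""
    used, total = (int(part.strip()) for part in line.split(","))
    return used, total

def free_mib(line):
    """Remaining VRAM in MiB for one GPU line."""
    used, total = parse_memory_csv(line)
    return total - used

# Canned sample of the CSV output; on a real VM you would capture it with
# subprocess.run(["nvidia-smi", "--query-gpu=memory.used,memory.total",
#                 "--format=csv,noheader,nounits"], capture_output=True, text=True)
sample = "1536, 24576"
print(free_mib(sample))  # 23040
```

Anything under roughly 14336 MiB free suggests the model load will fail on that device.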
This application code is licensed under the MIT License.
Note: The TRIBE v2 model weights are licensed under CC-BY-NC-4.0 by Meta Platforms, Inc. See the TRIBE v2 repository for details.
- TRIBE v2 by Meta FAIR
- Brain mesh from FreeSurfer fsaverage5 via nilearn