A real-time voice transcription and context-aware assistant built with MentraOS that captures audio, processes it through intelligent routing, and provides personalized responses using multiple AI tools, persistent memory, and subscription-based features.
Clairvoyant is an advanced MentraOS application that provides real-time voice transcription with intelligent routing to specialized AI capabilities. It listens for voice activity, transcribes speech using Groq's Whisper, routes queries to appropriate tools (weather, search, maps, memory, etc.), and provides concise, contextual responses through AI-powered formatting and persistent memory integration. The application features a freemium model with Pro subscriptions powered by Polar, enabling advanced features like web search and personalized memory-enhanced responses.
The application integrates multiple layers across a monorepo structure:
- MentraOS Framework: Provides audio streaming, voice activity detection, and UI components
- Intelligent Routing: BAML-powered routing system that directs queries to appropriate handlers (with two-phase ambient speech detection)
- Specialized Tools: External API integrations (weather, search, maps, memory, chat)
- Handler Orchestration: UX flow management with loading states and response formatting
- AI Formatting & Interpretation: BAML prompts that convert tool outputs to concise, readable responses and interpret user interactions
- Dual-Memory Architecture: Multi-layer persistent memory with session context, daily summaries, and peer facts (Honcho)
- Interactive Chat: Full-page ChatPage with memory-enhanced responses powered by Groq
- Session Management: Automatic session summarization and daily synthesis (cron-based)
- Subscription Management: Polar-powered payment system for Pro features
- Data Persistence: Convex backend for user preferences, tool usage analytics, session summaries, and subscription state
Clairvoyant is organized as a monorepo using Turbo with the following structure:
```
clairvoyant/
├── apps/
│   ├── application/                 # Main MentraOS application
│   │   └── src/
│   │       ├── index.ts             # Application entry point
│   │       ├── transcriptionFlow.ts # Central routing handler
│   │       ├── baml_client/         # Generated BAML client
│   │       ├── core/                # Core utilities
│   │       │   ├── env.ts           # Environment validation
│   │       │   ├── convex.ts        # Convex client & helpers
│   │       │   ├── textWall.ts      # UI text display helpers
│   │       │   ├── rateLimiting.ts  # API rate limiting
│   │       │   └── utils.ts         # Utility functions
│   │       ├── handlers/            # Flow orchestrators
│   │       │   ├── weather.ts       # Weather handler (with memory)
│   │       │   ├── search.ts        # Web search handler (dual-layer memory)
│   │       │   ├── maps.ts          # Maps handler (with memory)
│   │       │   ├── knowledge.ts     # Knowledge handler (with memory)
│   │       │   ├── memory.ts        # Memory capture/recall
│   │       │   ├── chat.ts          # Chat handler (with session context)
│   │       │   └── hints.ts         # Proactive hint generation (PASSTHROUGH)
│   │       ├── tools/               # External API integrations
│   │       │   ├── weatherCall.ts   # OpenWeatherMap integration
│   │       │   ├── webSearch.ts     # Tavily web search
│   │       │   ├── mapsCall.ts      # Google Places API
│   │       │   └── memoryCall.ts    # Honcho memory initialization
│   │       └── types/               # TypeScript definitions
│   │           └── schema.ts        # Zod validation schemas
│   │
│   ├── api/                         # Elysia API server
│   │   └── src/
│   │       ├── index.ts             # API server entry point
│   │       ├── session.ts           # Session management routes
│   │       ├── env.ts               # API environment config
│   │       └── middleware/
│   │           ├── auth.ts          # Authentication middleware
│   │           └── mentra.ts        # MentraOS integration
│   │
│   └── web/                         # React web dashboard
│       └── src/
│           ├── App.tsx              # Main app component
│           ├── components/
│           │   ├── HomePage.tsx         # Dashboard home
│           │   ├── SettingsPage.tsx     # User settings
│           │   ├── SubscriptionCard.tsx # Subscription management
│           │   ├── ToolUsageChart.tsx   # Analytics visualization
│           │   └── ui/                  # UI components
│           ├── hooks/
│           │   └── useConvexAuth.ts # Convex authentication
│           └── lib/
│               ├── api.ts           # API client
│               └── utils.ts         # Utilities
│
├── packages/
│   ├── convex/                      # Convex backend functions
│   │   ├── schema.ts                # Database schema
│   │   ├── users.ts                 # User management
│   │   ├── preferences.ts           # User preferences
│   │   ├── toolInvocations.ts       # Tool usage analytics
│   │   ├── sessionSummaries.ts      # Session summary storage & queries
│   │   ├── dailySummaries.ts        # Daily synthesis & management
│   │   ├── chatMessages.ts          # Chat message persistence
│   │   ├── polar.ts                 # Polar subscription integration
│   │   ├── auth.config.ts           # Authentication config
│   │   └── cronManagement.ts        # Cron operation control
│   │
│   └── @clairvoyant/baml-client/    # Auto-generated BAML client
│       └── (exposes all BAML functions (b.*) to application handlers)
│
├── baml_src/                        # BAML prompt definitions
│   ├── route.baml                   # Routing logic with PASSTHROUGH for ambient speech
│   ├── weather.baml                 # Weather formatting
│   ├── search.baml                  # Search formatting
│   ├── maps.baml                    # Maps formatting
│   ├── recall.baml                  # Memory recall formatting
│   ├── answer.baml                  # Knowledge formatting
│   ├── chat.baml                    # Chat interpretation and responses
│   ├── session_summary.baml         # Session transcript summarization
│   ├── synthesis.baml               # Daily synthesis from session summaries
│   ├── hints.baml                   # Proactive hint generation (two-phase)
│   ├── core.baml                    # Core utilities (EnhanceQuery)
│   ├── clients.baml                 # AI client configurations
│   └── generators.baml              # Code generation settings
│
└── docs/
    └── MEMORY_INJECTION_PATTERN.md  # Memory integration guide
```
```mermaid
sequenceDiagram
    participant User
    participant MentraOS
    participant Application
    participant Router
    participant Handler
    participant Tool
    participant BAML
    participant Memory
    participant Convex
    participant Polar
    participant ExtAPI as External APIs

    User->>MentraOS: Starts speaking
    MentraOS->>Application: onVoiceActivity + onAudioChunk
    MentraOS->>Application: onSession(sessionId)
    Application->>Memory: Initialize Honcho session with sessionId
    User->>MentraOS: Stops speaking
    MentraOS->>Application: onVoiceActivity(false)
    Application->>Application: Process audio & transcribe via Groq Whisper
    Application->>Router: Route transcription via BAML
    Router-->>Application: Routing decision (WEATHER/SEARCH/MAPS/CHAT/MEMORY_RECALL/PASSTHROUGH/etc)

    alt Chat Route (Pro Only)
        Application->>Handler: startChatFlow()
        Handler->>Convex: Check if user is Pro
        alt Pro User
            Handler->>MentraOS: Show loading state
            Handler->>Memory: Get session context
            Memory-->>Handler: Working memory from current session
            Handler->>BAML: InterpretChatMessage(message)
            BAML-->>Handler: Extracted facts, topics, summary updates
            Handler->>Handler: Groq chat API with memory context
            Handler-->>Handler: Chat response
            Handler->>Convex: Store chat message
            Handler->>BAML: Update session summary
            Handler->>MentraOS: Display chat response
        else Free User
            Handler->>MentraOS: "Chat is a Pro feature"
        end
    else Specialized Route (e.g., WEATHER)
        Application->>Handler: startWeatherFlow()
        Handler->>Convex: Check user preferences
        Convex-->>Handler: Weather unit preference
        Handler->>MentraOS: Show loading state
        Handler->>MentraOS: Subscribe to location
        MentraOS-->>Handler: Location data
        Handler->>Tool: getWeatherData(lat, lon, unit)
        Tool->>ExtAPI: OpenWeatherMap API
        ExtAPI-->>Tool: Weather data
        Handler->>Convex: Check if user is Pro
        Convex->>Polar: Verify subscription
        Polar-->>Convex: Subscription status
        alt Pro User (Memory Enabled)
            Handler->>Memory: getContext() for personalization
            Memory-->>Handler: User context (name, facts, preferences)
            Handler->>BAML: SummarizeWeatherFormatted(data, memory)
        else Free User
            Handler->>BAML: SummarizeWeatherFormatted(data)
        end
        BAML-->>Handler: Short readable lines
        Handler->>MentraOS: Display formatted response
        Handler->>Convex: Record tool invocation
    else PASSTHROUGH Route
        Application->>BAML: Phase 1: ClassifyForHint(text)
        BAML-->>Application: Hintable or Ambient classification
        alt Hintable
            Application->>Memory: Phase 2: Query for relevant context
            Memory-->>Application: User facts/context if available
            Application->>BAML: GenerateHint(topic, memory)
            BAML-->>Application: Hint (if warranted)
            Application->>MentraOS: Optionally display hint
        end
    else Memory Recall Route
        Application->>Memory: MemoryRecall(query)
        Memory->>Memory: Query Honcho peer
        Memory->>BAML: Format retrieved context
        Memory->>MentraOS: Display personalized response
    else Default Route
        Application->>Memory: Capture transcription in session
        Memory->>Memory: Store in Honcho session buffer
    end

    User->>MentraOS: Session ends
    MentraOS->>Application: onSessionEnd(sessionId)
    Application->>BAML: SummarizeSession(transcript)
    BAML-->>Application: Session summary
    Application->>Convex: Store sessionSummary
    alt Pro User
        Convex->>Convex: Trigger daily synthesis if needed
        Convex->>BAML: SynthesizeDay(sessionSummaries, memory)
        BAML-->>Convex: Daily summary
        Convex->>Convex: Update dailySummaries
    end
```
| Feature | Free | Pro |
|---|---|---|
| Weather | ✅ | ✅ |
| Knowledge (Q&A) | ✅ | ✅ |
| Maps & Nearby Places | ❌ | ✅ |
| Web Search | ❌ | ✅ |
| Memory Capture & Recall | ❌ | ✅ |
| Session Notes via Email | ❌ | ✅ |
| Proactive Hints | ❌ | ✅ |
| Daily Summaries | ❌ | ✅ |
| Interactive Chat with Memory | ❌ | ✅ |
| Personalized Responses | ❌ | ✅ |
- BAML-powered routing: Automatically classifies queries into categories (weather, search, maps, memory, knowledge, chat, passthrough)
- Two-phase classification: Distinguishes user-directed speech from ambient content (TV, podcasts, background speech)
- Context-aware routing: Routes questions about user's personal information to memory system
- Extensible routing: Easy to add new categories and handlers with priority-based disambiguation
- Route categories:
  - `WEATHER`: Current/upcoming weather for specific locations
  - `WEB_SEARCH`: News, current events, time-sensitive information (Pro only)
  - `MAPS`: Nearby businesses, addresses, directions (Pro only)
  - `KNOWLEDGE`: General factual information
  - `MEMORY_RECALL`: Personal information, preferences, history (Pro only)
  - `MEMORY_CAPTURE`: Commands to store new personal facts or reminders (Pro only)
  - `CHAT`: Interactive conversation with memory context (Pro only)
  - `PASSTHROUGH`: Ambient speech gating (no action, optional hint generation)
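The routing dispatch described above can be sketched as a plain switch with a Pro gate in front of it. This is an illustrative, hypothetical rendering — the names (`dispatch`, `PRO_ONLY`, the returned flow names) are not the actual implementation in `transcriptionFlow.ts`:

```typescript
// Hypothetical sketch of route dispatch with Pro gating; names are
// illustrative, not the project's actual code.
type Router =
  | "WEATHER" | "WEB_SEARCH" | "MAPS" | "KNOWLEDGE"
  | "MEMORY_RECALL" | "MEMORY_CAPTURE" | "CHAT" | "PASSTHROUGH";

// Routes the feature table marks as Pro-only.
const PRO_ONLY: ReadonlySet<Router> = new Set<Router>([
  "WEB_SEARCH", "MAPS", "MEMORY_RECALL", "MEMORY_CAPTURE", "CHAT",
]);

function dispatch(route: Router, isPro: boolean): string {
  // Gate Pro-only routes before invoking any handler.
  if (PRO_ONLY.has(route) && !isPro) return "upgrade_prompt";
  switch (route) {
    case "WEATHER":        return "startWeatherFlow";
    case "WEB_SEARCH":     return "startSearchFlow";
    case "MAPS":           return "startMapsFlow";
    case "KNOWLEDGE":      return "startKnowledgeFlow";
    case "MEMORY_RECALL":
    case "MEMORY_CAPTURE": return "startMemoryFlow";
    case "CHAT":           return "startChatFlow";
    case "PASSTHROUGH":    return "ambientGate";
  }
}
```

Checking the gate before the switch keeps subscription logic in one place, so adding a new route only requires a new case (and optionally a set entry).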
- Weather Tool: OpenWeatherMap integration with location services and user preference support
- Web Search Tool: Tavily-powered real-time web search (Pro only)
- Maps Tool: Google Places API for location queries (Pro only)
- Memory Tool: Honcho-powered persistent context and personalization (Pro only)
- Knowledge Tool: General knowledge questions via AI
- Async flow orchestration: Non-blocking handlers with stale request protection
- Loading state management: User-friendly loading/success/error states
- Location integration: Automatic location requests where needed
- Timeout handling: Graceful fallbacks for slow/failed operations
- Pro feature gating: Automatic subscription checks for premium features
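The timeout-with-graceful-fallback behavior above can be expressed with a small `Promise.race` helper. This is a hedged sketch of the idea, not the project's actual utility code (`withTimeout` and `demo` are hypothetical names):

```typescript
// Hypothetical helper: resolve with a fallback value instead of rejecting
// when an operation (e.g. a location request) is too slow.
function withTimeout<T>(op: Promise<T>, ms: number, fallback: T): Promise<T> {
  const timeout = new Promise<T>((resolve) => {
    // Resolving (not rejecting) lets slow API calls degrade gracefully.
    setTimeout(() => resolve(fallback), ms);
  });
  return Promise.race([op, timeout]);
}

// Usage: a fetch that takes 50 ms loses the race against a 10 ms budget.
async function demo(): Promise<string> {
  const slow = new Promise<string>((r) => setTimeout(() => r("location"), 50));
  return withTimeout(slow, 10, "fallback");
}
```

Because the timeout resolves rather than rejects, the caller never needs a try/catch just to handle slowness — it simply receives the fallback.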
- BAML prompt engineering: Converts raw tool outputs to concise, readable responses
- Token-efficient prompts: Minimal data transfer with maximum information density
- Consistent response format: ≤3 lines, ≤10 words per line for optimal readability
- Contextual formatting: Responses tailored to user's query and personality
- Memory-enhanced responses: Personalized insights based on user context (Pro feature)
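The "≤3 lines, ≤10 words per line" budget can be enforced as a cheap post-check on formatter output. The function below is a hypothetical illustration of that constraint, not part of the actual BAML prompts:

```typescript
// Hypothetical check for the display budget described above
// (at most 3 lines, at most 10 words per line).
function fitsDisplayBudget(text: string, maxLines = 3, maxWords = 10): boolean {
  const lines = text.split("\n").filter((l) => l.trim().length > 0);
  if (lines.length > maxLines) return false;
  // Every remaining line must stay within the per-line word budget.
  return lines.every((l) => l.trim().split(/\s+/).length <= maxWords);
}
```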
1. Within-Session Working Memory (Honcho)
   - Uses stable Mentra sessionId for consistent session tracking
   - Buffers transcripts during active sessions
   - Automatically summarized on session stop
   - Fast context retrieval for real-time interactions
2. Cross-Session Peer Facts (Honcho)
   - Persistent peer representation and biographical cards
   - Deductive conclusions about user preferences
   - Multi-peer support for family/group conversations
   - Enables "who is this person" understanding
3. Session Index & Daily Summaries (Convex)
   - Structured session summaries stored in Convex
   - Daily synthesis via cron (3am UTC)
   - Enables "what did we discuss last week" queries
   - Pro-only feature for LLM operations
   - Web dashboard for memory browsing
- Honcho integration: Persistent memory across sessions with session-aware context
- Dual-memory system: Combines real-time session context with persistent daily summaries
- Peer-based conversations: Dedicated "diatribe" peer for raw transcription storage
- Session boundaries: Automatic session detection and summarization
- Memory-aware responses: Leverages stored context for personalized interactions (Pro feature)
Clairvoyant implements a sophisticated memory injection system that enhances tool responses with personalized context. See docs/MEMORY_INJECTION_PATTERN.md for complete documentation.
Two Implementation Layers:
- Single-Layer (Post-Fetch): Memory enhances response formatting only (Weather, Knowledge, Maps)
- Dual-Layer (Pre + Post-Fetch): Memory enhances both query generation AND response formatting (Web Search)
Memory Context Structure:
- peerCard: Biographical facts (name, age, location, family, etc.)
- peerRepresentation: Explicit facts and deductive conclusions about user preferences
- Temporal context: Recent queries with timestamps for "what" and "when" awareness
Example Memory-Enhanced Response:
- Without memory: "Today's vibe: 9/10, sunny with a breeze feels nice!"
- With memory: "Today's vibe: 9/10, Ajay - perfect for your morning run!" (uses biographical fact)
- With deductive memory: "Today's vibe: 9/10, Ajay - chilly but you like cold weather!" (uses deductive conclusion)
- Full-page ChatPage: Mobile-optimized chat component
- Memory-enhanced responses: Powered by Groq (gpt-oss-120b) with memory context injection
- Session-aware chat: Access to current session context for contextual conversations
- Persistent messages: Chat history stored in Convex with user/date indexing
- Auto-resynthesis: Daily summaries automatically updated when chat sessions end
- "Note This" feature: Email session notes directly from the chat interface (Pro feature)
- Automatic analysis: BAML-powered `InterpretChatMessage` for structured understanding
- Fact extraction: Automatically identifies and stores new user facts from chat
- Topic tracking: Detects conversation topics for summary updates
- Summary updates: Intelligently decides when to update daily summaries
- Automatic boundaries: Detects session start/stop via MentraOS events
- Session summaries: Groq-powered summarization of session transcripts
- Quality filtering: Ignores incomplete sentences, filler words, and transcription errors
- Temporal context: Timestamps and context window tracking
- Two-phase classification: Phase 1 gates ambient speech, Phase 2 queries memory for relevant knowledge
- Smart triggering: Only surfaces hints when relevant knowledge exists
- Non-intrusive: Minimal LLM costs via early filtering and selective generation
- Context-aware: Hints are personalized based on user facts and recent history
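The two-phase flow above can be sketched with stubbed classifiers. In the real project both phases are BAML calls (`ClassifyForHint`, `GenerateHint`); here the classifiers are hypothetical stubs that only illustrate the cost-minimizing control flow:

```typescript
// Sketch of two-phase hint gating with stubbed classifiers.
type Phase1 = { hintable: boolean; topic?: string };

// Phase 1: cheap gate — drop ambient speech before any memory/LLM work.
// (Stub heuristic; the real phase is an LLM classification.)
function classifyForHint(text: string): Phase1 {
  const ambient = /\b(commercial|podcast|tv)\b/i.test(text);
  return ambient ? { hintable: false } : { hintable: true, topic: text };
}

// Phase 2: only runs for hintable speech, and only emits a hint
// when relevant user facts exist.
function maybeHint(text: string, userFacts: string[]): string | null {
  const phase1 = classifyForHint(text);
  if (!phase1.hintable) return null; // early, zero-cost exit
  const relevant = userFacts.find((f) =>
    phase1.topic!.toLowerCase().includes(f.toLowerCase()),
  );
  return relevant ? `Hint: you mentioned ${relevant} before` : null;
}
```

The early return in phase 1 is what keeps LLM costs low: most ambient speech never reaches the memory query or generation step.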
- Subscription management: Powered by Polar for payment processing
- Pro tier: Unlocks advanced features including:
- Web search functionality
- Interactive chat with memory context
- Memory-enhanced personalized responses
- Daily summary synthesis and browsing
- Advanced context awareness
- Session note export
- Free tier: Basic features including weather and general knowledge Q&A
- Subscription state: Stored in Convex and checked before Pro feature access
- User attempts to use Pro feature (e.g., web search)
- Application checks subscription status via Convex
- Convex queries Polar for current subscription
- If Pro: Feature enabled with full memory personalization
- If Free: User sees upgrade prompt
- Usage tracking: All tool invocations recorded in Convex
- Daily aggregation: Counts per tool per day
- Analytics dashboard: Web app displays usage charts and trends
- Data structure: the `toolInvocations` table tracks `userId`, `router`, `count`, and `date`
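The daily aggregation can be pictured as a per-user sum over (router, date) pairs. This is a hypothetical in-memory illustration of the record shape described above, not the actual Convex query:

```typescript
// Illustrative aggregation over tool-invocation records mirroring the
// documented shape (userId, router, count, date); hypothetical helper.
interface ToolInvocation {
  userId: string;
  router: string;
  count: number;
  date: string; // YYYY-MM-DD
}

// Sum counts per (date, router) for one user — the shape a usage chart needs.
function aggregateDaily(
  rows: ToolInvocation[],
  userId: string,
): Map<string, number> {
  const totals = new Map<string, number>();
  for (const r of rows) {
    if (r.userId !== userId) continue;
    const key = `${r.date}:${r.router}`;
    totals.set(key, (totals.get(key) ?? 0) + r.count);
  }
  return totals;
}
```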
- Weather unit: Celsius or Fahrenheit preference
- Default location: Optional default location for weather queries
- Preference storage: Persisted in the Convex `preferences` table
- Settings UI: Web dashboard for preference management
- Text wall displays: Clean, timed text overlays in MentraOS interface
- View management: Automatic return to main view after responses
- Duration optimization: 3-second display timing for optimal readability
- Error state handling: Clear error messaging with automatic recovery
- Web dashboard: React-based admin interface for settings and analytics
- Bun runtime (v1.3.3+)
- Convex account and project
- Polar account for subscriptions
- ngrok for tunneling (development)
- API keys for: MentraOS, Groq, OpenAI, OpenWeatherMap, Tavily, Google Maps, Honcho
1. Install dependencies:

   ```bash
   bun install
   ```

2. Set up Convex:

   ```bash
   npx convex dev
   ```

   This will create a `.env.local` file with your Convex deployment URL.

3. Set up Polar:
   - Create a Polar account and organization
   - Create a product with a subscription plan
   - Get your organization token
   - Configure Polar in the Convex dashboard

4. Create environment files:

   Root `.env.local`:

   ```bash
   # MentraOS
   PACKAGE_NAME=your-package-name
   MENTRAOS_API_KEY=your-mentraos-api-key

   # AI Services
   GROQ_API_KEY=your-groq-api-key
   OPENAI_API_KEY=your-openai-api-key

   # External APIs
   OPENWEATHERMAP_API_KEY=your-weather-api-key
   TAVILY_API_KEY=your-tavily-api-key
   GOOGLE_MAPS_API_KEY=your-google-maps-api-key

   # Memory
   HONCHO_API_KEY=your-honcho-api-key

   # Convex (auto-generated by `npx convex dev`)
   CONVEX_URL=your-convex-deployment-url

   # Polar
   POLAR_ORGANIZATION_TOKEN=your-polar-org-token

   # API Server
   API_PORT=3001
   AUTH_PUBLIC_KEY_PEM=your-auth-public-key
   AUTH_KEY_ID=your-auth-key-id
   ```

   Application `.env.local` (optional overrides):

   ```bash
   PORT=3000
   ```

5. Generate the BAML client:

   ```bash
   npx baml-cli generate
   ```

6. Sync Polar products to Convex:

   ```bash
   # Via the Convex dashboard or API
   # Products are synced automatically when configured
   ```

To run the stack:

1. Start the Convex backend:

   ```bash
   bun run database
   # or
   npx convex dev
   ```

2. Start the application:

   ```bash
   bun run app:dev
   # or
   cd apps/application && bun run dev
   ```

3. Start the API server (optional):

   ```bash
   bun run api:dev
   # or
   cd apps/api && bun run dev
   ```

4. Start the web dashboard (optional):

   ```bash
   bun run web:dev
   # or
   cd apps/web && bun run dev
   ```

5. Create a tunnel to expose your local server:

   ```bash
   ngrok http --url=your-ngrok-url.ngrok-free.app 3000
   ```

To build:

```bash
# Build all apps
bun run build

# Build specific app
bun run web:build
```

Clairvoyant can be deployed to Railway (recommended for long-running services) or Vercel (recommended for the web app and API serverless functions).
| Service | Recommended Platform | Alternative Platform | Notes |
|---|---|---|---|
| Application | Railway | - | Long-running MentraOS service |
| API | Railway | Vercel | Requires code changes for Vercel |
| Web | Vercel | Railway | Static site, works on both |
Railway is ideal for deploying the application and API services as they are long-running processes. The web app can also be deployed to Railway as a static site.
- Railway account
- Railway CLI installed: `npm i -g @railway/cli`
- All environment variables configured
1. Create a new Railway project:

   ```bash
   railway login
   railway init
   ```

2. Link to an existing service (if using railway.json):

   ```bash
   railway link
   ```

3. Configure the service:
   - Set the Root Directory to the project root
   - The `railway.application.json` file will be detected automatically
   - Or configure manually:
     - Build Command: `bun install`
     - Start Command: `bun run --cwd apps/application start`
     - Watch Patterns: `apps/application/**`, `packages/convex/**`, `baml_src/**`, `package.json`, `bun.lock`

4. Set environment variables:

   ```bash
   railway variables set PACKAGE_NAME=your-package-name
   railway variables set MENTRAOS_API_KEY=your-mentraos-api-key
   railway variables set GROQ_API_KEY=your-groq-api-key
   railway variables set OPENAI_API_KEY=your-openai-api-key
   railway variables set OPENWEATHERMAP_API_KEY=your-weather-api-key
   railway variables set TAVILY_API_KEY=your-tavily-api-key
   railway variables set GOOGLE_MAPS_API_KEY=your-google-maps-api-key
   railway variables set HONCHO_API_KEY=your-honcho-api-key
   railway variables set CONVEX_URL=your-convex-deployment-url
   railway variables set PORT=3000
   ```

5. Deploy:

   ```bash
   railway up
   ```

6. Get a public domain:

   ```bash
   railway domain
   ```

   Copy the generated domain and configure it in your MentraOS package settings.
1. Create a new service in your Railway project:

   ```bash
   railway service create api
   ```

2. Link the service:

   ```bash
   railway link --service api
   ```

3. Configure the service:
   - The `railway.api.json` file will be detected automatically
   - Or configure manually:
     - Build Command: `bun install`
     - Start Command: `bun run --cwd apps/api start`
     - Watch Patterns: `apps/api/**`, `packages/convex/**`, `package.json`, `bun.lock`

4. Set environment variables:

   ```bash
   railway variables set MENTRAOS_API_KEY=your-mentraos-api-key
   railway variables set CONVEX_URL=your-convex-deployment-url
   railway variables set AUTH_PUBLIC_KEY_PEM=your-auth-public-key
   railway variables set AUTH_PRIVATE_KEY_PEM=your-auth-private-key
   railway variables set AUTH_KEY_ID=your-auth-key-id
   railway variables set API_PORT=3001
   railway variables set RAILWAY_PUBLIC_DOMAIN=$(railway domain)
   ```

5. Deploy:

   ```bash
   railway up
   ```
1. Create a new service:

   ```bash
   railway service create web
   railway link --service web
   ```

2. Configure the service:
   - Root Directory: Leave as project root (monorepo)
   - Build Command: `bun install && bun run --cwd apps/web build`
   - Start Command: `bun run --cwd apps/web preview --port $PORT --host`
   - Output Directory: `apps/web/dist`
   - Watch Patterns: `apps/web/**`, `packages/convex/**`, `package.json`, `bun.lock`

3. Set environment variables:

   ```bash
   railway variables set VITE_CONVEX_URL=your-convex-deployment-url
   railway variables set VITE_API_BASE_URL=https://your-api-service.railway.app
   ```

4. Deploy:

   ```bash
   railway up
   ```
Alternative: Railway also supports static site hosting. You can configure it to serve the apps/web/dist directory directly without a preview server.
The project includes Railway configuration files:
- `railway.application.json`: Application service configuration
- `railway.api.json`: API service configuration
These files define build commands, start commands, and watch patterns for automatic deployments.
Vercel is ideal for deploying the web app (static site) and API (serverless functions). The application service should be deployed to Railway as it requires a long-running process.
- Vercel account
- Vercel CLI installed: `npm i -g vercel`
- All environment variables configured
1. Navigate to the web app directory:

   ```bash
   cd apps/web
   ```

2. Deploy to Vercel:

   ```bash
   vercel
   ```

3. Configure build settings:
   - Framework Preset: Vite
   - Build Command: `cd ../.. && bun install && bun run --cwd apps/web build`
   - Output Directory: `apps/web/dist`
   - Install Command: `cd ../.. && bun install`

4. Set environment variables in the Vercel dashboard:
   - `VITE_CONVEX_URL`: Your Convex deployment URL
   - `VITE_API_BASE_URL`: Your API base URL (Railway or Vercel)

5. For production deployment:

   ```bash
   vercel --prod
   ```
Note: The API service uses Elysia with Bun runtime and is optimized for long-running processes. For best compatibility, we recommend deploying the API to Railway. However, Vercel deployment is possible with modifications.
Option 1: Railway (Recommended) Follow the Railway API deployment instructions above for the best experience.
Option 2: Vercel (Requires Code Changes)
To deploy to Vercel, you'll need to modify the API to export a serverless handler:
1. Create a Vercel handler file (`apps/api/src/vercel.ts`):

   ```typescript
   import { app } from "./index";
   export default app.handle;
   ```

2. Create `vercel.json` in the project root:

   ```json
   {
     "version": 2,
     "builds": [{ "src": "apps/api/src/vercel.ts", "use": "@vercel/node" }],
     "routes": [{ "src": "/(.*)", "dest": "apps/api/src/vercel.ts" }]
   }
   ```

3. Deploy from the project root:

   ```bash
   vercel
   ```

4. Set environment variables:

   ```bash
   vercel env add MENTRAOS_API_KEY
   vercel env add CONVEX_URL
   vercel env add AUTH_PUBLIC_KEY_PEM
   vercel env add AUTH_PRIVATE_KEY_PEM
   vercel env add AUTH_KEY_ID
   vercel env add PUBLIC_BASE_URL
   ```

   Note: The API code already includes Vercel origin detection for CORS. `PUBLIC_BASE_URL` will be set automatically from `VERCEL_URL` if not explicitly provided. Remove the `.listen()` call in production when using Vercel.

5. For production:

   ```bash
   vercel --prod
   ```
Important: Since the API uses Bun-specific features, you may need to use a Node.js-compatible runtime or consider Railway for better Bun support.
If deploying from the monorepo root, configure Vercel to recognize the workspace structure:
1. Create `vercel.json` in the project root (for the web app):

   ```json
   {
     "buildCommand": "cd apps/web && bun install && bun run build",
     "outputDirectory": "apps/web/dist",
     "installCommand": "bun install",
     "framework": "vite"
   }
   ```

2. Or use the Vercel dashboard:
   - Set the Root Directory to `apps/web`
   - Configure build settings as above
Recommended Setup:
- Application: Railway (long-running MentraOS service)
- API: Railway or Vercel (serverless functions)
- Web: Vercel (static site with CDN)
Alternative Setup:
- All services: Railway (simpler to manage, single platform)
Ensure all required environment variables are set in your deployment platform:
Application Service:

- `PACKAGE_NAME`
- `PORT`
- `MENTRAOS_API_KEY`
- `GROQ_API_KEY`
- `OPENAI_API_KEY`
- `OPENWEATHERMAP_API_KEY`
- `TAVILY_API_KEY`
- `GOOGLE_MAPS_API_KEY`
- `HONCHO_API_KEY`
- `CONVEX_URL`

API Service:

- `API_PORT` (or `PORT`)
- `MENTRAOS_API_KEY`
- `CONVEX_URL`
- `AUTH_PUBLIC_KEY_PEM`
- `AUTH_PRIVATE_KEY_PEM`
- `AUTH_KEY_ID`
- `PUBLIC_BASE_URL` (or `RAILWAY_PUBLIC_DOMAIN` for Railway)
- `ALLOWED_ORIGINS` (optional, comma-separated)

Web App:

- `VITE_CONVEX_URL`
- `VITE_API_BASE_URL`
1. Update MentraOS package settings:
   - Set the application service URL (Railway domain)
   - Configure webhook endpoints if needed

2. Update the Convex environment:
   - Ensure `CONVEX_URL` matches your deployment
   - Verify the Polar integration is configured

3. Test endpoints:
   - Application: Health check via MentraOS
   - API: Test `/health` or the session endpoints
   - Web: Verify the dashboard loads and connects to Convex

4. Monitor logs:
   - Railway: `railway logs`
   - Vercel: Dashboard or `vercel logs`
Clairvoyant implements a sophisticated memory injection system that enables personalized, context-aware responses. This pattern is documented in detail in docs/MEMORY_INJECTION_PATTERN.md.
When to Use:
- Single-Layer Pattern: For tools that fetch data first, then personalize the response (Weather, Knowledge, Maps)
- Dual-Layer Pattern: For tools that benefit from query enhancement before fetching (Web Search)
Implementation Steps:
1. Add `memorySession` and `peers` parameters to the handler
2. Fetch memory context after data retrieval (single-layer) or before it (dual-layer)
3. Extract the user's name, facts, and deductive conclusions
4. Pass the memory context to the BAML formatter
5. Update the BAML function to accept an optional `MemoryContextLite` parameter
6. Regenerate the BAML client
Example (Weather Handler):

```typescript
// Fetch memory context after weather data
let memoryContext = null;
if (memorySession && peers && isPro) {
  const contextData = await memorySession.getContext({
    peerTarget: diatribePeer.id,
    lastUserMessage: "weather",
  });
  // Extract userName, userFacts, deductiveFacts
  memoryContext = { userName, userFacts, deductiveFacts };
}

// Pass to BAML formatter
const result = await b.SummarizeWeatherFormatted(weatherData, memoryContext);
```

Key Principles:
- Graceful degradation: Works without memory
- Pro-only feature: Memory injection requires active subscription
- Tool-specific filtering: Only relevant facts are used
- Temporal awareness: Recent queries included with timestamps
- Subtle integration: LLM weaves memories naturally
1. Create a Polar organization:
   - Sign up at polar.sh
   - Create an organization
   - Get your organization token

2. Create products:
   - Create a "Pro" subscription product
   - Set pricing (monthly/yearly)
   - Configure webhook endpoints (optional)

3. Configure Convex:
   - Add `POLAR_ORGANIZATION_TOKEN` to the Convex environment variables
   - Run the `syncProductsFromPolar` action to sync products

4. User flow:
   - Users sign up via the web dashboard
   - Checkout via Polar-generated links
   - Subscription status is checked before Pro features
```typescript
import { checkUserIsPro } from "./core/convex";

const isPro = await checkUserIsPro(mentraUserId);
if (!isPro) {
  // Show upgrade prompt or disable feature
}
```

The application uses an auto-generated BAML client package (`@clairvoyant/baml-client`) that centralizes all BAML function exports. This package is generated from the `baml_src/` definitions.
After modifying any `.baml` files, regenerate the client:

```bash
npx baml-cli generate
```

This creates/updates the `packages/@clairvoyant/baml-client` directory with TypeScript bindings for all BAML functions, exposed as `b.*` in application code.
```typescript
import { b } from "../baml_client";

// Chat interpretation
const interpretation = await b.InterpretChatMessage(userMessage);

// Session summarization
const summary = await b.SummarizeSession(transcript);

// Daily synthesis
const dailySummary = await b.SynthesizeDay(sessionSummaries, memoryContext);

// Two-phase hint generation
const eligibility = await b.ClassifyForHint(text);
if (eligibility.category === HintCategory.HINTABLE) {
  const hint = await b.GenerateHint(eligibility.topic, text, memory);
}
```

See the Clairvoyant Agent Integration Guide for detailed patterns. Quick steps:
1. Create the tool (`apps/application/src/tools/yourTool.ts`):
   - Call the external API
   - Validate with a Zod schema
   - Return a typed response

2. Add a BAML route (`baml_src/route.baml`):
   - Add an enum value to `Router`
   - Update the routing prompt
   - Add test cases

3. Create the handler (`apps/application/src/handlers/yourTool.ts`):
   - Use `showTextDuringOperation` for UX
   - Call the tool
   - Format via BAML
   - Display results
   - (Optional) Add memory injection

4. Wire the routing (`apps/application/src/transcriptionFlow.ts`):
   - Add a case to the switch statement
   - Call the handler with appropriate parameters

5. Regenerate BAML:

   ```bash
   npx baml-cli generate
   ```

6. Add Pro gating (if needed):
   - Check `isPro` status
   - Show an upgrade prompt for free users
- PCM to WAV conversion: Handles audio format conversion for Groq API
- Voice Activity Detection: MentraOS built-in VAD for start/stop triggers
- Buffer management: Efficient audio chunk concatenation
- Temporary file handling: Safe creation and cleanup of audio files
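The PCM-to-WAV step amounts to prepending a 44-byte RIFF header to the raw samples. The project uses the `wavefile` package for this; the sketch below is a minimal, self-contained illustration of what that conversion produces (16-bit mono PCM assumed):

```typescript
import { Buffer } from "node:buffer";

// Minimal illustration of PCM→WAV: prepend a 44-byte RIFF header to raw
// 16-bit mono PCM. The project itself uses the `wavefile` package.
function pcmToWav(pcm: Buffer, sampleRate = 16_000): Buffer {
  const channels = 1;
  const bitsPerSample = 16;
  const byteRate = sampleRate * channels * (bitsPerSample / 8);
  const header = Buffer.alloc(44);
  header.write("RIFF", 0);
  header.writeUInt32LE(36 + pcm.length, 4);    // file size minus 8
  header.write("WAVE", 8);
  header.write("fmt ", 12);
  header.writeUInt32LE(16, 16);                // fmt chunk size
  header.writeUInt16LE(1, 20);                 // audio format: PCM
  header.writeUInt16LE(channels, 22);
  header.writeUInt32LE(sampleRate, 24);
  header.writeUInt32LE(byteRate, 28);
  header.writeUInt16LE(channels * (bitsPerSample / 8), 32); // block align
  header.writeUInt16LE(bitsPerSample, 34);
  header.write("data", 36);
  header.writeUInt32LE(pcm.length, 40);        // data chunk size
  return Buffer.concat([header, pcm]);
}
```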
- Groq Whisper: High-quality speech-to-text transcription
- Multiple AI clients: OpenAI GPT-4o, Groq models via BAML configuration
- Structured responses: JSON mode for reliable AI output parsing
- Token optimization: Minimal data transfer with maximum information density
- Convex: Real-time database for user data, preferences, and analytics
- Polar: Subscription management and payment processing
- Honcho: Persistent memory and context management
- Elysia API: RESTful API server for session management
- Stale request protection: WeakMap-based runId tracking prevents outdated responses
- Timeout handling: Graceful fallbacks for location services and API calls
- Error state management: User-friendly error messages with automatic recovery
- Rate limiting: Built-in API rate limiting to prevent quota exhaustion
- Subscription validation: Automatic Pro status checks with fallbacks
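The stale-request protection above can be sketched with run-ID bookkeeping: each new flow bumps a counter, and a response is dropped if a newer run started in the meantime. The real code keys a WeakMap by session object; the Map keyed by session ID below is a simplified, hypothetical illustration:

```typescript
// Sketch of stale-request protection via run IDs (simplified: the real
// implementation uses a WeakMap keyed by the session object).
const runIds = new Map<string, number>();

// Called when a new flow starts; supersedes any in-flight run.
function startRun(sessionId: string): number {
  const next = (runIds.get(sessionId) ?? 0) + 1;
  runIds.set(sessionId, next);
  return next;
}

// Called before displaying a result; true when a newer run took over.
function isStale(sessionId: string, runId: number): boolean {
  return runIds.get(sessionId) !== runId;
}
```

A handler captures `startRun(...)` before awaiting its tool call and checks `isStale(...)` before displaying, so a slow earlier query can never overwrite the result of a newer one.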
```bash
# Run all apps in dev mode
bun run dev

# Run specific app
bun run app:dev   # Application
bun run api:dev   # API server
bun run web:dev   # Web dashboard

# Build all
bun run build

# Lint all
bun run lint

# Format code
bun run format

# Type check
bun run check
```

```bash
# Run BAML tests
bunx baml-cli test

# Run application tests
cd apps/application && bun test
```

- Biome: Linting and formatting (configured in `biome.json`)
- TypeScript: Type safety across all packages
- Turbo: Monorepo task orchestration
Core platform:

- `@mentra/sdk`: MentraOS application framework
- `@honcho-ai/sdk`: Persistent memory and context management
- `convex`: Real-time backend database
- `@convex-dev/polar`: Polar subscription integration

AI and search:

- `@boundaryml/baml`: AI prompt engineering and routing framework
- `openai`: OpenAI API client
- `@tavily/core`: Tavily web search API
- `groq-sdk`: Groq API client for Whisper and inference

Audio:

- `wavefile`: Audio format conversion utilities
- Standard Node.js modules: `fs`, `path`, `crypto`

Web dashboard:

- `react`: UI framework
- `vite`: Build tool
- `tailwindcss`: Styling
- `recharts`: Charting library

Tooling:

- `bun`: JavaScript runtime and package manager
- `turbo`: Monorepo build system
- `typescript`: Type safety
- `zod`: Runtime validation
When adding new tools or features:
- Follow the established tool/handler pattern (see AGENTS.md)
- Add appropriate BAML prompts and routing
- Include proper error handling and UX states
- Test routing logic with BAML test cases
- Regenerate the BAML client after changes (`npx baml-cli generate`)
- Consider memory injection for personalization (Pro feature)
- Add Pro gating if feature is premium
- Update this README if adding new capabilities
- Follow the memory injection pattern for context-aware responses
- For chat features: leverage `InterpretChatMessage` for auto-parsing
- For session features: use session summarization and daily synthesis patterns
- For hints/passive features: implement two-phase classification to minimize costs
- Memory Injection Pattern: Complete guide to adding memory personalization to tools
- Clairvoyant Agent Integration Guide: Pattern for adding new tools and flows
See LICENSE.md for details.
- Pass the user ID to the Honcho session to enable user-specific memories
- Shared memory between users
- Investigate a speaker diarization module
- Deploy to Production