Multi-agent discussion platform where multiple LLMs discuss a topic across rounds, with a user-designated Leader who summarizes each round and produces a final consensus, and a First Responder who speaks first each round. All agents are neutral — no pre-assigned positions.
Runs as both a standalone web app and a VS Code extension from a shared codebase.
- Pick a topic and select 2+ LLM models (Claude, GPT-4o, Gemini, etc.)
- Assign one model as Leader (summarizes rounds + writes consensus) and one as First Responder (speaks first)
- Choose number of rounds and provider mode (OpenRouter for live calls, or Demo mode)
- Watch the models discuss — each round ends with a leader summary, and the final round produces a consensus
All LLM calls are routed through OpenRouter — one API key gives access to models from Anthropic, OpenAI, Google, and more.
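For reference, a direct call against OpenRouter's OpenAI-compatible chat-completions endpoint looks roughly like this. The endpoint URL and payload shape match OpenRouter's public API; the `buildChatRequest` helper is illustrative and not part of this codebase, which wraps the call in its own provider.

```typescript
// Build a fetch request for OpenRouter's OpenAI-compatible
// chat-completions endpoint. One key covers all routed providers.
// buildChatRequest is an illustrative helper, not from this repo.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function buildChatRequest(apiKey: string, model: string, messages: ChatMessage[]) {
  return {
    url: "https://openrouter.ai/api/v1/chat/completions",
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({ model, messages }),
    },
  };
}

// Usage:
//   const { url, init } = buildChatRequest(key, "openai/gpt-4o",
//     [{ role: "user", content: "Round 1: your opening take?" }]);
//   const res = await fetch(url, init);
```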
```
packages/
  shared/     Core logic: discussion engine, LLM providers, types
  ui/         Platform-agnostic UI: setup screen, discussion screen, styles
  web/        Standalone web app entry point
  extension/  VS Code extension entry point
```
- shared — async generator discussion engine, OpenRouter provider, mock provider, type definitions
- ui — DOM rendering + adapter interface (no framework, no React)
- web — runs the engine in-browser, settings stored in localStorage
- extension — engine runs in the extension host, webview renders the UI via postMessage bridge
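The adapter seam between engine and UI might look roughly like the sketch below. The names (`DiscussionAdapter`, `Message`, `CollectingAdapter`) are assumptions for illustration, not the repo's actual definitions.

```typescript
// Hypothetical sketch of the adapter interface between engine and UI.
interface Message {
  round: number;
  agent: string;  // model slug, e.g. "openai/gpt-4o"
  kind: "turn" | "summary" | "consensus";
  text: string;
}

interface DiscussionAdapter {
  // Each message the engine yields is pushed through here. The web
  // app would call the DOM renderer directly; the extension would
  // serialize the message over postMessage to the webview instead.
  onMessage(msg: Message): void;
  onDone(): void;
}

// A trivial in-memory adapter, e.g. for tests:
class CollectingAdapter implements DiscussionAdapter {
  messages: Message[] = [];
  done = false;
  onMessage(msg: Message) { this.messages.push(msg); }
  onDone() { this.done = true; }
}
```

Because both sides only see this interface, neither the engine nor the UI needs to know whether it is running in a browser tab or behind a webview bridge.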
```
npm install
npm run build
npm run dev:web
```

Opens at http://localhost:3000.
- Open the repo in VS Code
- Press F5 to launch the Extension Development Host
- In the new window: Cmd+Shift+P → "Foxpit: Start Discussion"
Select "OpenRouter (Live)" in the provider toggle and enter your API key. Settings persist in localStorage.
Settings are available under `foxpit.*` in VS Code settings:

- `foxpit.providerMode` — `"openrouter"` or `"mock"` (default: `"mock"`)
- `foxpit.openrouterApiKey` — your OpenRouter API key
Or use the in-panel provider toggle — settings sync to VS Code config automatically.
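In `settings.json`, the same configuration would look like this (the key value is a placeholder, not a real key):

```json
{
  "foxpit.providerMode": "openrouter",
  "foxpit.openrouterApiKey": "<your-openrouter-api-key>"
}
```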
- Claude Sonnet 4 (`anthropic/claude-sonnet-4`)
- Claude Haiku 4.5 (`anthropic/claude-haiku-4.5`)
- GPT-4o (`openai/gpt-4o`)
- GPT-4o Mini (`openai/gpt-4o-mini`)
- Gemini 2.5 Flash (`google/gemini-2.5-flash`)
The discussion engine (packages/shared/src/engine.ts) is an async generator that yields messages one at a time. The UI consumes them via an adapter interface — the web app implements it directly, the extension implements it over postMessage. The engine and UI are fully decoupled.
```
[Setup UI] → DiscussionConfig → [Adapter] → [Engine + Provider] → yields Message → [Adapter] → [Discussion UI]
```
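Concretely, the engine's shape is roughly the following. This is a simplified sketch under assumed names (`DiscussionConfig`, `runDiscussion`, `Provider`); the real fields and prompts in `engine.ts` may differ.

```typescript
// Simplified sketch of an async-generator discussion engine.
interface DiscussionConfig {
  topic: string;
  models: string[];        // 2+ OpenRouter model slugs
  leader: string;          // summarizes each round, writes the consensus
  firstResponder: string;  // speaks first each round
  rounds: number;
}

interface Message {
  round: number;
  agent: string;
  kind: "turn" | "summary" | "consensus";
  text: string;
}

// Provider abstraction: live OpenRouter calls or a mock for demo mode.
type Provider = (model: string, prompt: string) => Promise<string>;

async function* runDiscussion(
  cfg: DiscussionConfig,
  ask: Provider,
): AsyncGenerator<Message> {
  for (let round = 1; round <= cfg.rounds; round++) {
    // First Responder opens, the remaining models follow.
    const order = [
      cfg.firstResponder,
      ...cfg.models.filter((m) => m !== cfg.firstResponder),
    ];
    for (const agent of order) {
      yield { round, agent, kind: "turn", text: await ask(agent, `Round ${round}: ${cfg.topic}`) };
    }
    // The Leader closes each round; the last round yields the consensus.
    const isLast = round === cfg.rounds;
    yield {
      round,
      agent: cfg.leader,
      kind: isLast ? "consensus" : "summary",
      text: await ask(cfg.leader, isLast ? "Write the final consensus." : `Summarize round ${round}.`),
    };
  }
}
```

Because it is an async generator, any adapter can simply `for await` the stream and render each message as it arrives, which is what keeps the engine and the UI decoupled.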
MIT