This project turns fragmented personal health data into a decision-support product. It combines WHOOP and Withings data with analytics, API surfaces, and a chat layer so the user can move from “what happened?” to “what should I do next?” across training, recovery, sleep, and broader day-of planning.
The product goal is not just to collect biometrics. It is to make personal health data more actionable by:
- translating raw records into interpretable trends and coaching-style outputs
- connecting multiple systems into one consistent experience layer
- making the same underlying data accessible through API, dashboard, and conversational UX
- supporting better day-of decisions around training load, recovery, and activity planning
In practice, that means:

- Single source of truth for recovery, sleep, workouts, body composition, and related context
- Faster interpretation via dashboards, derived insights, and scenario-oriented analytics
- More accessible exploration through a chat interface for natural-language questions
- More usable health decisions by framing outputs around actions, not just raw measurements
- Data Integration -- ETL pipelines for WHOOP (recovery, sleep, workouts, cycles) and Withings (weight, body composition, heart rate)
- REST API -- FastAPI backend with interactive Swagger docs
- Analytics Pipeline -- Trend analysis, correlation analysis, and multiple linear regression models for recovery and HRV
- Chat Agent -- LangGraph-based agent for natural language queries against your health data
- Dashboard -- Web UI with charts, MLR coefficient tables, partial correlation charts, and correlation heatmaps
- Telegram Bot -- Optional Telegram transport for the shared agent conversation boundary
- Dashboard for quick review of trends and supporting visualizations
- API for structured access to raw and interpreted outputs
- Chat for question-driven exploration such as:
- “Show me my tennis workouts from 2025”
- “What’s my weight trend over the last 30 days?”
- “How has my recovery been this month?”
- `data` -- Raw health records, context resources, and provider status under `/api/v1/data/*`
- `insights` -- Derived dashboards, analytics, scenarios, plans, and reports under `/api/v1/insights/*`
- `agent` -- Conversational/coaching requests under `/api/v1/agent/*`
- `web` -- Human-facing pages at `/dashboard`, `/analytics`, and `/report`
New integrations should target the canonical namespaces above. Legacy aliases still exist in a few places as temporary compatibility adapters during the migration.
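As a quick illustration of the namespace split, a path can be mapped onto these surfaces by prefix. This helper is not part of the project code (the real app routes via FastAPI routers); it only restates the list above:

```python
def classify_route(path: str) -> str:
    """Map a request path onto the canonical surfaces listed above.

    Purely illustrative -- the prefixes come from the README, and
    anything outside them is treated as a legacy alias or unknown.
    """
    if path.startswith("/api/v1/data/"):
        return "data"
    if path.startswith("/api/v1/insights/"):
        return "insights"
    if path.startswith("/api/v1/agent/"):
        return "agent"
    if path in ("/dashboard", "/analytics", "/report"):
        return "web"
    return "legacy-or-unknown"
```

For example, `classify_route("/api/v1/data/recovery")` returns `"data"`, while a legacy alias like `/workouts/latest` falls through to `"legacy-or-unknown"`.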
WHOOP developer integrations in this repository target the WHOOP v2 API. The app's own route versioning under /api/v1/* is internal product/API namespacing and is separate from the upstream WHOOP developer API version.
```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
uv sync
```

Create a `.env` file with your API credentials:
```env
WHOOP_CLIENT_ID=your_whoop_client_id
WHOOP_CLIENT_SECRET=your_whoop_client_secret
WITHINGS_CLIENT_ID=your_withings_client_id
WITHINGS_CLIENT_SECRET=your_withings_client_secret
WITHINGS_CALLBACK_URL=http://localhost:8766/callback
OPENAI_API_KEY=your_openai_api_key
```

WHOOP uses OAuth 2.0 browser authentication. When first running ingestion, you may be redirected to complete the authorization-code flow in the browser.
If you want Telegram, API, chat UI, and LangSmith UI to share the same conversational and long-term memory, run a local Postgres instance on the Mac mini and add this to .env:
```env
AGENT_POSTGRES_URL=postgresql://postgres:postgres@localhost:5432/whoop_agent?sslmode=disable
AGENT_PERSISTENCE_AUTO_SETUP=true
```

With AGENT_POSTGRES_URL set, the agent will use Postgres-backed checkpointing and long-term memory storage. If it is not set, the agent falls back to in-memory persistence for development/tests.
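The selection logic described above can be sketched as a small pure function. The names `PersistenceConfig` and `load_persistence_config` are assumptions for illustration; the actual project code may structure this differently:

```python
from dataclasses import dataclass


@dataclass
class PersistenceConfig:
    backend: str      # "postgres" or "memory"
    auto_setup: bool  # create tables on startup when True


def load_persistence_config(env: dict) -> PersistenceConfig:
    """Pick the persistence backend the way the README describes:
    Postgres-backed checkpointing when AGENT_POSTGRES_URL is set,
    otherwise in-memory persistence for development/tests.
    """
    url = (env.get("AGENT_POSTGRES_URL") or "").strip()
    auto = env.get("AGENT_PERSISTENCE_AUTO_SETUP", "").lower() == "true"
    if url:
        return PersistenceConfig(backend="postgres", auto_setup=auto)
    return PersistenceConfig(backend="memory", auto_setup=False)
```

With both variables set as shown, this resolves to the Postgres backend with auto-setup enabled; with an empty environment it falls back to in-memory persistence.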
Example local startup with Docker:
```bash
docker run --name whoop-agent-postgres \
  -e POSTGRES_USER=postgres \
  -e POSTGRES_PASSWORD=postgres \
  -e POSTGRES_DB=whoop_agent \
  -p 5432:5432 \
  -d postgres:16
```

Or use the built-in helper:

```bash
make postgres-up
```

Or with Homebrew services:
```bash
brew install postgresql@16
brew services start postgresql@16
createdb whoop_agent
```

Then run ingestion:

```bash
make etl
# or, for a full historical backfill:
make etl-full
```

These are the canonical ingestion commands. `make run` is still available as a convenience launcher, but it mixes ETL and server startup in one interactive flow.
```bash
make server
```

The API server exposes the canonical data, insights, and agent surfaces.
```bash
make analytics
```

Use this when you want to materialize analytics and insight outputs ahead of time.
```bash
make chat
```

The chat UI runs at http://localhost:7860.
```bash
make langgraph-dev
```

This is for development and debugging workflows. It is not a separate product surface and should not be treated as the public agent API.
- Swagger UI: http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc
- OpenAPI tags: `data`, `insights`, and `agent`
Once AGENT_POSTGRES_URL is configured, you can test durable shared memory end-to-end like this:
- Start the API:

  ```bash
  make server
  ```

- Start a client surface such as Telegram or the chat UI:

  ```bash
  make telegram-bot
  # or
  make chat
  ```

- In a coaching conversation, tell the agent something durable such as:
  - “Remember that I’m training for a half marathon in October.”
  - “Remember that I prefer blunt feedback.”
- In a later message or from another client surface, ask something that should use that memory:
  - “What should I focus on this week?”
  - “What do you remember about my current goal?”
- Restart the app process and repeat the follow-up question. With Postgres configured, the memory and thread state should survive the restart.
For API testing, you can also hit the agent routes directly:
```bash
curl -X POST http://localhost:8000/api/v1/agent/messages \
  -H 'Content-Type: application/json' \
  -d '{
    "message": "Remember that I am training for Hyrox in September.",
    "user_id": "manual-test-user"
  }'
```

Then ask a follow-up with the same `user_id`:
```bash
curl -X POST http://localhost:8000/api/v1/agent/messages \
  -H 'Content-Type: application/json' \
  -d '{
    "message": "What should my training priority be?",
    "user_id": "manual-test-user"
  }'
```

Primary commands:

- `make etl` -- Canonical incremental ingestion command
- `make etl-full` -- Canonical full-history ingestion command
- `make server` -- Canonical FastAPI server for the `data`, `insights`, and `agent` surfaces
- `make chat` -- Canonical Gradio chat UI backed by the shared conversation boundary
- `make telegram-bot` -- Telegram bot transport backed by the shared conversation boundary
- `make analytics` -- Canonical analytics materialization command
- `make langgraph-dev` -- Development-only LangGraph tooling
- `uv run whoop-withings-auth` -- Canonical Withings re-auth utility

Convenience launchers:

- `make run` / `uv run whoop-start` -- Interactive launcher that combines ETL and server flows
- `make dev-all` -- Combined FastAPI + LangGraph dev helper
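For scripted checks, the two curl calls above can be wrapped in a small stdlib-only Python client. This is an illustrative sketch, not project code; the response schema is not specified here, so the reply is returned as a raw dict:

```python
import json
import urllib.request

API_URL = "http://localhost:8000/api/v1/agent/messages"


def build_agent_request(message: str, user_id: str) -> bytes:
    """Serialize the same JSON body the curl examples send."""
    return json.dumps({"message": message, "user_id": user_id}).encode("utf-8")


def send_agent_message(message: str, user_id: str) -> dict:
    """POST a message to the agent surface (requires `make server` running)."""
    req = urllib.request.Request(
        API_URL,
        data=build_agent_request(message, user_id),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Reusing the same `user_id` across calls is what lets the follow-up question draw on the earlier "Remember that..." message.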
Use the primary commands for docs, automation, and repeatable workflows. Treat the convenience launchers as shortcuts rather than the canonical product entrypoints.
The LangChain Telegram page linked in some examples is a document loader for ingesting Telegram data; it is not the transport used to expose this agent over Telegram. In this repository, Telegram is an optional adapter over the same shared conversation boundary used by the API and Gradio chat UI.
Add the Telegram bot token to .env:
```env
TELEGRAM_BOT_TOKEN=your_botfather_token_here
```

The bot token is a secret. Do not paste it into chat, logs, screenshots, or source control. If it is ever exposed, rotate it in @BotFather and update `.env`.
Start the API and the Telegram bot:
```bash
make server
make telegram-bot
```

Message the bot in a private Telegram chat, then use `/whoami` to see your Telegram `user_id` and `chat_id`. In a 1:1 bot chat these values may be identical -- that is normal. After that, restrict the bot to your account by setting:
```env
TELEGRAM_ALLOWED_USER_IDS=123456789
TELEGRAM_ALLOWED_CHAT_IDS=123456789
```

Restart the bot after updating `.env` so the allowlists take effect.
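The allowlist behavior can be sketched as follows. The function names are assumed for illustration (the adapter's real implementation isn't shown here), but the semantics match the README: an empty allowlist leaves that dimension unrestricted, and once set, only matching IDs get through:

```python
def parse_allowlist(raw: str) -> set[int]:
    """Parse a comma-separated ID list like '123,456' from .env."""
    return {int(part) for part in raw.split(",") if part.strip()}


def is_authorized(user_id: int, chat_id: int,
                  allowed_users: set[int], allowed_chats: set[int]) -> bool:
    """Allow a message only when both allowlists (if set) contain the IDs."""
    if allowed_users and user_id not in allowed_users:
        return False
    if allowed_chats and chat_id not in allowed_chats:
        return False
    return True
```

This is also why the restart matters: the allowlists are read from the environment, so changes to `.env` only take effect when the process reloads them.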
Telegram runs as a separate transport process over the shared conversation boundary:
```bash
make server
make telegram-bot
```

`make dev-all` starts FastAPI and LangGraph dev tooling, but it does not start the Telegram bot.
For an always-on local setup on macOS, use the persistent service helpers instead:
```bash
make services-up
make services-test
```

That installs launchd jobs for the API server, Telegram bot, the scheduled morning summary push, the proactive window evaluator, and the weakness reminder evaluator. Remove them with:
```bash
make services-down
```

Current capabilities:

- Supports `/start` and `/whoami`
- Supports normal text chat with the shared health-data agent
- Reuses conversation context per Telegram chat
- Supports proactive pushes into the same shared Telegram conversation thread via `/api/v1/agent/telegram/push`
- Rejects non-private chats
- Uses Telegram-only HTML formatting for better rendering without changing Studio/API output
- Sends agent-generated image artifacts back to Telegram when available
- Voice messages: send a voice note and the bot transcribes it (Whisper), processes it through the agent, and replies with both a voice note (TTS) and text
- Photo messages: send a photo (with optional caption) and the bot interprets it using the vision-capable model in the context of your health data
Current limitations:
- The bot must be restarted after changing Telegram token or allowlist settings
- Voice replies use OpenAI TTS which has a ~2000 token input limit; very long responses fall back to text only
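The voice-or-text fallback decision can be sketched as a pure function. The names and the 4-characters-per-token heuristic are assumptions for illustration; only the ~2000 token limit and the fallback behavior come from the limitations above:

```python
TTS_INPUT_LIMIT_TOKENS = 2000  # approximate OpenAI TTS input limit noted above


def estimate_tokens(text: str) -> int:
    """Crude token estimate (~4 characters per token for English text)."""
    return max(1, len(text) // 4)


def choose_reply_mode(response_text: str) -> str:
    """Fall back to text-only when the agent's reply would exceed the
    TTS input limit; otherwise send a voice note plus text."""
    if estimate_tokens(response_text) > TTS_INPUT_LIMIT_TOKENS:
        return "text-only"
    return "voice+text"
```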
The Telegram adapter can silently ignore unauthorized users once the allowlists are set. Rotate any token that was ever pasted into chat, logs, or source control before relying on the bot.
You can send yourself a proactive Telegram message that goes through the shared conversation service:
```bash
uv run -m scripts.telegram_hello --prompt "set me up for the day"
```

Or route the same flow through the running API server:

```bash
uv run -m scripts.telegram_hello --api --prompt "set me up for the day"
```

You can send yourself a manual preview of the annual-review weakness reminder without consuming the once-per-workday scheduled send:

```bash
uv run python scripts/telegram_weakness_preview.py
```

Optionally preview a specific top-level bullet from `weakness.md`:

```bash
uv run python scripts/telegram_weakness_preview.py --point-number 2
```

If you prefer richer Telegram formatting (bold/italics/bullets), you can request HTML formatting for the preview:

```bash
uv run python scripts/telegram_weakness_preview.py --point-number 2 --format html
```

To enable HTML formatting for all proactive pushes by default, set `TELEGRAM_PROACTIVE_FORMAT=html` in `.env`.
To rename the coach in proactive prompts, set COACH_NAME in .env.
- Run the focused validation slices for the migration work before cutting over.
- Start the API with `make server` and confirm `/docs` shows the `data`, `insights`, and `agent` OpenAPI tags.
- Smoke the canonical public flows:
  - `GET /api/v1/data/recovery`
  - `GET /api/v1/insights/dashboard/daily`
  - `POST /api/v1/agent/conversations`
  - `POST /api/v1/agent/messages`
- Smoke representative compatibility adapters such as `/workouts/latest`, `/recovery/latest`, `/dashboard/daily`, and `/api/daily-plan`, and confirm the `Deprecation`, `Sunset`, and `X-Canonical-Route` headers advertise the canonical replacement.
- Launch `make chat`, send an initial message, then send a follow-up message and confirm the conversation resumes cleanly instead of starting a new thread.
- Keep `make langgraph-dev` scoped to development/debugging workflows rather than rollout verification of the public product surface.
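The compatibility-adapter header check can be automated with a small helper that inspects a response's headers. This is a sketch (the function is not project code), using only the header names the checklist mentions:

```python
def check_compat_headers(headers: dict, expected_canonical: str) -> list[str]:
    """Return a list of problems with a legacy-route response, based on
    the Deprecation, Sunset, and X-Canonical-Route headers that
    compatibility adapters are expected to send."""
    problems = []
    for name in ("Deprecation", "Sunset", "X-Canonical-Route"):
        if name not in headers:
            problems.append(f"missing {name} header")
    canonical = headers.get("X-Canonical-Route")
    if canonical is not None and canonical != expected_canonical:
        problems.append("X-Canonical-Route does not advertise the canonical replacement")
    return problems
```

An empty list means the legacy route is advertising its replacement correctly; anything else is a smoke-test failure worth investigating before cutover.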
- The root now prioritizes core product files and entrypoints.
- Supporting guides live under `docs/` to keep the submission easier to scan.
- Local runtime artifacts such as tokens, logs, caches, virtual environments, and local databases are gitignored and not part of the deliverable.
```text
Setup:
  make install           Install production dependencies
  make dev               Install with dev dependencies
  make sync              Sync/update dependencies

Run:
  make run               Convenience launcher (interactive ETL + server menu)
  make server            Primary FastAPI server command
  make etl               Primary ETL pipeline (incremental)
  make etl-full          Primary ETL pipeline (full load)
  make chat              Primary chat interface command
  make telegram-bot      Telegram bot adapter command
  make analytics         Primary analytics pipeline command
  make langgraph-dev     Development-only LangGraph dev server
  make dev-all           Convenience FastAPI + LangGraph dev launcher
  make proactive-now     Run the proactive window evaluator immediately
  make weakness-now      Run the weakness reminder evaluator immediately
  make weakness-preview  Send a manual weakness reminder preview to Telegram

Development:
  make test              Run tests with pytest
  make test-cov          Run tests with coverage report
  make format            Format code with black
  make lint              Lint with flake8
  make typecheck         Type check with mypy
  make verify            Run system verification

Maintenance:
  make clean             Clean cache files and build artifacts
  make clean-all         Clean everything including .venv
```
- WHOOP 401 errors -- Delete `.whoop_tokens.json` and re-authenticate
- Withings re-auth -- Run `uv run whoop-withings-auth`
- Telegram token rejected -- Re-copy the current token from @BotFather, make sure `.env` contains the full token with no quotes or truncation, then restart `make telegram-bot`
- `/whoami` or formatting errors in Telegram -- Restart the bot after pulling the latest code; Telegram formatting is handled adapter-side and should not affect Studio/API output
- Telegram access control not working -- Confirm `TELEGRAM_ALLOWED_USER_IDS` and `TELEGRAM_ALLOWED_CHAT_IDS` are set in `.env`, then restart the bot
- Looking for the right API? -- Use `/api/v1/data/*` for raw records, `/api/v1/insights/*` for interpreted outputs, and `/api/v1/agent/*` for conversational requests
- Need detailed implementation notes? -- Start in `docs/README.md`
Documentation is organized in docs/:
- `docs/technical/` -- API changes, migration notes, troubleshooting, implementation detail
- `docs/features/` -- Feature specs and product behavior
- `docs/guides/` -- Testing, plotting, and contribution workflow guides
The multiple linear regression module was inspired by idossha/whoop-insights.
MIT License. See LICENSE for details.
