Portable AI memory from your conversations and data exports.
Make your answers to the human feel truly personal — use the skill at SKILL.md.
```shell
pip install context-use
# or
uv tool install context-use
```

Start the proxy and point any OpenAI-compatible client at it. Every conversation is automatically turned into memories.
```shell
context-use proxy --upstream-url https://api.openai.com
```

With `--upstream-url`, the proxy always forwards requests to that upstream URL, so your client only needs to talk to the local proxy:
```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",
    api_key="<your-openai-key>",
)
client.chat.completions.create(model="gpt-4o", messages=[...])
```

If you omit `--upstream-url`, the proxy uses the request `Host` header instead.
> [!NOTE]
> Only `POST /v1/chat/completions` and `POST /v1/responses` requests are enriched with memories. All other paths are forwarded transparently, without modification.
Memories are generated in the background from each conversation and are used to automatically enrich future requests that flow through the proxy.
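Conceptually, enrichment amounts to retrieving relevant memories and prepending them to the outgoing request. A minimal sketch of the idea — the helper below is hypothetical and is not the library's actual internals:

```python
def enrich_messages(messages, memories):
    """Hypothetical sketch: prepend retrieved memories as a system message."""
    if not memories:
        return messages
    memory_block = "Relevant memories about the user:\n" + "\n".join(
        f"- {m}" for m in memories
    )
    # The original messages are left untouched; the proxy-style enrichment
    # just adds context the upstream model can draw on.
    return [{"role": "system", "content": memory_block}] + messages

enriched = enrich_messages(
    [{"role": "user", "content": "Plan my next trip."}],
    ["Prefers hiking holidays", "Visited the Dolomites in summer 2024"],
)
```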
If you already have your own ASGI server (FastAPI, Starlette, etc.), you can simply mount the app returned by `create_proxy_app`:
```python
from context_use import ContextUse, ContextProxy, create_proxy_app

ctx = ContextUse(storage=..., store=..., llm_client=...)
await ctx.init()

handler = ContextProxy(ctx)
asgi_app = create_proxy_app(handler)
```

Bulk-import memories from your data exports. Use this to bootstrap your memory store with historical data.
```shell
context-use pipeline --quick <your-zipped-data-export>
```

> [!IMPORTANT]
> You must have an export from one of the supported providers to use this command.

The quickstart mode uses the LLM provider's real-time API. It is fast for small slices of data but susceptible to rate limits on large exports; use the full pipeline to process a complete data export without running into rate limits.
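If you do run the quickstart against a larger export, the usual mitigation for rate limits is retrying with exponential backoff. The wrapper below is a generic, standalone sketch — it is not part of context-use:

```python
import random
import time

def with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry `call` on any exception, doubling the delay each attempt (plus jitter)."""
    for attempt in range(max_retries):
        try:
            return call()
        except Exception:  # in practice, catch the provider's rate-limit error
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
    return call()  # final attempt; let any exception propagate

# Usage: a callable that fails twice before succeeding
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("rate limited")
    return "ok"

result = with_backoff(flaky, base_delay=0.01)  # "ok" after two retried failures
```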
For full data exports and cost-efficient batch processing:

```shell
context-use pipeline
```

This ingests the export and generates memories via the LLM provider's batch API, which is significantly cheaper and more rate-limit-friendly than the real-time API used by quickstart. Typical runtime: 2–10 minutes. Memories are stored in SQLite and persist across sessions, enabling semantic search and the Personal agent.
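To illustrate what semantic search over stored memories does, here is a toy stand-in that ranks memories by bag-of-words cosine similarity. The memory strings and functions are hypothetical, and context-use itself stores memories in SQLite; this sketch only approximates semantic matching with word overlap:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query, memories, top_k=3):
    """Return the memories most similar to the query, best first."""
    q = Counter(query.lower().split())
    scored = [(cosine(q, Counter(m.lower().split())), m) for m in memories]
    return [m for score, m in sorted(scored, reverse=True) if score > 0][:top_k]

memories = [
    "Went hiking in the Dolomites in summer 2024",
    "Binge-watched a cooking show on Netflix",
    "Planned hiking trips around Lake Tahoe in 2024",
]
results = search("hiking trips in 2024", memories)
```

A real embedding-based search would also match memories that share meaning but no words; the interface, however, looks the same as the `memories search` command below.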
```shell
context-use memories list
context-use memories search "hiking trips in 2024"
context-use memories export
```

To ingest several archives without running the full pipeline each time, use `ingest` to parse them individually, then generate memories in one go:
```shell
context-use ingest chatgpt-export.zip
context-use ingest instagram-export.zip
context-use memories generate
```

A multi-turn agent that operates over your full memory store.
```shell
context-use agent synthesise   # generate higher-level pattern memories
context-use agent profile      # compile a first-person profile
context-use agent ask "What topics do I keep coming back to across all my conversations?"
```

```shell
context-use config --help
```

The configuration is saved in a config file at `<your-home-directory>/.config/context-use/config.toml`.
- Follow the export guide for your provider in the supported providers table. The export is delivered as a ZIP file; do not extract it.
- Move or copy the ZIP into `context-use-data/input/`:

```text
context-use-data/
└── input/
    └── your-data-export.zip   ← place it here
```
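The same step can be scripted. A small stdlib-only sketch, in which the download path is purely illustrative:

```python
import shutil
from pathlib import Path

# Create the expected input layout
input_dir = Path("context-use-data") / "input"
input_dir.mkdir(parents=True, exist_ok=True)

# Copy the still-zipped export into place (source path is illustrative)
export = Path.home() / "Downloads" / "your-data-export.zip"
if export.exists():
    shutil.copy2(export, input_dir / export.name)
```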
| Provider | Status | Data types | Export guide |
|---|---|---|---|
| ChatGPT | Available | Conversations | Export your data |
| Claude | Available | Conversations | Export your data |
| Instagram | Available | Media, Likes, DMs, Ads, Comments, Saved, Profile Searches, ... | Export your data |
| Google | Available | Searches, YouTube, Shopping, Lens, Discover | Export your data |
| Netflix | Available | Viewing Activity, Search History, Ratings, My List, Messages, Preferences | Export your data |
| Airbnb | Available | Wishlists, Search History, Reviews, Reservations, Messages | Export your data |
Want another provider? Contribute it by pointing your coding agent to the Adding a Data Provider guide.
