
🧠 MemoryOS

Your AI finally remembers you.

Local-first AI memory layer. 100% private. Zero cloud. Works with every LLM.




πŸ“– Table of Contents


😀 The Problem

You use ChatGPT, Claude, Cursor, and Gemini every single day.

But every new session starts from absolute zero.

The AI has no idea:

  • Who you are
  • What you're building
  • What decisions you made last week
  • That you hate Python 2 and love FastAPI
  • That you already tried that approach and it failed

This is the AI amnesia problem. MemoryOS fixes it.


✨ How It Works

╔═══════════════════════════════════════════════════════════╗
║                                                           ║
║  1. CAPTURE   →   Extension watches your AI chats         ║
║                   (ChatGPT, Claude, Gemini, Cursor)       ║
║                                                           ║
║  2. STORE     →   Local SQLite + ChromaDB                 ║
║                   100% on your machine, never uploaded    ║
║                                                           ║
║  3. RETRIEVE  →   Hybrid search: semantic + keyword       ║
║                   Inject context into any AI chat         ║
║                                                           ║
╚═══════════════════════════════════════════════════════════╝

🎬 Demo

# Install
pip install memoryos && memoryos start

# You chat with ChatGPT: "I'm building a SaaS with Next.js and FastAPI"
# Extension silently captures this.

# Next day, in a new Claude session:
$ memoryos ask "what am I building?"

πŸ“‹ Relevant context from your memories:

[chatgpt | 2026-03-14] Building a SaaS with Next.js and FastAPI.
Decided to use Zustand for state management.
Using Vercel for frontend deployment.

Paste that context into your next AI session. It now knows everything.


⚑ Quick Install

One-line install (Linux / macOS)

curl -fsSL https://raw.githubusercontent.com/SamoTech/memoryos/main/scripts/install.sh | bash

Via pip

pip install memoryos
memoryos start
# β†’ Opens dashboard at http://localhost:3000
# β†’ API running at http://localhost:8765

Via Docker

git clone https://github.com/SamoTech/memoryos
cd memoryos
docker-compose up -d
# Dashboard: http://localhost:3000
# API:       http://localhost:8765

Requirements

Requirement               Minimum
Python                    3.11+
Node.js (dashboard only)  18+
RAM                       512 MB
Disk                      500 MB (model included)

🌐 Browser Extension

Install

  1. Clone the repo
  2. Run bash scripts/build-extension.sh
  3. Open Chrome β†’ chrome://extensions/ β†’ Enable Developer mode
  4. Click Load unpacked β†’ select extension/dist/

Supported Sites (Auto-capture)

Site                        Selector Strategy              Status
ChatGPT (chat.openai.com)   data-message-author-role       ✅ Stable
Claude (claude.ai)          data-testid="human-turn"       ✅ Stable
Gemini (gemini.google.com)  .query-text + model-response   ✅ Stable
Any site                    Generic DOM heuristics         ⚙️ Opt-in

How capture works

  1. MutationObserver detects new AI messages
  2. Content hash deduplication prevents duplicates
  3. Background service worker batches + queues
  4. Bulk POST to localhost:8765/api/v1/memories/bulk every 2 seconds
  5. Green dot flashes on successful capture
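
Steps 2–4 above can be sketched in Python (a simplified model, not the extension's actual TypeScript source; the payload field names here are assumptions based on the API examples in this README):

```python
import hashlib


def content_hash(text: str) -> str:
    """Stable fingerprint used to drop repeated captures of the same message."""
    return hashlib.sha256(text.strip().encode("utf-8")).hexdigest()


class CaptureQueue:
    """Dedupe-and-batch queue mirroring steps 2-4 (sketch only).

    In the real extension, flush() runs every 2 seconds and POSTs the
    batch as JSON to localhost:8765/api/v1/memories/bulk.
    """

    def __init__(self) -> None:
        self.seen: set[str] = set()
        self.pending: list[dict] = []

    def capture(self, content: str, source: str) -> bool:
        """Queue a message unless an identical one was already captured."""
        h = content_hash(content)
        if h in self.seen:          # step 2: content-hash deduplication
            return False
        self.seen.add(h)
        self.pending.append({"content": content, "source": source})
        return True

    def flush(self) -> list[dict]:
        """Drain the queue; the returned batch is what would be POSTed."""
        batch, self.pending = self.pending, []
        return batch
```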

πŸ’» CLI Reference

# Server lifecycle
memoryos start                          # Start server + open dashboard
memoryos start --no-browser             # Headless start
memoryos stop                           # Stop server

# Memory operations
memoryos add "Decided to use Zustand"   # Add memory manually
memoryos add "Using FastAPI" -t "backend,python"  # With tags
memoryos forget <id>                    # Soft-delete a memory
memoryos pin <id>                       # Pin important memory

# Search & retrieval
memoryos search "react hooks"           # Semantic search
memoryos search "auth" --source claude  # Filter by source
memoryos search "deploy" -n 20          # More results
memoryos ask "what auth approach did I use?"  # Get AI-ready context

# Data management
memoryos stats                          # Show statistics
memoryos export --format markdown       # Export to Markdown
memoryos export --format json -o backup # Export to JSON file
memoryos export --format csv            # Export to CSV

πŸ”Œ API Reference

Full docs: docs/API.md | Interactive: http://localhost:8765/docs

GET    /health                           Server health + config

GET    /api/v1/memories                  List memories
POST   /api/v1/memories                  Add memory
POST   /api/v1/memories/bulk             Bulk add (used by extension)
GET    /api/v1/memories/{id}             Get memory
PUT    /api/v1/memories/{id}             Update memory
DELETE /api/v1/memories/{id}             Forget memory
POST   /api/v1/memories/{id}/pin         Toggle pin

GET    /api/v1/search?q=...              Hybrid semantic+keyword search
POST   /api/v1/search/similar            Find similar to text
GET    /api/v1/search/context?q=...      Get prompt-ready context

GET    /api/v1/sessions                  List sessions
POST   /api/v1/sessions                  Create session
GET    /api/v1/sessions/{id}             Session + memories
GET    /api/v1/sessions/{id}/summary     AI summary

GET    /api/v1/tags                      List tags
GET    /api/v1/tags/{name}/memories      Memories by tag

GET    /api/v1/stats                     Memory statistics
GET    /api/v1/export?format=...         Export (json/markdown/csv/obsidian)
POST   /api/v1/summarize                 Summarize text
POST   /api/v1/summarize/pending         Summarize all unsummarized

Quick API example

import requests

api = "http://localhost:8765"

# Add a memory
requests.post(f"{api}/api/v1/memories", json={
    "content": "Decided to use PostgreSQL instead of MySQL for the main DB",
    "source": "manual",
    "tags": ["database", "architecture"]
})

# Search
results = requests.get(f"{api}/api/v1/search", params={"q": "database choice"}).json()
for r in results:
    print(r["memory"]["content"], "β†’ score:", r["score"])

# Get context for your AI prompt
ctx = requests.get(f"{api}/api/v1/search/context", params={"q": "backend stack"}).json()
print(ctx["context"])

βš™οΈ Configuration

Edit ~/.memoryos/.env:

# ── Server ──────────────────────────────
HOST=127.0.0.1
PORT=8765
DASHBOARD_PORT=3000
DEBUG=false

# ── Storage ─────────────────────────────
DATA_DIR=~/.memoryos

# ── Embeddings ──────────────────────────
EMBEDDING_PROVIDER=local
EMBEDDING_MODEL=all-MiniLM-L6-v2

# ── Summarization ────────────────────────
SUMMARIZER_PROVIDER=ollama
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_MODEL=llama3

# GROQ_API_KEY=gsk_...
# OPENAI_API_KEY=sk-...

# ── Memory Behaviour ────────────────────
AUTO_SUMMARIZE=true
AUTO_SUMMARIZE_THRESHOLD=500
IMPORTANCE_SCORING=true
DATA_RETENTION_DAYS=0

Provider comparison

Provider        Cost       Speed    Quality  Privacy
Ollama (local)  Free       Medium   Good     ✅ 100% local
Groq            Free tier  Fast     Great    ☁️ API call
OpenAI          Paid       Fast     Best     ☁️ API call
No summarizer   Free       Instant  None     ✅ 100% local

✨ Features

πŸ”’ Privacy First

  • 100% local β€” data never leaves your machine by default
  • No telemetry, no analytics, no accounts, no sign-up
  • All data lives in ~/.memoryos/ β€” fully portable
  • Extension communicates only with localhost:8765
  • Open source β€” audit every line

πŸ” Hybrid Search Engine

  • Semantic search via ChromaDB + sentence-transformers (384-dim cosine)
  • Keyword search via SQLite FTS5 full-text index
  • Re-ranking: 0.7 Γ— semantic + 0.3 Γ— keyword Γ— importance Γ— recency Γ— pin_boost
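
Reading the formula above as "blend the two relevance scores, then scale by per-memory signals" (the grouping and the 1.5 pin boost are assumptions; only the 0.7/0.3 weights come from this README), the re-ranking can be sketched as:

```python
def rerank_score(semantic: float, keyword: float,
                 importance: float = 1.0, recency: float = 1.0,
                 pinned: bool = False, pin_boost: float = 1.5) -> float:
    """Blend semantic and keyword relevance, then scale by memory signals.

    semantic / keyword are assumed normalized to [0, 1]; importance and
    recency default to neutral multipliers of 1.0.
    """
    base = 0.7 * semantic + 0.3 * keyword
    return base * importance * recency * (pin_boost if pinned else 1.0)
```

A pinned memory with the same raw relevance thus always outranks an unpinned one, and stale or low-importance memories are pushed down without being filtered out entirely.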

πŸ€– Flexible AI Providers

  • Embeddings: Local all-MiniLM-L6-v2 (offline) or OpenAI
  • Summarization: Ollama (local) β†’ Groq β†’ OpenAI (auto-fallback)
  • Entity extraction: people, projects, tech, decisions, TODOs
  • Importance scoring: automatic signal detection
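
The Ollama → Groq → OpenAI auto-fallback can be sketched as follows (a hedged sketch: the provider callables are hypothetical stand-ins, not MemoryOS's real client classes):

```python
from typing import Callable


def summarize_with_fallback(
    text: str,
    providers: list[tuple[str, Callable[[str], str]]],
) -> tuple[str, str]:
    """Try each summarizer in configured order (e.g. Ollama → Groq → OpenAI)
    and return (provider_name, summary) from the first one that succeeds."""
    last_err: Exception | None = None
    for name, summarize in providers:
        try:
            return name, summarize(text)
        except Exception as err:    # e.g. local Ollama daemon not running
            last_err = err
    raise RuntimeError("all summarizer providers failed") from last_err
```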

🌐 Browser Extension (MV3)

  • Chrome, Edge, Brave β€” auto-captures ChatGPT, Claude, Gemini
  • Smart deduplication, batch queue, popup with live search

πŸ“Š Dashboard (Next.js 14)

  • Memory grid, semantic search, stats, sessions, export, dark theme

πŸ” Memory Lifecycle

  • Pin, forget, data retention, access tracking, background summarization

πŸ›  Tech Stack

Layer          Technology               Why
Backend        Python 3.11, FastAPI     Async, fast, auto-docs
ORM            SQLAlchemy 2 (async)     Type-safe, async-native
Database       SQLite + FTS5            Zero-config, portable
Vector DB      ChromaDB                 Local-first, persistent
Embeddings     sentence-transformers    Runs offline, 384-dim
Summarization  Ollama / Groq / OpenAI   Fallback chain
Frontend       Next.js 14, TypeScript   SSR + ISR
Styling        Tailwind CSS             Utility-first
State          TanStack React Query     Cache + revalidation
Animation      Framer Motion            Smooth UX
Extension      TypeScript, MV3          Modern, secure
CLI            Click                    Pythonic, composable
CI/CD          GitHub Actions           Lint + test + publish

πŸ— Architecture

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                        Your Machine                        β”‚
β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”  β”‚
β”‚  β”‚  Extension   │───▢│  FastAPI + SQLAlchemy (async)  β”‚  β”‚
β”‚  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β”‚  β”œβ”€β”€ SQLite + FTS5             β”‚  β”‚
β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”‚  β”œβ”€β”€ ChromaDB                  β”‚  β”‚
β”‚  β”‚  Dashboard   │───▢│  β”œβ”€β”€ sentence-transformers    β”‚  β”‚
β”‚  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β”‚  └── Ollama / Groq / OpenAI   β”‚  β”‚
β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜  β”‚
β”‚  β”‚  CLI         │───▢  ~/.memoryos/ (SQLite + Chroma + models) β”‚
β”‚  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜                                            β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Full architecture docs: docs/ARCHITECTURE.md


🐳 Docker

docker-compose up -d
# Dashboard: http://localhost:3000
# API:       http://localhost:8765

πŸ—Ί Roadmap

v1.1

  • Firefox extension
  • Cursor IDE integration
  • Import from ChatGPT data export
  • Memory merge & deduplication

v1.2

  • Obsidian vault sync
  • Memory graph visualization
  • SQLCipher at-rest encryption

v2.0

  • Mobile companion app
  • Multi-user / team memory sharing
  • MCP server (Model Context Protocol)
  • VS Code extension

🀝 Contributing

git clone https://github.com/YOUR_USERNAME/memoryos
cd memoryos/backend
python -m venv .venv && source .venv/bin/activate
pip install -e '.[dev]'
pytest tests/ -v

Please read CONTRIBUTING.md before opening a PR.


πŸ” Security

  • API binds to 127.0.0.1 by default
  • CORS restricted to localhost and chrome-extension://
  • Do not expose port 8765 to the internet

Found a vulnerability? Please report it privately via a GitHub Security Advisory rather than opening a public issue.


πŸ“œ License

MIT License β€” see LICENSE for full text.

Copyright Β© 2026 Ossama Hashim


πŸ’– Support

If MemoryOS saves you time:

Sponsor on GitHub


"The palest ink is better than the best memory."

⭐ Star this repo if MemoryOS helps you!
