diff --git a/README_TW.md b/README_TW.md
new file mode 100644
index 00000000..8ae4e7df
--- /dev/null
+++ b/README_TW.md
@@ -0,0 +1,344 @@
+# Mini Agent
+
+English | [中文](./README_CN.md)
+
+**Mini Agent** is a compact yet professional demo project that showcases best practices for building agents with the MiniMax M2.5 model. It leverages an Anthropic-compatible API with full support for interleaved thinking, unlocking M2's powerful reasoning capabilities for long and complex tasks.
+
+The project ships with features designed for a robust and intelligent agent development experience:
+
+* ✅ **Complete agent execution loop**: A full, reliable foundation with an essential toolset for file-system and shell operations.
+* ✅ **Persistent memory**: A proactive **session note tool** ensures the agent retains key information across multiple sessions.
+* ✅ **Intelligent context management**: Automatically summarizes conversation history to keep the context within a configurable token limit, enabling arbitrarily long tasks.
+* ✅ **Claude Skills integration**: Includes 15 professional skills for documentation, design, testing, and development.
+* ✅ **MCP tool integration**: Native MCP support for tools such as knowledge-graph access and web search.
+* ✅ **Comprehensive logging**: Detailed logs of every request, response, and tool execution for easy debugging.
+* ✅ **Clean, elegant design**: A beautiful CLI and an easy-to-understand codebase, an ideal starting point for building advanced agents.
+
+## Table of Contents
+
+- [Mini Agent](#mini-agent)
+  - [Table of Contents](#table-of-contents)
+  - [Quick Start](#quick-start)
+    - [1. Get an API Key](#1-get-an-api-key)
+    - [2. Choose Your Usage Mode](#2-choose-your-usage-mode)
+      - [🚀 Quick Start Mode (Recommended for Beginners)](#-quick-start-mode-recommended-for-beginners)
+      - [🔧 Development Mode](#-development-mode)
+  - [ACP Integration with the Zed Editor (Optional)](#acp-integration-with-the-zed-editor-optional)
+  - [Usage Examples](#usage-examples)
+    - [Task Execution](#task-execution)
+    - [Using Claude Skills (e.g. PDF Generation)](#using-claude-skills-eg-pdf-generation)
+    - [Web Search and Summarization (MCP Tools)](#web-search-and-summarization-mcp-tools)
+  - [Testing](#testing)
+    - [Quick Run](#quick-run)
+    - [Test Coverage](#test-coverage)
+  - [Troubleshooting](#troubleshooting)
+    - [SSL Certificate Errors](#ssl-certificate-errors)
+    - [Module Not Found Errors](#module-not-found-errors)
+  - [Related Documentation](#related-documentation)
+  - [Community](#community)
+  - [Contributing](#contributing)
+  - [License](#license)
+  - [References](#references)
+
+## Quick Start
+
+### 1. Get an API Key
+
+MiniMax offers two platforms, Global and China. Choose based on your network environment:
+
+| Version | Platform | API Base |
+| -------- | ------------------------------------------------------------ | ---------------------------- |
+| **Global** | [https://platform.minimax.io](https://platform.minimax.io) | `https://api.minimax.io` |
+| **China** | [https://platform.minimaxi.com](https://platform.minimaxi.com) | `https://api.minimaxi.com` |
+
+**Steps to get an API key:**
+1. Visit the corresponding platform to register and sign in
+2. Go to **Account Management > API Keys**
+3. Click **"Create New Key"**
+4. Copy it and store it securely (the key is shown only once)
+
+> 💡 **Tip**: Note the API Base address of the platform you chose; you will need it during configuration
+
+### 2. Choose Your Usage Mode
+
+**Prerequisite: Install uv**
+
+Both usage modes require uv. If it is not installed yet:
+
+```bash
+# macOS/Linux/WSL
+curl -LsSf https://astral.sh/uv/install.sh | sh
+
+# Windows (PowerShell)
+irm https://astral.sh/uv/install.ps1 | iex
+# Restart PowerShell after installation
+
+# After installation, restart your terminal or run:
+source ~/.bashrc  # or ~/.zshrc (macOS/Linux)
+```
+
+We offer two usage modes; pick the one that fits your needs:
+
+#### 🚀 Quick Start Mode (Recommended for Beginners)
+
+Ideal for users who want to try Mini Agent quickly without cloning the repository or modifying code.
+
+**Installation:**
+
+```bash
+# 1. Install directly from GitHub
+uv tool install git+https://github.com/MiniMax-AI/Mini-Agent.git
+
+# 2. Run the setup script (creates the config files automatically)
+# macOS/Linux:
+curl -fsSL https://raw.githubusercontent.com/MiniMax-AI/Mini-Agent/main/scripts/setup-config.sh | bash
+
+# Windows (PowerShell):
+Invoke-WebRequest -Uri "https://raw.githubusercontent.com/MiniMax-AI/Mini-Agent/main/scripts/setup-config.ps1" -OutFile "$env:TEMP\setup-config.ps1"
+powershell -ExecutionPolicy Bypass -File "$env:TEMP\setup-config.ps1"
+```
+
+> 💡 **Tip**: If you want to develop locally or modify the code, use "Development Mode" below
+
+**Configuration:**
+
+The setup script creates config files under `~/.mini-agent/config/`. Edit the config file:
+
+```bash
+# Edit the config file
+nano ~/.mini-agent/config/config.yaml
+```
+
+Fill in your API key and the matching API Base:
+
+```yaml
+api_key: "YOUR_API_KEY_HERE"            # API key from Step 1
+api_base: "https://api.minimax.io"      # Global
+# api_base: "https://api.minimaxi.com"  # China
+model: "MiniMax-M2.5"
+```
+
+**Getting started:**
+
+```bash
+mini-agent                                    # Use the current directory as the workspace
+mini-agent --workspace /path/to/your/project  # Specify a workspace directory
+mini-agent --version                          # Check the version
+
+# Management commands
+uv tool upgrade mini-agent    # Upgrade to the latest version
+uv tool uninstall mini-agent  # Uninstall if needed
+uv tool list                  # List all installed tools
+```
+
+#### 🔧 Development Mode
+
+For developers who need to modify the code, add features, or debug.
+
+**Installation and setup:**
+
+```bash
+# 1. Clone the repository
+git clone https://github.com/MiniMax-AI/Mini-Agent.git
+cd Mini-Agent
+
+# 2. Install uv (if you don't have it yet)
+# macOS/Linux:
+curl -LsSf https://astral.sh/uv/install.sh | sh
+# Windows (PowerShell):
+irm https://astral.sh/uv/install.ps1 | iex
+# Restart your terminal after installation
+
+# 3. Sync dependencies
+uv sync
+
+# Alternative: install dependencies manually (if not using uv)
+# pip install -r requirements.txt
+# or install the required packages:
+# pip install tiktoken pyyaml httpx pydantic requests prompt-toolkit mcp
+
+# 4. Initialize Claude Skills (optional)
+git submodule update --init --recursive
+
+# 5. Copy the config template
+```
+
+**macOS/Linux:**
+```bash
+cp mini_agent/config/config-example.yaml mini_agent/config/config.yaml
+```
+
+**Windows:**
+```powershell
+Copy-Item mini_agent\config\config-example.yaml mini_agent\config\config.yaml
+```
+
+```bash
+# 6. Edit the config file
+vim mini_agent/config/config.yaml  # or use your preferred editor
+```
+
+Fill in your API key and the matching API Base:
+
+```yaml
+api_key: "YOUR_API_KEY_HERE"            # API key from Step 1
+api_base: "https://api.minimax.io"      # Global
+# api_base: "https://api.minimaxi.com"  # China
+model: "MiniMax-M2.5"
+max_steps: 100
+workspace_dir: "./workspace"
+```
+
+> 📖 Full configuration guide: see [config-example.yaml](mini_agent/config/config-example.yaml)
+
+**How to run:**
+
+Choose your preferred way to run it:
+
+```bash
+# Option 1: run directly as a module (good for debugging)
+uv run python -m mini_agent.cli
+
+# Option 2: install in editable mode (recommended)
+uv tool install -e .
+# After installation you can run from anywhere, and code changes take effect immediately
+mini-agent
+mini-agent --workspace /path/to/your/project
+```
+
+> 📖 For more development guidance, see the [Development Guide](docs/DEVELOPMENT_GUIDE.md)
+
+> 📖 For production deployment guidance, see the [Production Guide](docs/PRODUCTION_GUIDE.md)
+
+## ACP Integration with the Zed Editor (Optional)
+
+Mini Agent supports the [Agent Communication Protocol (ACP)](https://github.com/modelcontextprotocol/protocol) for integration with code editors such as Zed.
+
+**Setup in the Zed editor:**
+
+1. Install Mini Agent in development mode or as a tool
+2. Add the following to your Zed `settings.json`:
+
+```json
+{
+  "agent_servers": {
+    "mini-agent": {
+      "command": "/path/to/mini-agent-acp"
+    }
+  }
+}
+```
+
+The command path should be:
+- If installed via `uv tool install`: use the output of `which mini-agent-acp`
+- If in development mode: `./mini_agent/acp/server.py`
+
+**Usage:**
+- Open Zed's agent panel with `Ctrl+Shift+P` → "Agent: Toggle Panel"
+- Select "mini-agent" from the agent dropdown
+- Start chatting with Mini Agent directly in the editor
+
+## Usage Examples
+
+Here are a few examples of what Mini Agent can do.
+
+### Task Execution
+
+*In this demo, the agent is asked to create a simple, good-looking web page and present it in the browser, demonstrating the basic tool-use loop.*
+
+
+
+### Using Claude Skills (e.g. PDF Generation)
+
+*Here, the agent uses Claude Skills to create professional documents (such as PDF or DOCX) from a user request, showcasing its advanced capabilities.*
+
+
+
+### Web Search and Summarization (MCP Tools)
+
+*This demo shows the agent using its web search tool to find the latest information online and summarize it for you.*
+
+## Testing
+
+The project includes comprehensive test cases covering unit, functional, and integration tests.
+
+### Quick Run
+
+```bash
+# Run all tests
+pytest tests/ -v
+
+# Run the core functionality tests
+pytest tests/test_agent.py tests/test_note_tool.py -v
+```
+
+### Test Coverage
+
+- ✅ **Unit tests** - Tool classes, LLM client
+- ✅ **Functional tests** - Session note tool, MCP loading
+- ✅ **Integration tests** - End-to-end agent execution
+- ✅ **External services** - Git MCP server loading
+
+## Troubleshooting
+
+### SSL Certificate Errors
+
+If you hit a `[SSL: CERTIFICATE_VERIFY_FAILED]` error:
+
+**Quick fix for testing** (edit `mini_agent/llm.py`):
+```python
+# Line 50: add verify=False to the AsyncClient
+async with httpx.AsyncClient(timeout=120.0, verify=False) as client:
+```
+
+**Production solution:**
+```bash
+# Update certificates
+pip install --upgrade certifi
+
+# Or configure your system proxy/certificates
+```
+
+### Module Not Found Errors
+
+Make sure you run from the project directory:
+```bash
+cd Mini-Agent
+python -m mini_agent.cli
+```
+
+## Related Documentation
+
+- [Development Guide](docs/DEVELOPMENT_GUIDE.md) - Detailed development and setup guide
+- [Production Guide](docs/PRODUCTION_GUIDE.md) - Best practices for production deployment
+
+## Community
+
+Join the official MiniMax community for help, idea sharing, and updates:
+
+- **WeChat group**: Scan the QR code on the [Contact Us](https://platform.minimaxi.com/docs/faq/contact-us) page to join
+
+## Contributing
+
+Issues and pull requests are welcome!
+
+- [Contributing Guide](CONTRIBUTING.md) - How to contribute
+- [Code of Conduct](CODE_OF_CONDUCT.md) - Community standards
+
+## License
+
+This project is licensed under the [MIT License](LICENSE).
+
+## References
+
+- MiniMax API: https://platform.minimax.io/docs
+- MiniMax-M2: https://github.com/MiniMax-AI/MiniMax-M2
+- Anthropic API: https://docs.anthropic.com/claude/reference
+- Claude Skills: https://github.com/anthropics/skills
+- MCP Servers: https://github.com/modelcontextprotocol/servers
+
+---
+
+**⭐ If this project helps you, please give it a Star!**
\ No newline at end of file
diff --git a/examples/01_basic_tools.py b/examples/01_basic_tools.py
index 6c611174..01015676 100644
--- a/examples/01_basic_tools.py
+++ b/examples/01_basic_tools.py
@@ -8,6 +8,7 @@
Based on: tests/test_tools.py
"""
+# pylint: disable=invalid-name
import asyncio
import tempfile
@@ -52,7 +53,7 @@ async def demo_read_tool():
result = await tool.execute(path=temp_path)
if result.success:
- print(f"✅ File read successfully")
+ print("✅ File read successfully")
print(f"Content:\n{result.content}")
else:
print(f"❌ Failed: {result.error}")
@@ -71,7 +72,7 @@ async def demo_edit_tool():
temp_path = f.name
try:
- print(f"Original content:\n{Path(temp_path).read_text()}\n")
+ print(f"Original content:\n{Path(temp_path).read_text(encoding='utf-8')}\n")
tool = EditTool()
result = await tool.execute(
@@ -79,8 +80,8 @@ async def demo_edit_tool():
)
if result.success:
- print(f"✅ File edited successfully")
- print(f"New content:\n{Path(temp_path).read_text()}")
+ print("✅ File edited successfully")
+ print(f"New content:\n{Path(temp_path).read_text(encoding='utf-8')}")
else:
print(f"❌ Failed: {result.error}")
finally:
@@ -99,7 +100,7 @@ async def demo_bash_tool():
print("\nCommand: ls -la")
result = await tool.execute(command="ls -la")
if result.success:
- print(f"✅ Command executed successfully")
+ print("✅ Command executed successfully")
print(f"Output:\n{result.content[:200]}...")
# Example 2: Get current directory
diff --git a/examples/02_simple_agent.py b/examples/02_simple_agent.py
index 5b7bb95f..2e5a6c10 100644
--- a/examples/02_simple_agent.py
+++ b/examples/02_simple_agent.py
@@ -5,8 +5,10 @@
Based on: tests/test_agent.py
"""
+# pylint: disable=invalid-name
import asyncio
+import traceback
import tempfile
from pathlib import Path
@@ -105,10 +107,8 @@ async def demo_file_creation():
else:
print("⚠️ File was not created (but agent may have completed differently)")
- except Exception as e:
+ except Exception as e: # pylint: disable=broad-exception-caught
print(f"❌ Error: {e}")
- import traceback
-
traceback.print_exc()
@@ -186,7 +186,7 @@ async def demo_bash_task():
print("=" * 60)
print(f"\nAgent's response:\n{result}\n")
- except Exception as e:
+ except Exception as e: # pylint: disable=broad-exception-caught
print(f"❌ Error: {e}")
diff --git a/examples/03_session_notes.py b/examples/03_session_notes.py
index d8850200..cb979271 100644
--- a/examples/03_session_notes.py
+++ b/examples/03_session_notes.py
@@ -5,6 +5,7 @@
Based on: tests/test_note_tool.py, tests/test_integration.py
"""
+# pylint: disable=invalid-name
import asyncio
import json
@@ -66,7 +67,7 @@ async def demo_direct_note_usage():
# Show the memory file
print("\n📄 Memory file content:")
print("=" * 60)
- notes = json.loads(Path(note_file).read_text())
+ notes = json.loads(Path(note_file).read_text(encoding='utf-8'))
print(json.dumps(notes, indent=2, ensure_ascii=False))
print("=" * 60)
@@ -183,7 +184,7 @@ async def demo_agent_with_notes():
else:
print("\n⚠️ No notes found")
- except Exception as e:
+ except Exception as e: # pylint: disable=broad-exception-caught
print(f"❌ Error: {e}")
return
@@ -224,7 +225,7 @@ async def demo_agent_with_notes():
print(" 2. Agent in Session 2 recalled previous notes")
print(" 3. Memory persists across agent instances via file")
- except Exception as e:
+ except Exception as e: # pylint: disable=broad-exception-caught
print(f"❌ Error: {e}")
diff --git a/examples/04_full_agent.py b/examples/04_full_agent.py
index 44aacec6..66652002 100644
--- a/examples/04_full_agent.py
+++ b/examples/04_full_agent.py
@@ -8,8 +8,11 @@
Based on: tests/test_integration.py
"""
+# pylint: disable=invalid-name
import asyncio
+import json
+import traceback
import tempfile
from pathlib import Path
@@ -96,7 +99,7 @@ async def demo_full_agent():
print(f"✓ Loaded {len(mcp_tools)} MCP tools")
else:
print("⚠️ No MCP tools configured (mcp.json is empty or disabled)")
- except Exception as e:
+ except Exception as e: # pylint: disable=broad-exception-caught
print(f"⚠️ MCP tools not loaded: {e}")
# Create agent
@@ -169,17 +172,13 @@ async def demo_full_agent():
# Show memory
if memory_file.exists():
- import json
-
- notes = json.loads(memory_file.read_text())
+ notes = json.loads(memory_file.read_text(encoding='utf-8'))
print(f"\n💾 Session notes recorded: {len(notes)}")
for note in notes:
print(f" - [{note['category']}] {note['content'][:60]}...")
- except Exception as e:
+ except Exception as e: # pylint: disable=broad-exception-caught
print(f"❌ Error during agent execution: {e}")
- import traceback
-
traceback.print_exc()
@@ -244,7 +243,7 @@ async def demo_interactive_mode():
try:
result = await agent.run()
print(f"Agent: {result}\n")
- except Exception as e:
+ except Exception as e: # pylint: disable=broad-exception-caught
print(f"Error: {e}")
break
diff --git a/examples/05_provider_selection.py b/examples/05_provider_selection.py
index 1424ced7..121bcd1b 100644
--- a/examples/05_provider_selection.py
+++ b/examples/05_provider_selection.py
@@ -3,8 +3,10 @@
This example demonstrates how to use the LLMClient wrapper with different
LLM providers (Anthropic or OpenAI) through the provider parameter.
"""
+# pylint: disable=invalid-name
import asyncio
+import traceback
from pathlib import Path
import yaml
@@ -43,7 +45,7 @@ async def demo_anthropic_provider():
print(f"💭 Thinking: {response.thinking}")
print(f"💬 Model: {response.content}")
print("✅ Anthropic provider demo completed")
- except Exception as e:
+ except Exception as e: # pylint: disable=broad-exception-caught
print(f"❌ Error: {e}")
@@ -78,7 +80,7 @@ async def demo_openai_provider():
print(f"💭 Thinking: {response.thinking}")
print(f"💬 Model: {response.content}")
print("✅ OpenAI provider demo completed")
- except Exception as e:
+ except Exception as e: # pylint: disable=broad-exception-caught
print(f"❌ Error: {e}")
@@ -110,7 +112,7 @@ async def demo_default_provider():
response = await client.generate(messages)
print(f"💬 Model: {response.content}")
print("✅ Default provider demo completed")
- except Exception as e:
+ except Exception as e: # pylint: disable=broad-exception-caught
print(f"❌ Error: {e}")
@@ -152,7 +154,7 @@ async def demo_provider_comparison():
print(f"🟢 OpenAI: {openai_response.content}")
print("\n✅ Provider comparison completed")
- except Exception as e:
+ except Exception as e: # pylint: disable=broad-exception-caught
print(f"❌ Error: {e}")
@@ -177,10 +179,8 @@ async def main():
print("\n✅ All demos completed successfully!")
- except Exception as e:
+ except Exception as e: # pylint: disable=broad-exception-caught
print(f"\n❌ Error: {e}")
- import traceback
-
traceback.print_exc()
diff --git a/examples/06_tool_schema_demo.py b/examples/06_tool_schema_demo.py
index cf6135aa..20a37703 100644
--- a/examples/06_tool_schema_demo.py
+++ b/examples/06_tool_schema_demo.py
@@ -2,8 +2,10 @@
This example demonstrates how to use the Tool base class and its schema methods.
"""
+# pylint: disable=invalid-name
import asyncio
+import traceback
from pathlib import Path
from typing import Any
@@ -274,10 +276,8 @@ async def main():
print("\n✅ All demos completed successfully!")
- except Exception as e:
+ except Exception as e: # pylint: disable=broad-exception-caught
print(f"\n❌ Error: {e}")
- import traceback
-
traceback.print_exc()
diff --git a/examples/README_TW.md b/examples/README_TW.md
new file mode 100644
index 00000000..2dc8bd1b
--- /dev/null
+++ b/examples/README_TW.md
@@ -0,0 +1,219 @@
+# Mini Agent Examples
+
+This directory contains a progressive series of examples to help you learn the Mini Agent framework.
+
+## 📚 Example List
+
+### 01_basic_tools.py - Basic Tool Usage
+
+**Difficulty**: ⭐ Beginner
+
+**Covers**:
+- How to use ReadTool, WriteTool, EditTool, and BashTool directly
+- Pure tool invocation demos, with no Agent or LLM involved
+- A great way to learn each tool's basic functionality
+
+**Run**:
+```bash
+python examples/01_basic_tools.py
+```
+
+**Key takeaways**:
+- Tool input parameter format
+- The ToolResult return structure
+- Error handling approaches
+---
+
+### 02_simple_agent.py - Simple Agent Usage
+
+**Difficulty**: ⭐⭐ Beginner to Intermediate
+
+**Covers**:
+- Creating the simplest possible Agent
+- Having the Agent perform a file-creation task
+- Having the Agent run a bash command task
+- Understanding the Agent execution flow
+
+**Run**:
+```bash
+# Requires an API key to be configured first
+python examples/02_simple_agent.py
+```
+
+**Key takeaways**:
+- Agent initialization flow
+- How to assign tasks to the Agent
+- How the Agent chooses tools autonomously
+- Task completion criteria
+
+**Prerequisites**:
+- API key configured in `mini_agent/config/config.yaml`
+---
+
+### 03_session_notes.py - Session Note Tool
+
+**Difficulty**: ⭐⭐⭐ Intermediate
+
+**Covers**:
+- Using the session note tool directly (record_note, recall_notes)
+- An Agent using session notes to maintain memory across sessions
+- How two Agent instances share memory
+
+**Run**:
+```bash
+python examples/03_session_notes.py
+```
+
+**Key takeaways**:
+- How session notes work
+- Note category management (category)
+- How to guide the Agent to use notes via the system prompt
+- Implementing cross-session memory
+
+**Highlight**:
+This is one of the core features of the project! It demonstrates a lightweight yet effective solution for session memory management.
+
+---
+
+### 04_full_agent.py - Fully Featured Agent
+
+**Difficulty**: ⭐⭐⭐⭐ Advanced
+
+**Covers**:
+- Fully configuring an Agent with all features enabled
+- Combining basic tools + session notes + MCP tools
+- The complete execution flow for complex tasks
+- A multi-turn conversation example
+
+**Run**:
+```bash
+python examples/04_full_agent.py
+```
+
+**Key takeaways**:
+- How to combine multiple tools
+- Loading and using MCP tools
+- Decomposing and executing complex tasks
+- Production-grade Agent configuration
+
+**Prerequisites**:
+- API key configured
+- (Optional) MCP tools configured
+
+## 🚀 Quick Start
+
+### 1. Configure an API Key
+
+```bash
+# Copy the config template
+cp mini_agent/config/config-example.yaml mini_agent/config/config.yaml
+
+# Edit the config file and fill in your MiniMax API key
+vim mini_agent/config/config.yaml
+```
+
+### 2. Run Your First Example
+
+```bash
+# Example that does not require an API key
+python examples/01_basic_tools.py
+
+# Example that requires an API key
+python examples/02_simple_agent.py
+```
+
+### 3. Learn Step by Step
+
+We recommend working through the examples in order:
+1. **01_basic_tools.py** - Understand the tools
+2. **02_simple_agent.py** - Understand the Agent
+3. **03_session_notes.py** - Understand memory management
+4. **04_full_agent.py** - Understand the full system
+
+---
+
+## 📖 Relationship to the Test Cases
+
+These examples are distilled from the test cases in the `tests/` directory:
+
+| Example | Based on Tests | Description |
+| ------------------- | ---------------------------------------------------- | ------------------------------- |
+| 01_basic_tools.py | tests/test_tools.py | Basic tool unit tests |
+| 02_simple_agent.py | tests/test_agent.py | Agent basic functionality tests |
+| 03_session_notes.py | tests/test_note_tool.py tests/test_integration.py | Session note tool tests |
+| 04_full_agent.py | tests/test_integration.py | Full integration tests |
+---
+
+## 💡 Suggested Learning Paths
+
+### Path 1: Quick Start
+1. Run `01_basic_tools.py` - understand the tools
+2. Run `02_simple_agent.py` - run your first Agent
+3. Use `mini-agent` to jump straight into interactive mode
+
+### Path 2: Deep Dive
+1. Read and run all the examples (01 → 04)
+2. Read the corresponding test cases (`tests/`)
+3. Read the core implementation code (`mini_agent/`)
+4. Try modifying the examples to build your own features
+
+### Path 3: Production Use
+1. Work through all the examples
+2. Read the [Production Deployment Guide](../docs/PRODUCTION_GUIDE.md)
+3. Configure MCP tools and Skills
+4. Extend the toolset to fit your needs
+
+---
+
+## 🔧 Troubleshooting
+
+### API Key Error
+```
+❌ API key not configured in config.yaml
+```
+**Solution**: Make sure a valid MiniMax API key is set in `mini_agent/config/config.yaml`
+
+### config.yaml Not Found
+```
+❌ config.yaml not found
+```
+**Solution**:
+```bash
+cp mini_agent/config/config-example.yaml mini_agent/config/config.yaml
+```
+
+### MCP Tool Loading Failure
+```
+⚠️ MCP tools not loaded: [error message]
+```
+**Solution**: MCP tools are optional and do not affect core functionality. If you need them, see the MCP configuration section in the main README.
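+
+As a minimal sketch of what an MCP setup can look like: the repository ships `mini_agent/config/mcp-example.json` with many preconfigured (but disabled) servers. Enabling one means copying its entry into your active MCP config (the exact file location depends on your install mode; the path below is an assumption based on the quick-start layout) and setting `disabled` to `false`. For example, the `memory` server from the template:
+
+```json
+{
+  "mcpServers": {
+    "memory": {
+      "description": "Memory - knowledge-graph long-term memory system",
+      "command": "npx",
+      "args": ["-y", "@modelcontextprotocol/server-memory"],
+      "disabled": false
+    }
+  }
+}
+```
+
+Servers that need credentials (e.g. `brave-search`) also require filling in the corresponding key under `env` before enabling them.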
+
+---
+
+## 📚 More Resources
+
+- [Main Project README](../README.md) - Full project documentation
+- [Test Cases](../tests/) - More usage examples
+- [Core Implementation](../mini_agent/) - Source code
+- [Production Guide](../docs/PRODUCTION_GUIDE.md) - Deployment guide
+
+---
+
+## 🤝 Contributing Examples
+
+If you have good usage examples, pull requests are welcome!
+
+Suggested directions for new examples:
+- Web search integration examples (using the MiniMax Search MCP)
+- Skills usage examples (document processing, design, etc.)
+- Custom tool development examples
+- Error handling and retry mechanism examples
+
+---
+
+**⭐ If these examples help you, please give the project a Star!**
\ No newline at end of file
diff --git a/mini_agent/agent.py b/mini_agent/agent.py
index b7d7feab..ceef08b7 100644
--- a/mini_agent/agent.py
+++ b/mini_agent/agent.py
@@ -2,6 +2,7 @@
import asyncio
import json
+import traceback
from pathlib import Path
from time import perf_counter
from typing import Optional
@@ -10,6 +11,7 @@
from .llm import LLMClient
from .logger import AgentLogger
+from .retry import RetryExhaustedError
from .schema import Message
from .tools.base import Tool, ToolResult
from .utils import calculate_display_width
@@ -128,7 +130,7 @@ def _estimate_tokens(self) -> int:
try:
# Use cl100k_base encoder (used by GPT-4 and most modern models)
encoding = tiktoken.get_encoding("cl100k_base")
- except Exception:
+ except Exception: # pylint: disable=broad-exception-caught
# Fallback: if tiktoken initialization fails, use simple estimation
return self._estimate_tokens_fallback()
@@ -313,7 +315,7 @@ async def _create_summary(self, messages: list[Message], round_num: int) -> str:
print(f"{Colors.BRIGHT_GREEN}✓ Summary for round {round_num} generated successfully{Colors.RESET}")
return summary_text
- except Exception as e:
+ except Exception as e: # pylint: disable=broad-exception-caught
print(f"{Colors.BRIGHT_RED}✗ Summary generation failed for round {round_num}: {e}{Colors.RESET}")
# Use simple text summary on failure
return summary_content
@@ -353,7 +355,7 @@ async def run(self, cancel_event: Optional[asyncio.Event] = None) -> str:
await self._summarize_messages()
# Step header with proper width calculation
- BOX_WIDTH = 58
+ BOX_WIDTH = 58 # pylint: disable=invalid-name
step_text = f"{Colors.BOLD}{Colors.BRIGHT_CYAN}💭 Step {step + 1}/{self.max_steps}{Colors.RESET}"
step_display_width = calculate_display_width(step_text)
padding = max(0, BOX_WIDTH - 1 - step_display_width) # -1 for leading space
@@ -370,10 +372,8 @@ async def run(self, cancel_event: Optional[asyncio.Event] = None) -> str:
try:
response = await self.llm.generate(messages=self.messages, tools=tool_list)
- except Exception as e:
+ except Exception as e: # pylint: disable=broad-exception-caught
# Check if it's a retry exhausted error
- from .retry import RetryExhaustedError
-
if isinstance(e, RetryExhaustedError):
error_msg = f"LLM call failed after {e.attempts} retries\nLast error: {str(e.last_exception)}"
print(f"\n{Colors.BRIGHT_RED}❌ Retry failed:{Colors.RESET} {error_msg}")
@@ -461,10 +461,8 @@ async def run(self, cancel_event: Optional[asyncio.Event] = None) -> str:
try:
tool = self.tools[function_name]
result = await tool.execute(**arguments)
- except Exception as e:
+ except Exception as e: # pylint: disable=broad-exception-caught
# Catch all exceptions during tool execution, convert to failed ToolResult
- import traceback
-
error_detail = f"{type(e).__name__}: {str(e)}"
error_trace = traceback.format_exc()
result = ToolResult(
diff --git a/mini_agent/cli.py b/mini_agent/cli.py
index f060c9c2..c0497731 100644
--- a/mini_agent/cli.py
+++ b/mini_agent/cli.py
@@ -30,6 +30,7 @@
from mini_agent import LLMClient
from mini_agent.agent import Agent
from mini_agent.config import Config
+from mini_agent.retry import RetryConfig as RetryConfigBase
from mini_agent.schema import LLMProvider
from mini_agent.tools.base import Tool
from mini_agent.tools.bash_tool import BashKillTool, BashOutputTool, BashTool
@@ -138,7 +139,7 @@ def _open_directory_in_file_manager(directory: Path) -> None:
subprocess.run(["xdg-open", str(directory)], check=False)
except FileNotFoundError:
print(f"{Colors.YELLOW}Could not open file manager. Please navigate manually.{Colors.RESET}")
- except Exception as e:
+ except Exception as e: # pylint: disable=broad-exception-caught
print(f"{Colors.YELLOW}Error opening file manager: {e}{Colors.RESET}")
@@ -164,13 +165,13 @@ def read_log_file(filename: str) -> None:
print(content)
print(f"{Colors.DIM}{'─' * 80}{Colors.RESET}")
print(f"\n{Colors.GREEN}✅ End of file{Colors.RESET}\n")
- except Exception as e:
+ except Exception as e: # pylint: disable=broad-exception-caught
print(f"\n{Colors.RED}❌ Error reading file: {e}{Colors.RESET}\n")
def print_banner():
"""Print welcome banner with proper alignment"""
- BOX_WIDTH = 58
+ BOX_WIDTH = 58 # pylint: disable=invalid-name
banner_text = f"{Colors.BOLD}🤖 Mini Agent - Multi-turn Interactive Session{Colors.RESET}"
banner_width = calculate_display_width(banner_text)
@@ -222,7 +223,7 @@ def print_help():
def print_session_info(agent: Agent, workspace_dir: Path, model: str):
"""Print session information with proper alignment"""
- BOX_WIDTH = 58
+ BOX_WIDTH = 58 # pylint: disable=invalid-name
def print_info_line(text: str):
"""Print a single info line with proper padding"""
@@ -394,7 +395,7 @@ async def initialize_base_tools(config: Config):
print(f"{Colors.GREEN}✅ Loaded Skill tool (get_skill){Colors.RESET}")
else:
print(f"{Colors.YELLOW}⚠️ No available Skills found{Colors.RESET}")
- except Exception as e:
+ except Exception as e: # pylint: disable=broad-exception-caught
print(f"{Colors.YELLOW}⚠️ Failed to load Skills: {e}{Colors.RESET}")
# 4. MCP tools (loaded with priority search)
@@ -424,7 +425,7 @@ async def initialize_base_tools(config: Config):
print(f"{Colors.YELLOW}⚠️ No available MCP tools found{Colors.RESET}")
else:
print(f"{Colors.YELLOW}⚠️ MCP config file not found: {config.tools.mcp_config_path}{Colors.RESET}")
- except Exception as e:
+ except Exception as e: # pylint: disable=broad-exception-caught
print(f"{Colors.YELLOW}⚠️ Failed to load MCP tools: {e}{Colors.RESET}")
print() # Empty line separator
@@ -479,7 +480,7 @@ async def _quiet_cleanup():
loop.set_exception_handler(lambda _loop, _ctx: None)
try:
await cleanup_mcp_connections()
- except Exception:
+ except Exception: # pylint: disable=broad-exception-caught
pass
@@ -531,13 +532,11 @@ async def run_agent(workspace_dir: Path, task: str = None):
print(f"{Colors.RED}❌ Error: {e}{Colors.RESET}")
print(f"{Colors.YELLOW}Please check the configuration file format{Colors.RESET}")
return
- except Exception as e:
+ except Exception as e: # pylint: disable=broad-exception-caught
print(f"{Colors.RED}❌ Error: Failed to load configuration file: {e}{Colors.RESET}")
return
# 2. Initialize LLM client
- from mini_agent.retry import RetryConfig as RetryConfigBase
-
# Convert configuration format
retry_config = RetryConfigBase(
enabled=config.llm.retry.enabled,
@@ -620,7 +619,7 @@ def on_retry(exception: Exception, attempt: int):
agent.add_user_message(task)
try:
await agent.run()
- except Exception as e:
+ except Exception as e: # pylint: disable=broad-exception-caught
print(f"\n{Colors.RED}❌ Error: {e}{Colors.RESET}")
finally:
print_stats(agent, session_start)
@@ -654,12 +653,12 @@ def _(event):
event.current_buffer.reset()
@kb.add("c-l") # Ctrl+L: Clear screen (optional bonus)
- def _(event):
+ def _(event): # pylint: disable=function-redefined
"""Clear the screen"""
event.app.renderer.clear()
@kb.add("c-j") # Ctrl+J (对应 Ctrl+Enter)
- def _(event):
+ def _(event): # pylint: disable=function-redefined
"""Insert a newline"""
event.current_buffer.insert_text("\n")
@@ -701,26 +700,26 @@ def _(event):
print_stats(agent, session_start)
break
- elif command == "/help":
+ if command == "/help":
print_help()
continue
- elif command == "/clear":
+ if command == "/clear":
# Clear message history but keep system prompt
old_count = len(agent.messages)
agent.messages = [agent.messages[0]] # Keep only system message
print(f"{Colors.GREEN}✅ Cleared {old_count - 1} messages, starting new session{Colors.RESET}\n")
continue
- elif command == "/history":
+ if command == "/history":
print(f"\n{Colors.BRIGHT_CYAN}Current session message count: {len(agent.messages)}{Colors.RESET}\n")
continue
- elif command == "/stats":
+ if command == "/stats":
print_stats(agent, session_start)
continue
- elif command == "/log" or command.startswith("/log "):
+ if command == "/log" or command.startswith("/log "):
# Parse /log command
parts = user_input.split(maxsplit=1)
if len(parts) == 1:
@@ -732,10 +731,9 @@ def _(event):
read_log_file(filename)
continue
- else:
- print(f"{Colors.RED}❌ Unknown command: {user_input}{Colors.RESET}")
- print(f"{Colors.DIM}Type /help to see available commands{Colors.RESET}\n")
- continue
+ print(f"{Colors.RED}❌ Unknown command: {user_input}{Colors.RESET}")
+ print(f"{Colors.DIM}Type /help to see available commands{Colors.RESET}\n")
+ continue
# Normal conversation - exit check
if user_input.lower() in ["exit", "quit", "q"]:
@@ -761,7 +759,7 @@ def esc_key_listener():
"""Listen for Esc key in a separate thread."""
if platform.system() == "Windows":
try:
- import msvcrt
+ import msvcrt # pylint: disable=import-error,import-outside-toplevel
while not esc_listener_stop.is_set():
if msvcrt.kbhit():
@@ -772,15 +770,15 @@ def esc_key_listener():
cancel_event.set()
break
esc_listener_stop.wait(0.05)
- except Exception:
+ except Exception: # pylint: disable=broad-exception-caught
pass
return
# Unix/macOS
try:
- import select
- import termios
- import tty
+ import select # pylint: disable=import-outside-toplevel
+ import termios # pylint: disable=import-error,import-outside-toplevel
+ import tty # pylint: disable=import-error,import-outside-toplevel
fd = sys.stdin.fileno()
old_settings = termios.tcgetattr(fd)
@@ -798,7 +796,7 @@ def esc_key_listener():
break
finally:
termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)
- except Exception:
+ except Exception: # pylint: disable=broad-exception-caught
pass
# Start Esc listener thread
@@ -833,7 +831,7 @@ def esc_key_listener():
print_stats(agent, session_start)
break
- except Exception as e:
+ except Exception as e: # pylint: disable=broad-exception-caught
print(f"\n{Colors.RED}❌ Error: {e}{Colors.RESET}")
print(f"{Colors.DIM}{'─' * 60}{Colors.RESET}\n")
diff --git a/mini_agent/config/mcp-example.json b/mini_agent/config/mcp-example.json
index 4147b554..87e361df 100644
--- a/mini_agent/config/mcp-example.json
+++ b/mini_agent/config/mcp-example.json
@@ -1,7 +1,209 @@
{
"mcpServers": {
+
+ "slack": {
+ "description": "Slack - read/write channels, search messages, send scheduled messages",
+ "type": "http",
+ "url": "https://mcp.slack.com/mcp",
+ "oauth": {
+ "clientId": "1601185624273.8899143856786",
+ "callbackPort": 3118
+ },
+ "disabled": true
+ },
+
+ "linear": {
+ "description": "Linear - project management, issue tracking, milestones",
+ "type": "http",
+ "url": "https://mcp.linear.app/mcp",
+ "disabled": true
+ },
+
+ "asana": {
+ "description": "Asana - task management, project status, portfolios",
+ "type": "sse",
+ "url": "https://mcp.asana.com/sse",
+ "disabled": true
+ },
+
+ "notion": {
+ "description": "Notion - page read/write, database queries, search. Replace YOUR_NOTION_TOKEN with your Integration token",
+ "command": "npx",
+ "args": [
+ "-y",
+ "@notionhq/notion-mcp-server"
+ ],
+ "env": {
+ "OPENAPI_MCP_HEADERS": "{\"Authorization\": \"Bearer YOUR_NOTION_TOKEN\", \"Notion-Version\": \"2022-06-28\"}"
+ },
+ "disabled": true
+ },
+
+ "gmail": {
+ "description": "Gmail - read, draft, and search mail (first launch runs OAuth authorization)",
+ "command": "npx",
+ "args": [
+ "-y",
+ "@gongrzhe/server-gmail-autoauth-mcp"
+ ],
+ "disabled": true
+ },
+
+ "figma": {
+ "description": "Figma - read design files, fetch components, take screenshots. Replace YOUR_FIGMA_API_KEY",
+ "command": "npx",
+ "args": [
+ "-y",
+ "figma-developer-mcp",
+ "--figma-api-key=YOUR_FIGMA_API_KEY",
+ "--stdio"
+ ],
+ "disabled": true
+ },
+
+ "cloudflare": {
+ "description": "Cloudflare - manage Workers, D1, KV, R2. Replace YOUR_CLOUDFLARE_API_TOKEN",
+ "type": "http",
+ "url": "https://mcp.cloudflare.com/sse",
+ "env": {
+ "CLOUDFLARE_API_TOKEN": "YOUR_CLOUDFLARE_API_TOKEN"
+ },
+ "disabled": true
+ },
+
+ "memory": {
+ "description": "Memory - knowledge-graph long-term memory system (matches Claude Code's memory MCP)",
+ "command": "npx",
+ "args": [
+ "-y",
+ "@modelcontextprotocol/server-memory"
+ ],
+ "disabled": true
+ },
+
+ "filesystem": {
+ "description": "Filesystem - read/write the local file system. Replace YOUR_ALLOWED_DIR with the path to allow access to",
+ "command": "npx",
+ "args": [
+ "-y",
+ "@modelcontextprotocol/server-filesystem",
+ "YOUR_ALLOWED_DIR"
+ ],
+ "disabled": true
+ },
+
+ "sequential-thinking": {
+ "description": "Sequential Thinking - step-by-step reasoning, suited to complex analysis tasks",
+ "command": "npx",
+ "args": [
+ "-y",
+ "@modelcontextprotocol/server-sequential-thinking"
+ ],
+ "disabled": true
+ },
+
+ "brave-search": {
+ "description": "Brave Search - web search. Requires BRAVE_API_KEY",
+ "command": "npx",
+ "args": [
+ "-y",
+ "@modelcontextprotocol/server-brave-search"
+ ],
+ "env": {
+ "BRAVE_API_KEY": "YOUR_BRAVE_API_KEY"
+ },
+ "disabled": true
+ },
+
+ "puppeteer": {
+ "description": "Puppeteer - browser automation, screenshots, page scraping",
+ "command": "npx",
+ "args": [
+ "-y",
+ "@modelcontextprotocol/server-puppeteer"
+ ],
+ "disabled": true
+ },
+
+ "playwright": {
+ "description": "Playwright - advanced browser automation and E2E testing (matches Claude Code's playwright MCP)",
+ "command": "npx",
+ "args": [
+ "@playwright/mcp@latest"
+ ],
+ "disabled": true
+ },
+
+ "context7": {
+ "description": "Context7 - fetches up-to-date package documentation, solving the stale-docs problem for AI",
+ "command": "npx",
+ "args": [
+ "-y",
+ "@upstash/context7-mcp"
+ ],
+ "disabled": true
+ },
+
+ "github": {
+ "description": "GitHub - repo management, PRs, issues (requires a GitHub Copilot token)",
+ "type": "http",
+ "url": "https://api.githubcopilot.com/mcp/",
+ "headers": {
+ "Authorization": "Bearer YOUR_GITHUB_PERSONAL_ACCESS_TOKEN"
+ },
+ "disabled": true
+ },
+
+ "gitlab": {
+ "description": "GitLab - repo management, MRs, pipelines",
+ "type": "http",
+ "url": "https://gitlab.com/api/v4/mcp",
+ "disabled": true
+ },
+
+ "firebase": {
+ "description": "Firebase - Firestore, Auth, Hosting management (log in with firebase-tools first)",
+ "command": "npx",
+ "args": [
+ "-y",
+ "firebase-tools@latest",
+ "mcp"
+ ],
+ "disabled": true
+ },
+
+ "supabase": {
+ "description": "Supabase - PostgreSQL, Auth, Storage, Edge Functions",
+ "type": "http",
+ "url": "https://mcp.supabase.com/mcp",
+ "disabled": true
+ },
+
+ "terraform": {
+ "description": "Terraform - IaC management, Terraform Cloud. Requires TFE_TOKEN and Docker",
+ "command": "docker",
+ "args": [
+ "run",
+ "-i",
+ "--rm",
+ "-e", "TFE_TOKEN=YOUR_TFE_TOKEN",
+ "hashicorp/terraform-mcp-server:0.4.0"
+ ],
+ "disabled": true
+ },
+
+ "greptile": {
+ "description": "Greptile - AI 代碼庫搜尋與理解。需填入 GREPTILE_API_KEY",
+ "type": "http",
+ "url": "https://api.greptile.com/mcp",
+ "headers": {
+ "Authorization": "Bearer YOUR_GREPTILE_API_KEY"
+ },
+ "disabled": true
+ },
+
"minimax_search": {
- "description": "MiniMax Search - Powerful web search and intelligent browsing ⭐",
+ "description": "MiniMax Search - 網頁搜尋與智慧瀏覽(需 JINA / SERPER / MINIMAX API Key)",
"type": "stdio",
"command": "uvx",
"args": [
@@ -15,15 +217,6 @@
"MINIMAX_API_KEY": ""
},
"disabled": true
- },
- "memory": {
- "description": "Memory - Knowledge graph memory system (long-term memory based on graph database)",
- "command": "npx",
- "args": [
- "-y",
- "@modelcontextprotocol/server-memory"
- ],
- "disabled": true
}
}
-}
\ No newline at end of file
+}
diff --git a/mini_agent/retry.py b/mini_agent/retry.py
index 8b5f4e28..bf95f09f 100644
--- a/mini_agent/retry.py
+++ b/mini_agent/retry.py
@@ -109,16 +109,24 @@ async def wrapper(*args: Any, **kwargs: Any) -> Any:
# If this is the last attempt, don't retry
if attempt >= config.max_retries:
- logger.error(f"Function {func.__name__} retry failed, reached maximum retry count {config.max_retries}")
- raise RetryExhaustedError(e, attempt + 1)
+ logger.error(
+ "Function %s retry failed, reached maximum retry count %d",
+ func.__name__,
+ config.max_retries,
+ )
+ raise RetryExhaustedError(e, attempt + 1) from e
# Calculate delay time
delay = config.calculate_delay(attempt)
# Log
logger.warning(
- f"Function {func.__name__} call {attempt + 1} failed: {str(e)}, "
- f"retrying attempt {attempt + 2} after {delay:.2f} seconds"
+ "Function %s call %d failed: %s, retrying attempt %d after %.2f seconds",
+ func.__name__,
+ attempt + 1,
+ str(e),
+ attempt + 2,
+ delay,
)
# Call callback function
@@ -131,7 +139,7 @@ async def wrapper(*args: Any, **kwargs: Any) -> Any:
# Should not reach here in theory
if last_exception:
raise last_exception
- raise Exception("Unknown error")
+ raise RuntimeError("Unknown error") # W0719: use specific exception
return wrapper
diff --git a/mini_agent/skills/README_TW.md b/mini_agent/skills/README_TW.md
new file mode 100644
index 00000000..f64a391e
--- /dev/null
+++ b/mini_agent/skills/README_TW.md
@@ -0,0 +1,125 @@
+# Skills
+
+Skills 是包含指令、腳本和資源的資料夾,Claude 可以動態載入這些內容以提升專業任務的效能。Skills 教導 Claude 如何以可重複的方式完成特定任務,無論是使用您公司的品牌指南建立文件、使用您組織的特定工作流程分析資料,或是自動化個人任務。
+
+如需更多資訊,請參考:
+- [什麼是 skills?](https://support.claude.com/en/articles/12512176-what-are-skills)
+- [在 Claude 中使用 skills](https://support.claude.com/en/articles/12512180-using-skills-in-claude)
+- [如何建立自訂 skills](https://support.claude.com/en/articles/12512198-creating-custom-skills)
+- [使用 Agent Skills 為現實世界配備代理](https://anthropic.com/engineering/equipping-agents-for-the-real-world-with-agent-skills)
+
+# 關於此儲存庫
+
+此儲存庫包含示範 Claude skills 系統功能的範例 skills。這些範例範圍廣泛,從創意應用(藝術、音樂、設計)到技術任務(測試 Web 應用程式、MCP 伺服器生成)再到企業工作流程(通訊、品牌等)。
+
+每個 skill 都獨立存放在自己的目錄中,包含 `SKILL.md` 檔案,內含 Claude 使用的指令和中繼資料。瀏覽這些範例以取得建立您自己 skills 的靈感,或了解不同的模式和方法。
+
+此儲存庫中的範例 skills 是開源的(Apache 2.0)。我們也在 [`document-skills/`](./document-skills/) 資料夾中收錄了驅動 [Claude 文件功能](https://www.anthropic.com/news/create-files) 的文件建立與編輯 skills。這些 skills 採用 source-available 授權而非開源授權,但我們希望將它們分享給開發者,作為實際用於生產環境 AI 應用程式中較複雜 skills 的參考。
+
+**注意:**這些是作為靈感和學習的參考範例。它們展示的是通用功能,而非組織特定的工作流程或敏感內容。
+
+## 免責聲明
+
+**這些 skills 僅供示範和教育目的使用。** 雖然其中某些功能可能在 Claude 中可用,但您從 Claude 實際獲得的實作和行為可能與這些範例有所不同。這些範例旨在說明模式和可能性。在依賴它們執行關鍵任務之前,請務必在您自己的環境中徹底測試 skills。
+
+# 範例 Skills
+
+此儲存庫包含各種展示不同功能的範例 skills:
+
+## 創意與設計
+- **algorithmic-art** - 使用 p5.js 建立生成藝術,包含 seeded randomness、flow fields 和粒子系統
+- **canvas-design** - 使用設計理念在 .png 和 .pdf 格式中設計美麗的視覺藝術
+- **slack-gif-creator** - 建立最佳化以符合 Slack 大小限制的動畫 GIF
+
+## 開發與技術
+- **artifacts-builder** - 使用 React、Tailwind CSS 和 shadcn/ui 元件建立複雜的 claude.ai HTML artifacts
+- **mcp-server** - 建立高品質 MCP 伺服器的指南,用於整合外部 API 和服務
+- **webapp-testing** - 使用 Playwright 測試本機 Web 應用程式,用於 UI 驗證和除錯
+
+## 企業與通訊
+- **brand-guidelines** - 將 Anthropic 的官方品牌顏色和字體應用於 artifacts
+- **internal-comms** - 撰寫內部通訊,如狀態報告、電子報和常見問題
+- **theme-factory** - 使用 10 種預設專業主題為 artifacts 設定樣式,或隨時產生自訂主題
+
+## 元技能
+- **skill-creator** - 建立有效延伸 Claude 能力的 skills 指南
+- **template-skill** - 可作為建立新技能起點的基本範本
+
+# Document Skills
+
+`document-skills/` 子目錄包含 Anthropic 開發的 skills,幫助 Claude 建立各種文件檔案格式。這些 skills 展示了處理複雜檔案格式和二進位資料的進階模式:
+
+- **docx** - 建立、編輯和分析 Word 文件,支援追蹤變更、註解、格式保留和文字擷取
+- **pdf** - 全面的 PDF 處理工具包,用於擷取文字和表格、建立新 PDF、合併/分割文件和處理表單
+- **pptx** - 建立、編輯和分析 PowerPoint 簡報,支援版面配置、範本、圖表和自動投影片生成
+- **xlsx** - 建立、編輯和分析 Excel 試算表,支援公式、格式、資料分析和視覺化
+
+**重要免責聲明:**這些 document skills 是特定時間點的快照,並非主動維護或更新。這些 skills 的版本已預先包含在 Claude 中。它們主要作為參考範例,用來說明 Anthropic 如何開發更複雜的 skills 來處理二進位檔案格式和文件結構。
+
+# 在 Claude Code、Claude.ai 和 API 中嘗試
+
+## Claude Code
+
+您可以在 Claude Code 中執行以下命令,將此儲存庫註冊為 Claude Code Plugin marketplace:
+```
+/plugin marketplace add anthropics/skills
+```
+
+然後,要安裝特定的 skills 組合:
+1. 選擇 `Browse and install plugins`
+2. 選擇 `anthropic-agent-skills`
+3. 選擇 `document-skills` 或 `example-skills`
+4. 選擇 `Install now`
+
+或者,直接透過以下方式安裝任一 Plugin:
+```
+/plugin install document-skills@anthropic-agent-skills
+/plugin install example-skills@anthropic-agent-skills
+```
+
+安裝外掛後,您可以透過提及它來使用 skill。例如,如果您從 marketplace 安裝了 `document-skills` 外掛,您可以要求 Claude Code 做類似這樣的事情:「使用 PDF skill 從 path/to/some-file.pdf 擷取表單欄位」
+
+## Claude.ai
+
+這些範例 skills 已開放給 Claude.ai 付費方案的使用者使用。
+
+要使用此儲存庫中的任何 skill 或上傳自訂 skills,請參考[在 Claude 中使用 skills](https://support.claude.com/en/articles/12512180-using-skills-in-claude#h_a4222fa77b)中的說明。
+
+## Claude API
+
+您可以使用 Anthropic 的預設 skills,並透過 Claude API 上傳自訂 skills。請參考 [Skills API 快速入門](https://docs.claude.com/en/api/skills-guide#creating-a-skill) 了解更多資訊。
+
+# 建立基本 Skill
+
+建立 skills 很簡單 - 只需要一個包含 `SKILL.md` 檔案的資料夾,該檔案包含 YAML frontmatter 和指令。您可以使用此儲存庫中的 **template-skill** 作為起點:
+
+```markdown
+---
+name: my-skill-name
+description: 清楚說明此 skill 的功能以及何時使用它
+---
+
+# My Skill Name
+
+[在此新增 Claude 在此 skill 啟動時將遵循的指令]
+
+## 範例
+- 使用範例 1
+- 使用範例 2
+
+## 指南
+- 指南 1
+- 指南 2
+```
+
+frontmatter 只需要兩個欄位:
+- `name` - 您的 skill 的唯一識別碼(使用小寫字母,以連字號代替空格)
+- `description` - 完整說明 skill 的功能以及何時使用它
+
+下方的 markdown 內容包含 Claude 將遵循的指令、範例和指南。如需更多詳細資訊,請參考[如何建立自訂 skills](https://support.claude.com/en/articles/12512198-creating-custom-skills)。
+
+# 合作夥伴 Skills
+
+Skills 是教導 Claude 如何更善於使用特定軟體工具的絕佳方式。當我們看到來自合作夥伴的精彩範例 skills 時,我們可能會在此處重點介紹其中一些:
+
+- **Notion** - [Notion Skills for Claude](https://www.notion.so/notiondevs/Notion-Skills-for-Claude-28da4445d27180c7af1df7d8615723d0)
\ No newline at end of file
diff --git a/mini_agent/skills/create-pull-request/SKILL.md b/mini_agent/skills/create-pull-request/SKILL.md
new file mode 100644
index 00000000..9aa5add8
--- /dev/null
+++ b/mini_agent/skills/create-pull-request/SKILL.md
@@ -0,0 +1,211 @@
+---
+name: create-pull-request
+description: Create a GitHub pull request following project conventions. Use when the user asks to create a PR, submit changes for review, or open a pull request. Handles commit analysis, branch management, PR template usage, and PR creation using the gh CLI tool.
+---
+
+# Create Pull Request
+
+This skill guides you through creating a well-structured GitHub pull request that follows project conventions and best practices.
+
+## Prerequisites Check
+
+Before proceeding, verify the following:
+
+### 1. Check if `gh` CLI is installed
+
+```bash
+gh --version
+```
+
+If not installed, inform the user:
+> The GitHub CLI (`gh`) is required but not installed. Please install it:
+> - macOS: `brew install gh`
+> - Other: https://cli.github.com/
+
+### 2. Check if authenticated with GitHub
+
+```bash
+gh auth status
+```
+
+If not authenticated, guide the user to run `gh auth login`.
+
+### 3. Verify clean working directory
+
+```bash
+git status
+```
+
+If there are uncommitted changes, ask the user whether to:
+- Commit them as part of this PR
+- Stash them temporarily
+- Discard them (with caution)
+
+## Gather Context
+
+### 1. Identify the current branch
+
+```bash
+git branch --show-current
+```
+
+Ensure you're not on `main` or `master`. If so, ask the user to create or switch to a feature branch.
+
+### 2. Find the base branch
+
+```bash
+git remote show origin | grep "HEAD branch"
+```
+
+This is typically `main` or `master`.
+
+### 3. Analyze recent commits relevant to this PR
+
+```bash
+git log origin/main..HEAD --oneline --no-decorate
+```
+
+Review these commits to understand:
+- What changes are being introduced
+- The scope of the PR (single feature/fix or multiple changes)
+- Whether commits should be squashed or reorganized
+
+### 4. Review the diff
+
+```bash
+git diff origin/main..HEAD --stat
+```
+
+This shows which files changed and helps identify the type of change.
+
+## Information Gathering
+
+Before creating the PR, you need the following information. Check if it can be inferred from:
+- Commit messages
+- Branch name (e.g., `fix/issue-123`, `feature/new-login`)
+- Changed files and their content
+
+If any critical information is missing, use `ask_followup_question` to ask the user:
+
+### Required Information
+
+1. **Related Issue Number**: Look for patterns like `#123`, `fixes #123`, or `closes #123` in commit messages
+2. **Description**: What problem does this solve? Why were these changes made?
+3. **Type of Change**: Bug fix, new feature, breaking change, refactor, cosmetic, documentation, or workflow
+4. **Test Procedure**: How was this tested? What could break?
+
+### Example clarifying question
+
+If the issue number is not found:
+> I couldn't find a related issue number in the commit messages or branch name. What GitHub issue does this PR address? (Enter the issue number, e.g., "123" or "N/A" for small fixes)
+
+## Git Best Practices
+
+Before creating the PR, consider these best practices:
+
+### Commit Hygiene
+
+1. **Atomic commits**: Each commit should represent a single logical change
+2. **Clear commit messages**: Follow conventional commit format when possible
+3. **No merge commits**: Prefer rebasing over merging to keep history clean
+
+### Branch Management
+
+1. **Rebase on latest main** (if needed):
+ ```bash
+ git fetch origin
+ git rebase origin/main
+ ```
+
+2. **Squash if appropriate**: If there are many small "WIP" commits, consider interactive rebase:
+ ```bash
+ git rebase -i origin/main
+ ```
+ Only suggest this if commits appear messy and the user is comfortable with rebasing.
+
+### Push Changes
+
+Ensure all commits are pushed:
+```bash
+git push origin HEAD
+```
+
+If the branch was rebased, you may need:
+```bash
+git push origin HEAD --force-with-lease
+```
+
+## Create the Pull Request
+
+**IMPORTANT**: Read and use the PR template at `.github/pull_request_template.md`. The PR body format must **strictly match** the template structure. Do not deviate from the template format.
+
+When filling out the template:
+- Replace `#XXXX` with the actual issue number, or keep as `#XXXX` if no issue exists (for small fixes)
+- Fill in all sections with relevant information gathered from commits and context
+- Mark the appropriate "Type of Change" checkbox(es)
+- Complete the "Pre-flight Checklist" items that apply
+
+### Create PR with gh CLI
+
+**Use a temporary file for the PR body** to avoid shell escaping issues, newline problems, and other command-line flakiness:
+
+1. Write the PR body to a temporary file, for example with a heredoc:
+   ```bash
+   cat > /tmp/pr-body.md <<'EOF'
+   [PR body following the template]
+   EOF
+   ```
+
+2. Create the PR using the file:
+ ```bash
+ gh pr create --title "PR_TITLE" --body-file /tmp/pr-body.md --base main
+ ```
+
+3. Clean up the temporary file:
+ ```bash
+ rm /tmp/pr-body.md
+ ```
+
+For draft PRs:
+```bash
+gh pr create --title "PR_TITLE" --body-file /tmp/pr-body.md --base main --draft
+```
+
+**Why use a file?** Passing complex markdown with newlines, special characters, and checkboxes directly via `--body` is error-prone. The `--body-file` flag handles all content reliably.
+
+## Post-Creation
+
+After creating the PR:
+
+1. **Display the PR URL** so the user can review it
+2. **Remind about CI checks**: Tests and linting will run automatically
+3. **Suggest next steps**:
+ - Add reviewers if needed: `gh pr edit --add-reviewer USERNAME`
+ - Add labels if needed: `gh pr edit --add-label "bug"`
+
+## Error Handling
+
+### Common Issues
+
+1. **No commits ahead of main**: The branch has no changes to submit
+ - Ask if the user meant to work on a different branch
+
+2. **Branch not pushed**: Remote doesn't have the branch
+ - Push the branch first: `git push -u origin HEAD`
+
+3. **PR already exists**: A PR for this branch already exists
+ - Show the existing PR: `gh pr view`
+ - Ask if they want to update it instead
+
+4. **Merge conflicts**: Branch conflicts with base
+ - Guide user through resolving conflicts or rebasing
+
+## Summary Checklist
+
+Before finalizing, ensure:
+- [ ] `gh` CLI is installed and authenticated
+- [ ] Working directory is clean
+- [ ] All commits are pushed
+- [ ] Branch is up-to-date with base branch
+- [ ] Related issue number is identified, or placeholder is used
+- [ ] PR description follows the template exactly
+- [ ] Appropriate type of change is selected
+- [ ] Pre-flight checklist items are addressed
\ No newline at end of file
diff --git a/mini_agent/skills/doc-coauthoring/SKILL.md b/mini_agent/skills/doc-coauthoring/SKILL.md
new file mode 100644
index 00000000..a5a69839
--- /dev/null
+++ b/mini_agent/skills/doc-coauthoring/SKILL.md
@@ -0,0 +1,375 @@
+---
+name: doc-coauthoring
+description: Guide users through a structured workflow for co-authoring documentation. Use when user wants to write documentation, proposals, technical specs, decision docs, or similar structured content. This workflow helps users efficiently transfer context, refine content through iteration, and verify the doc works for readers. Trigger when user mentions writing docs, creating proposals, drafting specs, or similar documentation tasks.
+---
+
+# Doc Co-Authoring Workflow
+
+This skill provides a structured workflow for guiding users through collaborative document creation. Act as an active guide, walking users through three stages: Context Gathering, Refinement & Structure, and Reader Testing.
+
+## When to Offer This Workflow
+
+**Trigger conditions:**
+- User mentions writing documentation: "write a doc", "draft a proposal", "create a spec", "write up"
+- User mentions specific doc types: "PRD", "design doc", "decision doc", "RFC"
+- User seems to be starting a substantial writing task
+
+**Initial offer:**
+Offer the user a structured workflow for co-authoring the document. Explain the three stages:
+
+1. **Context Gathering**: User provides all relevant context while Claude asks clarifying questions
+2. **Refinement & Structure**: Iteratively build each section through brainstorming and editing
+3. **Reader Testing**: Test the doc with a fresh Claude (no context) to catch blind spots before others read it
+
+Explain that this approach helps ensure the doc works well when others read it (including when they paste it into Claude). Ask if they want to try this workflow or prefer to work freeform.
+
+If user declines, work freeform. If user accepts, proceed to Stage 1.
+
+## Stage 1: Context Gathering
+
+**Goal:** Close the gap between what the user knows and what Claude knows, enabling smart guidance later.
+
+### Initial Questions
+
+Start by asking the user for meta-context about the document:
+
+1. What type of document is this? (e.g., technical spec, decision doc, proposal)
+2. Who's the primary audience?
+3. What's the desired impact when someone reads this?
+4. Is there a template or specific format to follow?
+5. Any other constraints or context to know?
+
+Inform them they can answer in shorthand or dump information however works best for them.
+
+**If user provides a template or mentions a doc type:**
+- Ask if they have a template document to share
+- If they provide a link to a shared document, use the appropriate integration to fetch it
+- If they provide a file, read it
+
+**If user mentions editing an existing shared document:**
+- Use the appropriate integration to read the current state
+- Check for images without alt-text
+- If images exist without alt-text, explain that when others use Claude to understand the doc, Claude won't be able to see them. Ask if they want alt-text generated. If so, request they paste each image into chat for descriptive alt-text generation.
+
+### Info Dumping
+
+Once initial questions are answered, encourage the user to dump all the context they have. Request information such as:
+- Background on the project/problem
+- Related team discussions or shared documents
+- Why alternative solutions aren't being used
+- Organizational context (team dynamics, past incidents, politics)
+- Timeline pressures or constraints
+- Technical architecture or dependencies
+- Stakeholder concerns
+
+Advise them not to worry about organizing it - just get it all out. Offer multiple ways to provide context:
+- Info dump stream-of-consciousness
+- Point to team channels or threads to read
+- Link to shared documents
+
+**If integrations are available** (e.g., Slack, Teams, Google Drive, SharePoint, or other MCP servers), mention that these can be used to pull in context directly.
+
+**If no integrations are detected and in Claude.ai or Claude app:** Suggest they can enable connectors in their Claude settings to allow pulling context from messaging apps and document storage directly.
+
+Inform them clarifying questions will be asked once they've done their initial dump.
+
+**During context gathering:**
+
+- If user mentions team channels or shared documents:
+ - If integrations available: Inform them the content will be read now, then use the appropriate integration
+ - If integrations not available: Explain lack of access. Suggest they enable connectors in Claude settings, or paste the relevant content directly.
+
+- If user mentions entities/projects that are unknown:
+ - Ask if connected tools should be searched to learn more
+ - Wait for user confirmation before searching
+
+- As user provides context, track what's being learned and what's still unclear
+
+**Asking clarifying questions:**
+
+When user signals they've done their initial dump (or after substantial context provided), ask clarifying questions to ensure understanding:
+
+Generate 5-10 numbered questions based on gaps in the context.
+
+Inform them they can use shorthand to answer (e.g., "1: yes, 2: see #channel, 3: no because backwards compat"), link to more docs, point to channels to read, or just keep info-dumping. Whatever's most efficient for them.
+
+**Exit condition:**
+Sufficient context has been gathered when questions show understanding - when edge cases and trade-offs can be asked about without needing basics explained.
+
+**Transition:**
+Ask if there's any more context they want to provide at this stage, or if it's time to move on to drafting the document.
+
+If user wants to add more, let them. When ready, proceed to Stage 2.
+
+## Stage 2: Refinement & Structure
+
+**Goal:** Build the document section by section through brainstorming, curation, and iterative refinement.
+
+**Instructions to user:**
+Explain that the document will be built section by section. For each section:
+1. Clarifying questions will be asked about what to include
+2. 5-20 options will be brainstormed
+3. User will indicate what to keep/remove/combine
+4. The section will be drafted
+5. It will be refined through surgical edits
+
+Start with whichever section has the most unknowns (usually the core decision/proposal), then work through the rest.
+
+**Section ordering:**
+
+If the document structure is clear:
+Ask which section they'd like to start with.
+
+Suggest starting with whichever section has the most unknowns. For decision docs, that's usually the core proposal. For specs, it's typically the technical approach. Summary sections are best left for last.
+
+If user doesn't know what sections they need:
+Based on the type of document and template, suggest 3-5 sections appropriate for the doc type.
+
+Ask if this structure works, or if they want to adjust it.
+
+**Once structure is agreed:**
+
+Create the initial document structure with placeholder text for all sections.
+
+**If access to artifacts is available:**
+Use `create_file` to create an artifact. This gives both Claude and the user a scaffold to work from.
+
+Inform them that the initial structure with placeholders for all sections will be created.
+
+Create artifact with all section headers and brief placeholder text like "[To be written]" or "[Content here]".
+
+Provide the scaffold link and indicate it's time to fill in each section.
+
+**If no access to artifacts:**
+Create a markdown file in the working directory. Name it appropriately (e.g., `decision-doc.md`, `technical-spec.md`).
+
+Inform them that the initial structure with placeholders for all sections will be created.
+
+Create file with all section headers and placeholder text.
+
+Confirm the filename has been created and indicate it's time to fill in each section.
+
+**For each section:**
+
+### Step 1: Clarifying Questions
+
+Announce work will begin on the [SECTION NAME] section. Ask 5-10 clarifying questions about what should be included:
+
+Generate 5-10 specific questions based on context and section purpose.
+
+Inform them they can answer in shorthand or just indicate what's important to cover.
+
+### Step 2: Brainstorming
+
+For the [SECTION NAME] section, brainstorm [5-20] things that might be included, depending on the section's complexity. Look for:
+- Context shared that might have been forgotten
+- Angles or considerations not yet mentioned
+
+Generate 5-20 numbered options based on section complexity. At the end, offer to brainstorm more if they want additional options.
+
+### Step 3: Curation
+
+Ask which points should be kept, removed, or combined. Request brief justifications to help learn priorities for the next sections.
+
+Provide examples:
+- "Keep 1,4,7,9"
+- "Remove 3 (duplicates 1)"
+- "Remove 6 (audience already knows this)"
+- "Combine 11 and 12"
+
+**If user gives freeform feedback** (e.g., "looks good" or "I like most of it but...") instead of numbered selections, extract their preferences and proceed. Parse what they want kept/removed/changed and apply it.
+
+### Step 4: Gap Check
+
+Based on what they've selected, ask if there's anything important missing for the [SECTION NAME] section.
+
+### Step 5: Drafting
+
+Use `str_replace` to replace the placeholder text for this section with the actual drafted content.
+
+Announce the [SECTION NAME] section will be drafted now based on what they've selected.
+
+**If using artifacts:**
+After drafting, provide a link to the artifact.
+
+Ask them to read through it and indicate what to change. Note that being specific helps learning for the next sections.
+
+**If using a file (no artifacts):**
+After drafting, confirm completion.
+
+Inform them the [SECTION NAME] section has been drafted in [filename]. Ask them to read through it and indicate what to change. Note that being specific helps learning for the next sections.
+
+**Key instruction for user (include when drafting the first section):**
+Provide a note: Instead of editing the doc directly, ask them to indicate what to change. This helps learning of their style for future sections. For example: "Remove the X bullet - already covered by Y" or "Make the third paragraph more concise".
+
+### Step 6: Iterative Refinement
+
+As user provides feedback:
+- Use `str_replace` to make edits (never reprint the whole doc)
+- **If using artifacts:** Provide link to artifact after each edit
+- **If using files:** Just confirm edits are complete
+- If user edits doc directly and asks to read it: mentally note the changes they made and keep them in mind for future sections (this shows their preferences)
+
+**Continue iterating** until user is satisfied with the section.
+
+### Quality Checking
+
+After 3 consecutive iterations with no substantial changes, ask if anything can be removed without losing important information.
+
+When section is done, confirm [SECTION NAME] is complete. Ask if ready to move to the next section.
+
+**Repeat for all sections.**
+
+### Near Completion
+
+As approaching completion (80%+ of sections done), announce intention to re-read the entire document and check for:
+- Flow and consistency across sections
+- Redundancy or contradictions
+- Anything that feels like "slop" or generic filler
+- Whether every sentence carries weight
+
+Read entire document and provide feedback.
+
+**When all sections are drafted and refined:**
+Announce all sections are drafted. Indicate intention to review the complete document one more time.
+
+Review for overall coherence, flow, completeness.
+
+Provide any final suggestions.
+
+Ask if ready to move to Reader Testing, or if they want to refine anything else.
+
+## Stage 3: Reader Testing
+
+**Goal:** Test the document with a fresh Claude (no context bleed) to verify it works for readers.
+
+**Instructions to user:**
+Explain that testing will now occur to see if the document actually works for readers. This catches blind spots - things that make sense to the authors but might confuse others.
+
+### Testing Approach
+
+**If access to sub-agents is available (e.g., in Claude Code):**
+
+Perform the testing directly without user involvement.
+
+### Step 1: Predict Reader Questions
+
+Announce intention to predict what questions readers might ask when trying to discover this document.
+
+Generate 5-10 questions that readers would realistically ask.
+
+### Step 2: Test with Sub-Agent
+
+Announce that these questions will be tested with a fresh Claude instance (no context from this conversation).
+
+For each question, invoke a sub-agent with just the document content and the question.
+
+Summarize what Reader Claude got right/wrong for each question.
+
+### Step 3: Run Additional Checks
+
+Announce additional checks will be performed.
+
+Invoke sub-agent to check for ambiguity, false assumptions, contradictions.
+
+Summarize any issues found.
+
+### Step 4: Report and Fix
+
+If issues found:
+Report that Reader Claude struggled with specific issues.
+
+List the specific issues.
+
+Indicate intention to fix these gaps.
+
+Loop back to refinement for problematic sections.
+
+---
+
+**If no access to sub-agents (e.g., claude.ai web interface):**
+
+The user will need to do the testing manually.
+
+### Step 1: Predict Reader Questions
+
+Ask what questions people might ask when trying to discover this document. What would they type into Claude.ai?
+
+Generate 5-10 questions that readers would realistically ask.
+
+### Step 2: Setup Testing
+
+Provide testing instructions:
+1. Open a fresh Claude conversation: https://claude.ai
+2. Paste or share the document content (if using a shared doc platform with connectors enabled, provide the link)
+3. Ask Reader Claude the generated questions
+
+For each question, instruct Reader Claude to provide:
+- The answer
+- Whether anything was ambiguous or unclear
+- What knowledge/context the doc assumes is already known
+
+Check if Reader Claude gives correct answers or misinterprets anything.
+
+### Step 3: Additional Checks
+
+Also ask Reader Claude:
+- "What in this doc might be ambiguous or unclear to readers?"
+- "What knowledge or context does this doc assume readers already have?"
+- "Are there any internal contradictions or inconsistencies?"
+
+### Step 4: Iterate Based on Results
+
+Ask what Reader Claude got wrong or struggled with. Indicate intention to fix those gaps.
+
+Loop back to refinement for any problematic sections.
+
+---
+
+### Exit Condition (Both Approaches)
+
+When Reader Claude consistently answers questions correctly and doesn't surface new gaps or ambiguities, the doc is ready.
+
+## Final Review
+
+When Reader Testing passes:
+Announce the doc has passed Reader Claude testing. Before completion:
+
+1. Recommend they do a final read-through themselves - they own this document and are responsible for its quality
+2. Suggest double-checking any facts, links, or technical details
+3. Ask them to verify it achieves the impact they wanted
+
+Ask if they want one more review, or if the work is done.
+
+**If user wants final review, provide it. Otherwise:**
+Announce document completion. Provide a few final tips:
+- Consider linking this conversation in an appendix so readers can see how the doc was developed
+- Use appendices to provide depth without bloating the main doc
+- Update the doc as feedback is received from real readers
+
+## Tips for Effective Guidance
+
+**Tone:**
+- Be direct and procedural
+- Explain rationale briefly when it affects user behavior
+- Don't try to "sell" the approach - just execute it
+
+**Handling Deviations:**
+- If user wants to skip a stage: Ask if they want to skip this and write freeform
+- If user seems frustrated: Acknowledge this is taking longer than expected. Suggest ways to move faster
+- Always give user agency to adjust the process
+
+**Context Management:**
+- Throughout, if context is missing on something mentioned, proactively ask
+- Don't let gaps accumulate - address them as they come up
+
+**Artifact Management:**
+- Use `create_file` for drafting full sections
+- Use `str_replace` for all edits
+- Provide artifact link after every change
+- Never use artifacts for brainstorming lists - that's just conversation
+
+**Quality over Speed:**
+- Don't rush through stages
+- Each iteration should make meaningful improvements
+- The goal is a document that actually works for readers
diff --git a/mini_agent/skills/mem0-codex/SKILL.md b/mini_agent/skills/mem0-codex/SKILL.md
new file mode 100644
index 00000000..036f1a24
--- /dev/null
+++ b/mini_agent/skills/mem0-codex/SKILL.md
@@ -0,0 +1,62 @@
+---
+name: mem0-codex
+description: >
+ Mem0 persistent memory integration for Codex. Automatically retrieve relevant
+ memories at the start of each task, store key learnings when tasks complete,
+ and capture session state before context is lost. Use the mem0 MCP tools
+ (add_memory, search_memories, get_memories, etc.) for all memory operations.
+---
+
+# Mem0 Memory Protocol for Codex
+
+You have access to persistent memory via the mem0 MCP tools. Follow this protocol to maintain context across sessions.
+
+## On every new task
+
+1. Call `search_memories` with a query related to the current task or project to load relevant context.
+2. Review returned memories to understand what has been learned in prior sessions.
+3. If appropriate, call `get_memories` to browse all stored memories for this user.
+
+## After completing significant work
+
+Extract key learnings and store them using the `add_memory` tool:
+
+- **Decisions made** -> Include metadata `{"type": "decision"}`
+- **Strategies that worked** -> Include metadata `{"type": "task_learning"}`
+- **Failed approaches** -> Include metadata `{"type": "anti_pattern"}`
+- **User preferences observed** -> Include metadata `{"type": "user_preference"}`
+- **Environment/setup discoveries** -> Include metadata `{"type": "environmental"}`
+- **Conventions established** -> Include metadata `{"type": "convention"}`
+
+Memories can be as detailed as needed -- include full context, reasoning, code snippets, file paths, and examples. Longer, searchable memories are more valuable than vague one-liners.
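+
+As a hedged sketch (field names are illustrative; match the schema of your mem0 MCP server), an `add_memory` payload might look like:
+
+```json
+{
+  "content": "Retry decision: raise RetryExhaustedError from the original exception so the cause chain survives in logs.",
+  "metadata": {"type": "decision"}
+}
+```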
+
+## Before losing context
+
+If context is about to be compacted or the session is ending, store a comprehensive session summary:
+
+```
+## Session Summary
+
+### User's Goal
+[What the user originally asked for]
+
+### What Was Accomplished
+[Numbered list of tasks completed]
+
+### Key Decisions Made
+[Architectural choices, trade-offs discussed]
+
+### Files Created or Modified
+[Important file paths with what changed]
+
+### Current State
+[What is in progress, pending items, next steps]
+```
+
+Include metadata: `{"type": "session_state"}`
+
+## Memory hygiene
+
+- Do NOT write to MEMORY.md or any file-based memory. Use mem0 MCP tools exclusively.
+- Only store genuinely useful learnings. Skip trivial interactions.
+- Use specific, searchable language in memory content.
diff --git a/mini_agent/skills/mem0/LICENSE b/mini_agent/skills/mem0/LICENSE
new file mode 100644
index 00000000..78c99ae2
--- /dev/null
+++ b/mini_agent/skills/mem0/LICENSE
@@ -0,0 +1,189 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but not
+ limited to compiled object code, generated documentation, and
+ conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work.
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to the Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by the Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding any notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ Copyright 2024 Mem0.ai
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/mini_agent/skills/mem0/README.md b/mini_agent/skills/mem0/README.md
new file mode 100644
index 00000000..f1cb1380
--- /dev/null
+++ b/mini_agent/skills/mem0/README.md
@@ -0,0 +1,73 @@
+# Mem0 Skill for Claude
+
+Add persistent memory to any AI application in minutes using [Mem0 Platform](https://app.mem0.ai).
+
+## What This Skill Does
+
+When installed, Claude can:
+
+- **Set up Mem0** in your Python or TypeScript project
+- **Integrate memory** into your existing AI app (LangChain, CrewAI, Vercel AI, OpenAI Agents, LangGraph, LlamaIndex, etc.)
+- **Generate working code** using real API references and tested patterns
+- **Search live docs** on demand for the latest Mem0 documentation
+
+## Installation
+
+This skill is included automatically when you install the Mem0 plugin:
+
+```
+/plugin marketplace add mem0ai/mem0
+/plugin install mem0@mem0-plugins
+```
+
+See the [plugin README](../../README.md) for full setup instructions.
+
+### Prerequisites
+
+- A Mem0 Platform API key ([Get one here](https://app.mem0.ai/dashboard/api-keys))
+- Python 3.10+ or Node.js 18+
+- Set the environment variable:
+
+ ```bash
+ export MEM0_API_KEY="m0-your-api-key"
+ ```
+
+## Quick Start
+
+After installing, just ask Claude:
+
+- "Set up mem0 in my project"
+- "Add memory to my chatbot"
+- "Help me search user memories with filters"
+- "Integrate mem0 with my LangChain app"
+- "Add graph memory to track entity relationships"
+
+## What's Inside
+
+```text
+skills/mem0/
+├── SKILL.md # Skill definition and instructions
+├── README.md # This file
+├── LICENSE # Apache-2.0
+├── scripts/
+│ └── mem0_doc_search.py # Search live Mem0 docs on demand
+└── references/ # Documentation (loaded on demand)
+ ├── quickstart.md # Full quickstart (Python, TS, cURL)
+ ├── sdk-guide.md # All SDK methods (Python + TypeScript)
+ ├── api-reference.md # REST endpoints, filters, memory object
+ ├── architecture.md # Processing pipeline, lifecycle, scoping, performance
+ ├── features.md # Retrieval, graph, categories, MCP, webhooks, multimodal
+ ├── integration-patterns.md # LangChain, CrewAI, Vercel AI, LangGraph, LlamaIndex, etc.
+ └── use-cases.md # 7 real-world patterns with Python + TypeScript code
+```
+
+## Links
+
+- [Mem0 Platform Dashboard](https://app.mem0.ai)
+- [Mem0 Documentation](https://docs.mem0.ai)
+- [Mem0 GitHub](https://github.com/mem0ai/mem0)
+- [API Reference](https://docs.mem0.ai/api-reference)
+
+## License
+
+Apache-2.0
diff --git a/mini_agent/skills/mem0/SKILL.md b/mini_agent/skills/mem0/SKILL.md
new file mode 100644
index 00000000..cec1311c
--- /dev/null
+++ b/mini_agent/skills/mem0/SKILL.md
@@ -0,0 +1,156 @@
+---
+name: mem0
+description: >
+ Integrate Mem0 Platform into AI applications for persistent memory, personalization, and semantic search.
+ Use this skill when the user mentions "mem0", "memory layer", "remember user preferences",
+ "persistent context", "personalization", or needs to add long-term memory to chatbots, agents,
+ or AI apps. Covers Python and TypeScript SDKs, framework integrations (LangChain, CrewAI,
+ Vercel AI SDK, OpenAI Agents SDK, Pipecat), and the full Platform API. Use even when the user
+ doesn't explicitly say "mem0" but describes needing conversation memory, user context retention,
+ or knowledge retrieval across sessions.
+license: Apache-2.0
+metadata:
+ author: mem0ai
+ version: "0.1.0"
+ category: ai-memory
+ tags: "memory, personalization, ai, python, typescript, vector-search"
+compatibility: Requires Python 3.10+ or Node.js 18+, pip install mem0ai or npm install mem0ai, MEM0_API_KEY env var, and internet access to api.mem0.ai
+---
+
+# Mem0 Platform Integration
+
+Mem0 is a managed memory layer for AI applications. It stores, retrieves, and manages user memories via API — no infrastructure to deploy.
+
+## Step 1: Install and authenticate
+
+**Python:**
+```bash
+pip install mem0ai
+export MEM0_API_KEY="m0-your-api-key"
+```
+
+**TypeScript/JavaScript:**
+```bash
+npm install mem0ai
+export MEM0_API_KEY="m0-your-api-key"
+```
+
+Get an API key at: https://app.mem0.ai/dashboard/api-keys
+
+## Step 2: Initialize the client
+
+**Python:**
+```python
+from mem0 import MemoryClient
+client = MemoryClient(api_key="m0-xxx")
+```
+
+**TypeScript:**
+```typescript
+import MemoryClient from 'mem0ai';
+const client = new MemoryClient({ apiKey: 'm0-xxx' });
+```
+
+For async Python, use `AsyncMemoryClient`.
+
+## Step 3: Core operations
+
+Every Mem0 integration follows the same pattern: **retrieve → generate → store**.
+
+### Add memories
+```python
+messages = [
+ {"role": "user", "content": "I'm a vegetarian and allergic to nuts."},
+ {"role": "assistant", "content": "Got it! I'll remember that."}
+]
+client.add(messages, user_id="alice")
+```
+
+### Search memories
+```python
+results = client.search("dietary preferences", user_id="alice")
+for mem in results.get("results", []):
+ print(mem["memory"])
+```
+
+### Get all memories
+```python
+all_memories = client.get_all(user_id="alice")
+```
+
+### Update a memory
+```python
+client.update("memory-uuid", text="Updated: vegetarian, nut allergy, prefers organic")
+```
+
+### Delete a memory
+```python
+client.delete("memory-uuid")
+client.delete_all(user_id="alice") # delete all for a user
+```
+
+## Common integration pattern
+
+```python
+from mem0 import MemoryClient
+from openai import OpenAI
+
+mem0 = MemoryClient()
+openai = OpenAI()
+
+def chat(user_input: str, user_id: str) -> str:
+ # 1. Retrieve relevant memories
+ memories = mem0.search(user_input, user_id=user_id)
+ context = "\n".join([m["memory"] for m in memories.get("results", [])])
+
+ # 2. Generate response with memory context
+ response = openai.chat.completions.create(
+ model="gpt-4.1-nano-2025-04-14",
+ messages=[
+ {"role": "system", "content": f"User context:\n{context}"},
+ {"role": "user", "content": user_input},
+ ]
+ )
+ reply = response.choices[0].message.content
+
+ # 3. Store interaction for future context
+ mem0.add(
+ [{"role": "user", "content": user_input}, {"role": "assistant", "content": reply}],
+ user_id=user_id
+ )
+ return reply
+```
+
+## Common edge cases
+
+- **Search returns empty:** Memories process asynchronously. Wait 2-3s after `add()` before searching. Also verify `user_id` matches exactly (case-sensitive).
+- **AND filter with user_id + agent_id returns empty:** Entities are stored separately. Use `OR` instead, or query separately.
+- **Duplicate memories:** Don't mix `infer=True` (default) and `infer=False` for the same data. Stick to one mode.
+- **Wrong import:** Always use `from mem0 import MemoryClient` (or `AsyncMemoryClient` for async). Do not use `from mem0 import Memory`.
+- **Immutable memories:** Cannot be updated or deleted once created. Use `client.history(memory_id)` to track changes over time.
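+
+Because memories are indexed asynchronously, a short retry loop after `add()` avoids the empty-search pitfall above. A minimal sketch (the `search_fn` callable is a stand-in for `client.search`, so the helper itself needs no API key):
+
+```python
+import time
+
+def search_with_retry(search_fn, query, *, attempts=3, delay=1.0, **kwargs):
+    """Retry a memory search until results appear or attempts run out.
+
+    Works around the asynchronous indexing delay after add().
+    """
+    for i in range(attempts):
+        results = search_fn(query, **kwargs).get("results", [])
+        if results:
+            return results
+        if i < attempts - 1:
+            time.sleep(delay)
+    return []
+```
+
+Usage: `search_with_retry(client.search, "dietary preferences", user_id="alice")`.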
+
+## Live documentation search
+
+For the latest docs beyond what's in the references, use the doc search tool:
+
+```bash
+python ${CLAUDE_SKILL_DIR}/scripts/mem0_doc_search.py --query "topic"
+python ${CLAUDE_SKILL_DIR}/scripts/mem0_doc_search.py --page "/platform/features/graph-memory"
+python ${CLAUDE_SKILL_DIR}/scripts/mem0_doc_search.py --index
+```
+
+No API key needed — searches docs.mem0.ai directly.
+
+## References
+
+Load these on demand for deeper detail:
+
+| Topic | File |
+|-------|------|
+| Quickstart (Python, TS, cURL) | [references/quickstart.md](references/quickstart.md) |
+| SDK guide (all methods, both languages) | [references/sdk-guide.md](references/sdk-guide.md) |
+| API reference (endpoints, filters, object schema) | [references/api-reference.md](references/api-reference.md) |
+| Architecture (pipeline, lifecycle, scoping, performance) | [references/architecture.md](references/architecture.md) |
+| Platform features (retrieval, graph, categories, MCP, etc.) | [references/features.md](references/features.md) |
+| Framework integrations (LangChain, CrewAI, Vercel AI, etc.) | [references/integration-patterns.md](references/integration-patterns.md) |
+| Use cases & examples (real-world patterns with code) | [references/use-cases.md](references/use-cases.md) |
diff --git a/mini_agent/skills/mem0/references/api-reference.md b/mini_agent/skills/mem0/references/api-reference.md
new file mode 100644
index 00000000..e9fd20f2
--- /dev/null
+++ b/mini_agent/skills/mem0/references/api-reference.md
@@ -0,0 +1,140 @@
+# Mem0 Platform API Reference
+
+REST API endpoints for the Mem0 Platform. Base URL: `https://api.mem0.ai`
+
+All endpoints require an `Authorization: Token <MEM0_API_KEY>` header.
+
+## Endpoints
+
+| Operation | Method | URL |
+|-----------|--------|-----|
+| Add Memories | `POST` | `/v1/memories/` |
+| Search Memories | `POST` | `/v2/memories/search/` |
+| Get All Memories | `POST` | `/v2/memories/` |
+| Get Single Memory | `GET` | `/v1/memories/{memory_id}/` |
+| Update Memory | `PUT` | `/v1/memories/{memory_id}/` |
+| Delete Memory | `DELETE` | `/v1/memories/{memory_id}/` |
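+
+Assuming the endpoints above, a raw search request can be assembled with `requests`. This sketch only builds the pieces (URL, headers, JSON body) so they can be inspected before sending; the filter shape mirrors the examples later in this file:
+
+```python
+import os
+
+def build_search_request(query, user_id):
+    """Assemble URL, headers, and JSON body for POST /v2/memories/search/."""
+    return {
+        "url": "https://api.mem0.ai/v2/memories/search/",
+        "headers": {"Authorization": f"Token {os.environ.get('MEM0_API_KEY', '')}"},
+        "json": {"query": query, "filters": {"AND": [{"user_id": user_id}]}},
+    }
+
+# Send with: requests.post(**build_search_request("dietary preferences", "alice"))
+```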
+
+## Memory Object Structure
+
+| Field | Type | Description |
+|-------|------|-------------|
+| `id` | string (UUID) | Unique memory identifier |
+| `memory` | string | Text content of the memory |
+| `user_id` | string | Associated user |
+| `agent_id` | string (nullable) | Agent identifier |
+| `app_id` | string (nullable) | Application identifier |
+| `run_id` | string (nullable) | Run/session identifier |
+| `metadata` | object | Custom key-value pairs |
+| `categories` | array of strings | Auto-assigned category tags |
+| `immutable` | boolean | If true, prevents modification |
+| `expiration_date` | datetime (nullable) | Auto-expiry date |
+| `hash` | string | Content hash |
+| `created_at` | datetime | Creation timestamp |
+| `updated_at` | datetime | Last modification timestamp |
+
+Search results additionally include `score` (relevance metric).
+
+## Scoping Identifiers
+
+Memories can be scoped to different levels:
+
+| Scope | Parameter | Use Case |
+|-------|-----------|----------|
+| User | `user_id` | Per-user memory isolation |
+| Agent | `agent_id` | Per-agent memory partitioning |
+| Application | `app_id` | Cross-agent app-level memory |
+| Run/Session | `run_id` | Session-scoped temporary memory |
+
+**Critical:** Combining `user_id` and `agent_id` in a single AND filter yields empty results. Entities are stored separately. Use `OR` logic or separate queries.
+
+## Processing Model
+
+- Memories are processed **asynchronously by default** (`async_mode=true`)
+- Add responses return queued events (`ADD`, `UPDATE`, `DELETE`) for tracking
+- Set `async_mode=false` for synchronous processing when needed
+- Graph metadata is processed asynchronously -- use `get_all()` for complete graph data
+
+## Filter System
+
+Filters use nested JSON with a logical operator at the root:
+
+```json
+{
+ "AND": [
+ {"user_id": "alice"},
+ {"categories": {"contains": "finance"}},
+ {"created_at": {"gte": "2024-01-01"}}
+ ]
+}
+```
+
+Root must be `AND`, `OR`, or `NOT`. Simple shorthand `{"user_id": "alice"}` also works.
+
+### Supported Operators
+
+| Operator | Description |
+|----------|-------------|
+| `eq` | Equal to (default) |
+| `ne` | Not equal to |
+| `in` | Matches any value in array |
+| `gt`, `gte` | Greater than / greater than or equal |
+| `lt`, `lte` | Less than / less than or equal |
+| `contains` | Case-sensitive containment |
+| `icontains` | Case-insensitive containment |
+| `*` | Wildcard -- matches any non-null value |
+
+### Filterable Fields
+
+| Field | Valid Operators |
+|-------|-----------------|
+| `user_id`, `agent_id`, `app_id`, `run_id` | `eq`, `ne`, `in`, `*` |
+| `created_at`, `updated_at`, `timestamp` | `gt`, `gte`, `lt`, `lte`, `eq`, `ne` |
+| `categories` | `eq`, `ne`, `in`, `contains` |
+| `metadata` | `eq`, `ne`, `contains` (top-level keys only) |
+| `keywords` | `contains`, `icontains` |
+| `memory_ids` | `in` |
+
+### Filter Constraints
+
+1. **Entity scope partitioning:** `user_id` AND `agent_id` in one `AND` block yields empty results.
+2. **Metadata limitations:** Only top-level keys. Only `eq`, `contains`, `ne`. No `in` or `gt`.
+3. **Operator syntax:** Use `gte`, `lt`, `ne`. SQL-style (`>=`, `!=`) rejected.
+4. **Entity filter required for get-all:** At least one of `user_id`, `agent_id`, `app_id`, or `run_id`.
+5. **Wildcard excludes null:** `*` matches only non-null values.
+6. **Date format:** ISO 8601 (`YYYY-MM-DDTHH:MM:SSZ`). Timezone-naive defaults to UTC.
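+
+The date-format constraint is easy to satisfy with the standard library. A sketch that builds a "created in the last N days" filter for one user:
+
+```python
+from datetime import datetime, timedelta, timezone
+
+def recent_memories_filter(user_id, days=30):
+    """Build a filter for memories created within the last `days` days.
+
+    Uses a timezone-aware cutoff so the naive-defaults-to-UTC rule never applies.
+    """
+    cutoff = datetime.now(timezone.utc) - timedelta(days=days)
+    return {
+        "AND": [
+            {"user_id": user_id},
+            {"created_at": {"gte": cutoff.strftime("%Y-%m-%dT%H:%M:%SZ")}},
+        ]
+    }
+```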
+
+## Response Formats
+
+### Add Response
+
+```json
+[
+ {
+ "id": "mem_01JF8ZS4Y0R0SPM13R5R6H32CJ",
+ "event": "ADD",
+ "data": { "memory": "The user moved to Austin in 2025." }
+ }
+]
+```
+
+Event types: `ADD`, `UPDATE`, `DELETE`. A single add can trigger multiple events.
+
+### Search Response
+
+```json
+{
+ "results": [
+ {
+ "id": "ea925981-...",
+ "memory": "Is a vegetarian and allergic to nuts.",
+ "user_id": "user123",
+ "categories": ["food", "health"],
+ "score": 0.89,
+ "created_at": "2024-07-26T10:29:36.630547-07:00"
+ }
+ ]
+}
+```
+
+With `enable_graph=true`, includes additional `relations` array with entity relationships.
diff --git a/mini_agent/skills/mem0/references/architecture.md b/mini_agent/skills/mem0/references/architecture.md
new file mode 100644
index 00000000..f0c5c1bc
--- /dev/null
+++ b/mini_agent/skills/mem0/references/architecture.md
@@ -0,0 +1,386 @@
+# Mem0 Platform Architecture
+
+How Mem0 processes, stores, and retrieves memories under the hood.
+
+## Table of Contents
+
+- [Core Concept](#core-concept)
+- [Memory Processing Pipeline](#memory-processing-pipeline)
+- [Retrieval Pipeline](#retrieval-pipeline)
+- [Memory Lifecycle](#memory-lifecycle)
+- [Memory Object Structure](#memory-object-structure)
+- [Scoping & Multi-Tenancy](#scoping--multi-tenancy)
+- [Memory Layers](#memory-layers)
+- [Performance Characteristics](#performance-characteristics)
+
+---
+
+## Core Concept
+
+Mem0 is a managed memory layer that sits between your AI application and users. Every integration follows the same 3-step loop:
+
+```
+User Input → Retrieve relevant memories → Enrich LLM prompt → Generate response → Store new memories
+```
+
+Mem0 handles the complexity of extraction, deduplication, conflict resolution, and semantic retrieval so your application only needs to call `search()` and `add()`.
+
+**Dual storage architecture:**
+- **Vector store**: Embeddings for semantic similarity search
+- **Graph store** (optional): Entity nodes and relationship edges for structured knowledge
+
+---
+
+## Memory Processing Pipeline
+
+### What happens when you call `client.add()`
+
+```
+Messages In
+ │
+ ▼
+┌─────────────────────┐
+│ 1. EXTRACTION │ LLM analyzes messages, extracts key facts
+│ (infer=True) │ If infer=False, stores raw text as-is
+└─────────┬───────────┘
+ │
+ ▼
+┌─────────────────────┐
+│ 2. CONFLICT │ Checks existing memories for duplicates
+│ RESOLUTION │ Latest truth wins (newer overrides older)
+│ │ Only runs when infer=True
+└─────────┬───────────┘
+ │
+ ▼
+┌─────────────────────┐
+│ 3. STORAGE │ Generates embeddings → vector store
+│ │ Optional: entity extraction → graph store
+│ │ Indexes metadata, categories, timestamps
+└─────────┬───────────┘
+ │
+ ▼
+ Memory Object
+ (id, memory, categories, structured_attributes)
+```
+
+### Processing modes
+
+**Async (default, `async_mode=True`):**
+- API returns immediately: `{"status": "PENDING", "event_id": "..."}`
+- Processing happens in background
+- Use webhooks for completion notifications
+- Best for: high-throughput, non-blocking workflows
+
+**Sync (`async_mode=False`):**
+- API waits for full processing
+- Returns complete memory object with `id`, `event`, `memory`
+- Best for: real-time access immediately after add
+
+### Extraction modes
+
+**Inferred (`infer=True`, default):**
+- LLM extracts structured facts from conversation
+- Conflict resolution deduplicates and resolves contradictions
+- Best for: natural conversation → memory
+
+**Raw (`infer=False`):**
+- Stores text exactly as provided, no LLM processing
+- Skips conflict resolution — same fact can be stored twice
+- Only `user` role messages are stored; `assistant` messages ignored
+- Best for: bulk imports, pre-structured data, migrations
+
+**Warning:** Don't mix `infer=True` and `infer=False` for the same data — the same fact will be stored twice.
+
+---
+
+## Retrieval Pipeline
+
+### What happens when you call `client.search()`
+
+```
+Query In
+ │
+ ▼
+┌─────────────────────┐
+│ 1. QUERY EMBEDDING │ Convert query to vector representation
+└─────────┬───────────┘
+ │
+ ▼
+┌─────────────────────┐
+│ 2. VECTOR SEARCH │ Cosine similarity across stored embeddings
+│ │ Scoped by filters (user_id, agent_id, etc.)
+└─────────┬───────────┘
+ │
+ ▼ (optional enhancements)
+┌─────────────────────┐
+│ 3a. KEYWORD SEARCH │ Expands results with specific terms (+10ms)
+│ 3b. RERANKING │ Deep semantic reordering (+150-200ms)
+│ 3c. FILTER MEMORIES │ Precision filtering, removes low-relevance (+200-300ms)
+└─────────┬───────────┘
+ │
+ ▼ (if enable_graph=True)
+┌─────────────────────┐
+│ 4. GRAPH LOOKUP │ Finds entity relationships
+│ │ Appends relations WITHOUT reranking vector results
+└─────────┬───────────┘
+ │
+ ▼
+ Results + Relations
+```
+
+### Retrieval enhancement combinations
+
+| Configuration | Latency | Best for |
+|--------------|---------|----------|
+| Base search only | ~100ms | Simple lookups |
+| `keyword_search=True` | ~110ms | Entity-heavy queries, broad coverage |
+| `rerank=True` | ~250-300ms | User-facing results, top-N precision |
+| `keyword_search=True` + `rerank=True` | ~310ms | Balanced (recommended for most apps) |
+| `rerank=True` + `filter_memories=True` | ~400-500ms | Safety-critical, production systems |
+
+### Implicit null scoping
+
+When you search with `user_id="alice"` only, Mem0 returns memories where `agent_id`, `app_id`, and `run_id` are all null. This prevents cross-scope leakage by default.
+
+To include memories with non-null fields, use explicit filters:
+```python
+# Gets memories for alice regardless of agent/app/run
+filters={"OR": [{"user_id": "alice"}]}
+```
+
+---
+
+## Memory Lifecycle
+
+```
+CREATE ──→ ACTIVE ──→ UPDATE ──→ ACTIVE
+ │ │ │
+ │ ▼ ▼
+ │ EXPIRED EXPIRED
+ │ (still stored, (still stored,
+ │ not retrieved) not retrieved)
+ │ │ │
+ ▼ ▼ ▼
+DELETE DELETE DELETE
+(permanent)
+```
+
+### Creation
+- Triggered by `client.add(messages, user_id="...")`
+- Messages processed through extraction → conflict resolution → storage
+- Gets unique UUID, `created_at` timestamp
+- Optional: custom `timestamp`, `expiration_date`, `metadata`, `immutable`
+
+### Updates
+- `client.update(memory_id, text="...")` replaces text and reindexes
+- `client.batch_update([...])` for up to 1000 memories at once
+- Immutable memories (`immutable=True`) cannot be updated — must delete and re-add
+
+### Deduplication
+- Automatic during `add()` with `infer=True`
+- Conflict resolution merges duplicate facts
+- Latest truth wins when contradictions detected
+- Prevents memory bloat from repeated information
+
+### Expiration
+- Optional `expiration_date` parameter (ISO 8601 or `YYYY-MM-DD`)
+- After expiration: memory NOT returned in searches but remains in storage
+- Useful for time-sensitive info (events, temporary preferences, session state)
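+
+A hypothetical helper for time-sensitive memories, computing an `expiration_date` in the accepted `YYYY-MM-DD` form (the helper name is illustrative, not part of the SDK):
+
+```python
+from datetime import date, timedelta
+
+def expiring_add_kwargs(days_valid):
+    """Keyword arguments for client.add() that expire the memory after `days_valid` days."""
+    return {"expiration_date": (date.today() + timedelta(days=days_valid)).isoformat()}
+
+# e.g. client.add(messages, user_id="alice", **expiring_add_kwargs(14))
+```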
+
+### Deletion
+- Single: `client.delete(memory_id)` — permanent, no recovery
+- Batch: `client.batch_delete([memory_ids])` — up to 1000
+- Bulk: `client.delete_all(user_id="alice")` — all memories for entity
+- `delete_all()` without filters raises error to prevent accidental data loss
+
+### History tracking
+- `client.history(memory_id)` returns version timeline
+- Shows all changes: `{previous_value, new_value, action, timestamps}`
+- Useful for audit trails and debugging
+
+---
+
+## Memory Object Structure
+
+```json
+{
+ "id": "uuid-string",
+ "memory": "Extracted memory text",
+ "user_id": "user-identifier",
+ "agent_id": null,
+ "app_id": null,
+ "run_id": null,
+ "metadata": { "source": "chat", "priority": "high" },
+ "categories": ["health", "preferences"],
+ "created_at": "2025-03-12T12:34:56Z",
+ "updated_at": "2025-03-12T12:34:56Z",
+ "expiration_date": null,
+ "immutable": false,
+ "structured_attributes": {
+ "day": 12, "month": 3, "year": 2025,
+ "hour": 12, "minute": 34,
+ "day_of_week": "wednesday",
+ "is_weekend": false,
+ "quarter": 1, "week_of_year": 11
+ },
+ "score": 0.85
+}
+```
+
+| Field | Type | Description |
+|-------|------|-------------|
+| `id` | UUID | Unique identifier, used for update/delete |
+| `memory` | string | Extracted or stored text content |
+| `user_id` | string | Primary entity scope |
+| `agent_id` | string | Agent scope |
+| `app_id` | string | Application scope |
+| `run_id` | string | Session/run scope |
+| `metadata` | object | Custom key-value pairs for filtering |
+| `categories` | array | Auto-assigned or custom category tags |
+| `created_at` | datetime | Creation timestamp |
+| `updated_at` | datetime | Last modification timestamp |
+| `expiration_date` | datetime | Auto-expiry date (stops retrieval, data persists) |
+| `immutable` | boolean | If true, prevents modification |
+| `structured_attributes` | object | Temporal breakdown for time-based queries |
+| `score` | float | Semantic similarity (search results only, 0-1) |
+
+---
+
+## Scoping & Multi-Tenancy
+
+Mem0 separates memories across four dimensions to prevent data mixing:
+
+| Dimension | Field | Purpose | Example |
+|-----------|-------|---------|---------|
+| User | `user_id` | Persistent persona or account | `"customer_6412"` |
+| Agent | `agent_id` | Distinct agent or tool | `"meal_planner"` |
+| App | `app_id` | Product surface or deployment | `"ios_retail_app"` |
+| Session | `run_id` | Short-lived flow or thread | `"ticket-9241"` |
+
+### Storage model
+
+Each entity combination creates separate records. A memory with `user_id="alice"` is stored separately from one with `user_id="alice"` + `agent_id="bot"`.
+
+### Critical: cross-entity queries
+
+```python
+# This returns NOTHING — user and agent memories are stored separately
+filters={"AND": [{"user_id": "alice"}, {"agent_id": "bot"}]}
+
+# Use OR to query multiple scopes
+filters={"OR": [{"user_id": "alice"}, {"agent_id": "bot"}]}
+
+# Use wildcard to include any non-null value
+filters={"AND": [{"user_id": "*"}]} # All users (excludes null)
+```
+
+### Recommended scoping patterns
+
+```python
+# User-level: persistent preferences
+client.add(messages, user_id="alice")
+
+# Session-level: temporary context
+client.add(messages, user_id="alice", run_id="session_123")
+# Clean up when done: client.delete_all(run_id="session_123")
+
+# Agent-level: agent-specific knowledge
+client.add(messages, agent_id="support_bot", app_id="helpdesk")
+
+# Multi-tenant: full isolation
+client.add(messages, user_id="alice", agent_id="bot", app_id="acme_corp", run_id="ticket_42")
+```
+
+---
+
+## Memory Layers
+
+Mem0 supports three layers of memory, from shortest to longest lived:
+
+### Conversation memory
+- In-flight messages within a single turn
+- Tool calls, chain-of-thought reasoning
+- **Lifetime:** Single response — lost after turn finishes
+- **Managed by:** Your application, not Mem0
+
+### Session memory
+- Short-lived facts for current task or channel
+- Multi-step flows (onboarding, debugging, support tickets)
+- **Lifetime:** Minutes to hours
+- **Managed by:** Mem0 via `run_id` parameter
+- Clean up with `client.delete_all(run_id="session_id")`
+
+### User memory
+- Long-lived knowledge tied to a person or account
+- Personal preferences, account state, compliance details
+- **Lifetime:** Weeks to forever
+- **Managed by:** Mem0 via `user_id` parameter
+- Persists across all sessions and interactions
+
+### How layering works in practice
+
+```python
+def chat(user_input: str, user_id: str, session_id: str) -> str:
+ # 1. Retrieve user memories (long-term preferences)
+ user_mems = mem0.search(user_input, user_id=user_id)
+
+ # 2. Retrieve session memories (current task context)
+ session_mems = mem0.search(user_input, filters={
+ "AND": [{"user_id": user_id}, {"run_id": session_id}]
+ })
+
+ # 3. Combine both layers for LLM context
+ context = format_memories(user_mems) + format_memories(session_mems)
+
+ # 4. Generate response
+ response = llm.generate(context=context, input=user_input)
+
+    # 5. Store the turn under this user's session scope (user_id + run_id)
+ messages = [{"role": "user", "content": user_input}, {"role": "assistant", "content": response}]
+ mem0.add(messages, user_id=user_id, run_id=session_id)
+
+ return response
+```
+
+---
+
+## Performance Characteristics
+
+### Latency
+
+| Operation | Typical Latency |
+|-----------|----------------|
+| Base vector search | ~100ms |
+| + keyword_search | +10ms |
+| + reranking | +150-200ms |
+| + filter_memories | +200-300ms |
+| Add (async, default) | < 50ms response, background processing |
+| Add (sync) | 500ms-2s depending on extraction complexity |
+| Graph operations | Slight overhead for large stores |
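+
+As a rough worked example, the figures above can be combined into a latency budget (midpoint values from the table; these are typical figures, not guarantees):
+
+```python
+BASE_SEARCH_MS = 100
+OPTION_COST_MS = {"keyword_search": 10, "rerank": 175, "filter_memories": 250}
+
+def estimated_search_ms(*options: str) -> int:
+    """Base vector search plus the midpoint cost of each enabled option."""
+    return BASE_SEARCH_MS + sum(OPTION_COST_MS[o] for o in options)
+
+# e.g. keyword_search + rerank: 100 + 10 + 175 = 285 ms
+```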
+
+### Processing
+
+- **Async mode (default):** Returns immediately, processes in background
+- **Sync mode:** Waits for full extraction + storage pipeline
+- **Batch operations:** Up to 1000 memories per batch_update/batch_delete
+- **Webhooks:** Real-time notifications when async processing completes
+
+### Scoping strategy for performance
+
+- Use `user_id` for all user-facing queries (most common, fastest)
+- Add `run_id` for session isolation (narrows search space)
+- Avoid wildcard `"*"` filters on large datasets (scans all non-null records)
+- Use `top_k` to limit result count when you only need a few memories
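+
+The v2 `filters` shapes used in these patterns can be generated with a small helper (the helper is ours, not part of the SDK):
+
+```python
+def scope_filter(**ids: str) -> dict:
+    """Build a v2 filters clause from entity IDs, e.g. user_id + run_id."""
+    return {"AND": [{key: value} for key, value in ids.items()]}
+
+# client.search(query, filters=scope_filter(user_id="alice", run_id="session_123"))
+```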
+
+---
+
+## Comparison with Alternatives
+
+| Approach | Pros | Cons |
+|----------|------|------|
+| **Raw vector DB** | Fast, full control | No extraction, no dedup, no conflict resolution |
+| **In-memory chat history** | Zero latency | Lost on restart, no cross-session, grows unbounded |
+| **RAG over documents** | Good for static knowledge | No personalization, no memory updates |
+| **Mem0 Platform** | Managed extraction + dedup + graph + scoping | External dependency, async processing delay |
+
+Mem0 combines the best of vector search (semantic retrieval) with automatic extraction (LLM-powered), conflict resolution (deduplication), and structured scoping (multi-tenancy) — in a single managed API.
diff --git a/mini_agent/skills/mem0/references/features.md b/mini_agent/skills/mem0/references/features.md
new file mode 100644
index 00000000..fa2130f4
--- /dev/null
+++ b/mini_agent/skills/mem0/references/features.md
@@ -0,0 +1,496 @@
+# Platform Features -- Mem0 Platform
+
+Additional platform capabilities beyond core CRUD operations.
+
+## Table of Contents
+
+- [Advanced Retrieval](#advanced-retrieval)
+- [Graph Memory](#graph-memory)
+- [Custom Categories](#custom-categories)
+- [Custom Instructions](#custom-instructions)
+- [Criteria Retrieval](#criteria-retrieval)
+- [Feedback Mechanism](#feedback-mechanism)
+- [Memory Export](#memory-export)
+- [Group Chat](#group-chat)
+- [MCP Integration](#mcp-integration)
+- [Webhooks](#webhooks)
+- [Multimodal Support](#multimodal-support)
+
+## Advanced Retrieval
+
+Three enhancement options for tuning search precision, recall, and latency.
+
+### Keyword Search (`keyword_search=True`)
+
+Expands results to include memories with specific terms, names, and technical keywords.
+
+- Latency: +10ms
+- Recall: Significantly increased
+- Best for: entity-heavy queries, comprehensive coverage
+
+### Reranking (`rerank=True`)
+
+Deep semantic reordering of results — most relevant first.
+
+- Latency: +150-200ms
+- Accuracy: Significantly improved
+- Best for: user-facing results, top-N precision
+
+### Filter Memories (`filter_memories=True`)
+
+Precision filtering — removes low-relevance results entirely.
+
+- Latency: +200-300ms
+- Precision: Maximized
+- Best for: safety-critical applications, production systems
+
+### Recommended Combinations
+
+**Python:**
+```python
+# Fast & broad
+results = client.search(query, keyword_search=True, user_id="user123")
+
+# Balanced (recommended for most apps)
+results = client.search(query, keyword_search=True, rerank=True, user_id="user123")
+
+# High precision (critical apps)
+results = client.search(query, rerank=True, filter_memories=True, user_id="user123")
+```
+
+**TypeScript:**
+```typescript
+const results = await client.search(query, {
+ user_id: 'user123',
+ keyword_search: true,
+ rerank: true,
+});
+```
+
+---
+
+## Graph Memory
+
+Entity-level knowledge graph that creates relationships between memories.
+
+### How It Works
+
+1. **Extraction**: LLM analyzes conversation and identifies entities and relationships
+2. **Storage**: Embeddings go to vector store; entity nodes and edges go to graph store
+3. **Retrieval**: Vector search returns semantic matches; graph relations are appended to results
+
+Graph relations **augment** vector results without reordering them. Vector similarity always determines hit sequence.
+
+### Enabling Graph Memory
+
+**Per request:**
+```python
+client.add(messages, user_id="alice", enable_graph=True)
+client.search("query", user_id="alice", enable_graph=True)
+client.get_all(filters={"AND": [{"user_id": "alice"}]}, enable_graph=True)
+```
+
+**Project-level (default for all operations):**
+```python
+client.project.update(enable_graph=True)
+```
+
+```javascript
+await client.updateProject({ enable_graph: true });
+```
+
+### Relation Structure
+
+Each relation in the response contains:
+
+| Field | Type | Description |
+|-------|------|-------------|
+| `source` | string | Source entity name |
+| `source_type` | string | Source entity type (e.g., "Person") |
+| `relationship` | string | Relationship label (e.g., "lives_in") |
+| `target` | string | Target entity name |
+| `target_type` | string | Target entity type (e.g., "City") |
+| `score` | number | Confidence score |
+
+**Example:**
+```json
+{
+ "relations": [
+ {
+ "source": "Joseph",
+ "source_type": "Person",
+ "relationship": "lives_in",
+ "target": "Seattle",
+ "target_type": "City",
+ "score": 0.92
+ }
+ ]
+}
+```
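+
+Since relations augment rather than replace vector hits, a common pattern is to flatten them into prompt context; a sketch using the structure above (the helper itself is illustrative):
+
+```python
+def relations_to_context(relations: list[dict]) -> str:
+    """Render graph relations as plain-text facts for an LLM prompt."""
+    return "\n".join(
+        f"{r['source']} {r['relationship']} {r['target']} (confidence {r['score']:.2f})"
+        for r in relations
+    )
+
+relations = [{
+    "source": "Joseph", "source_type": "Person",
+    "relationship": "lives_in",
+    "target": "Seattle", "target_type": "City",
+    "score": 0.92,
+}]
+```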
+
+### Technical Notes
+
+- Graph Memory adds processing time; see docs for current plan availability
+- Works optimally with rich conversation histories containing entity relationships
+- Best suited for long-running assistants tracking evolving information
+- Graph writes and reads toggle independently per request
+- Multi-agent context supported via `user_id`, `agent_id`, `run_id` scoping
+- Add operations are asynchronous; graph metadata may not be immediately available
+
+---
+
+## Custom Categories
+
+Replace Mem0's default 15 labels with domain-specific categories. The system automatically tags memories to the closest matching category.
+
+### Default Categories (15)
+
+`personal_details`, `family`, `professional_details`, `sports`, `travel`, `food`, `music`, `health`, `technology`, `hobbies`, `fashion`, `entertainment`, `milestones`, `user_preferences`, `misc`
+
+### Configuration
+
+**Set project-level categories:**
+```python
+new_categories = [
+ {"lifestyle_management": "Tracks daily routines, habits, wellness activities"},
+ {"seeking_structure": "Documents goals around creating routines and systems"},
+ {"personal_information": "Basic information about the user"}
+]
+client.project.update(custom_categories=new_categories)
+```
+
+```javascript
+await client.updateProject({ custom_categories: new_categories });
+```
+
+**Retrieve active categories:**
+```python
+categories = client.project.get(fields=["custom_categories"])
+```
+
+### Key Constraint
+
+Per-request overrides (`custom_categories=...` on `client.add`) are **not supported** on the managed API. Only project-level configuration works. Workaround: store ad-hoc labels in `metadata` field.
+
+---
+
+## Custom Instructions
+
+Natural language filters that control what information Mem0 extracts when creating memories.
+
+### Set Instructions
+
+```python
+client.project.update(custom_instructions="Your guidelines here...")
+```
+
+```javascript
+await client.updateProject({ custom_instructions: "Your guidelines here..." });
+```
+
+### Template Structure
+
+1. **Task Description** -- brief extraction overview
+2. **Information Categories** -- numbered sections with specific details to capture
+3. **Processing Guidelines** -- quality and handling rules
+4. **Exclusion List** -- sensitive/irrelevant data to filter out
+
+### Domain Examples
+
+**E-commerce:** Capture product issues, preferences, service experience; exclude payment data.
+
+**Education:** Extract learning progress, student preferences, performance patterns; exclude specific grades.
+
+**Finance:** Track financial goals, life events, investment interests; exclude account numbers and SSNs.
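+
+A sketch of an e-commerce instruction string following the four-part template (the wording is illustrative, not an official template):
+
+```python
+ECOMMERCE_INSTRUCTIONS = """\
+Task: Extract customer-support facts useful in future conversations.
+
+Capture:
+1. Product issues and defect reports
+2. Brand, size, and style preferences
+3. Service experience and satisfaction signals
+
+Guidelines:
+- Prefer durable facts over one-off remarks
+- Merge repeated mentions of the same preference
+
+Exclude:
+- Payment card numbers and billing credentials
+"""
+
+# client.project.update(custom_instructions=ECOMMERCE_INSTRUCTIONS)
+```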
+
+### Best Practices
+
+- Start simply, test with sample messages, iterate based on results
+- Avoid overly lengthy instructions
+- Be specific about what to include AND exclude
+
+---
+
+## Criteria Retrieval
+
+Custom attribute-based memory ranking using LLM-evaluated criteria with weights. Goes beyond semantic similarity to prioritize memories based on domain-specific signals.
+
+### Configuration
+
+```python
+# Define criteria at project level
+retrieval_criteria = [
+ {"name": "joy", "description": "Positive emotions like happiness and excitement", "weight": 3},
+ {"name": "curiosity", "description": "Inquisitiveness and desire to learn", "weight": 2},
+ {"name": "urgency", "description": "Time-sensitive or high-priority items", "weight": 4},
+]
+client.project.update(retrieval_criteria=retrieval_criteria)
+```
+
+```typescript
+await client.updateProject({
+ retrieval_criteria: [
+ { name: 'joy', description: 'Positive emotions', weight: 3 },
+ { name: 'urgency', description: 'Time-sensitive items', weight: 4 },
+ ],
+});
+```
+
+### Usage
+
+Once configured, `client.search()` automatically applies criteria ranking:
+
+```python
+# Criteria-weighted results returned automatically
+results = client.search("Why am I feeling happy?", filters={"AND": [{"user_id": "alice"}]})
+```
+
+**Best for:** Wellness assistants, tutoring platforms, productivity tools — any app needing intent-aware retrieval.
+
+---
+
+## Feedback Mechanism
+
+Provide feedback on extracted memories to improve system quality over time.
+
+### Feedback Types
+
+| Type | Meaning |
+|------|---------|
+| `POSITIVE` | Memory is useful and accurate |
+| `NEGATIVE` | Memory is not useful |
+| `VERY_NEGATIVE` | Memory is harmful or completely wrong |
+| `None` | Clear existing feedback |
+
+### Usage
+
+**Python:**
+```python
+client.feedback(
+ memory_id="mem-123",
+ feedback="POSITIVE",
+ feedback_reason="Accurately captured dietary preference"
+)
+
+# Bulk feedback
+for item in feedback_data:
+ client.feedback(**item)
+```
+
+**TypeScript:**
+```typescript
+await client.feedback('mem-123', {
+ feedback: 'POSITIVE',
+ feedback_reason: 'Accurately captured dietary preference',
+});
+```
+
+---
+
+## Memory Export
+
+Create structured exports of memories using customizable schemas with filters.
+
+### Usage
+
+```python
+import json
+
+# Define export schema
+schema = {
+ "type": "object",
+ "properties": {
+ "name": {"type": "string"},
+ "preferences": {"type": "array", "items": {"type": "string"}},
+ "health_info": {"type": "string"},
+ }
+}
+
+# Create export
+response = client.create_memory_export(
+ schema=json.dumps(schema),
+ filters={"user_id": "alice"},
+ export_instructions="Create comprehensive profile based on all memories"
+)
+
+# Retrieve export (may take a moment to process)
+result = client.get_memory_export(memory_export_id=response["id"])
+```
+
+**Best for:** Data analytics, user profile generation, compliance audits, CRM sync.
+
+---
+
+## Group Chat
+
+Process multi-participant conversations and automatically attribute memories to individual speakers.
+
+### Usage
+
+```python
+messages = [
+ {"role": "user", "name": "Alice", "content": "I think we should use React for the frontend"},
+ {"role": "user", "name": "Bob", "content": "I prefer Vue.js, it's simpler for our use case"},
+ {"role": "assistant", "content": "Both are great choices. Let me note your preferences."},
+]
+
+# Mem0 automatically attributes memories to each speaker
+response = client.add(messages, run_id="team_meeting_1")
+
+# Retrieve Alice's memories from that session
+alice_mems = client.get_all(
+ filters={"AND": [{"user_id": "alice"}, {"run_id": "team_meeting_1"}]}
+)
+```
+
+Use the `name` field in messages to identify speakers. Mem0 maps names to entity scopes automatically.
+
+---
+
+## MCP Integration
+
+Model Context Protocol integration enables AI clients (Claude Desktop, Cursor, custom agents) to manage Mem0 memory autonomously.
+
+### Configuration
+
+```json
+{
+ "mcpServers": {
+ "mem0": {
+ "command": "uvx",
+ "args": ["mem0-mcp-server"],
+ "env": {
+ "MEM0_API_KEY": "m0-your-api-key",
+ "MEM0_DEFAULT_USER_ID": "your-user-id"
+ }
+ }
+ }
+}
+```
+
+### Available MCP Tools
+
+The MCP server exposes 9 memory tools that AI agents can use autonomously:
+- Add, search, get, update, delete memories
+- Get history, list users, delete users
+- Search Mem0 documentation
+
+### How It Works
+
+1. Configure the MCP server in your AI client
+2. The agent autonomously decides when to store/retrieve memories
+3. No manual API calls needed — the agent manages memory as part of its reasoning
+
+**Best for:** Universal AI client integration — one protocol works everywhere.
+
+---
+
+## Webhooks
+
+Real-time event notifications for memory operations.
+
+### Supported Events
+
+| Event | Trigger |
+|-------|---------|
+| `memory_add` | Memory created |
+| `memory_update` | Memory modified |
+| `memory_delete` | Memory removed |
+| `memory_categorize` | Memory tagged |
+
+### Create Webhook
+
+Note: `project_id` here refers to the Mem0 dashboard project scope for webhooks — not the deprecated client init parameter.
+
+```python
+webhook = client.create_webhook(
+ url="https://your-app.com/webhook",
+ name="Memory Logger",
+ project_id="proj_123",
+ event_types=["memory_add", "memory_categorize"]
+)
+```
+
+### Manage Webhooks
+
+```python
+# Retrieve
+webhooks = client.get_webhooks(project_id="proj_123")
+
+# Update
+client.update_webhook(
+ name="Updated Logger",
+ url="https://your-app.com/new-webhook",
+ event_types=["memory_update", "memory_add"],
+ webhook_id="wh_123"
+)
+
+# Delete
+client.delete_webhook(webhook_id="wh_123")
+```
+
+### Payload Structure
+
+Memory events contain: ID, data object with memory content, event type (`ADD`/`UPDATE`/`DELETE`).
+Categorization events contain: memory ID, event type (`CATEGORIZE`), assigned category labels.
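+
+A minimal handler sketch for these payloads; the key names (`event_type`, `data`, `memory`) are assumptions here -- verify against a live payload before relying on them:
+
+```python
+def handle_webhook(payload: dict) -> str:
+    """Route a webhook payload by event type (key names are assumed)."""
+    event = payload.get("event_type", "")
+    memory_text = payload.get("data", {}).get("memory", "")
+    if event in ("ADD", "UPDATE", "DELETE"):
+        return f"{event.lower()}: {memory_text}"
+    return f"unhandled: {event or 'unknown'}"
+```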
+
+---
+
+## Multimodal Support
+
+Mem0 can process images and documents alongside text.
+
+### Supported Media Types
+
+- Images: JPG, PNG
+- Documents: MDX, TXT, PDF
+
+### Image via URL
+
+```python
+image_message = {
+ "role": "user",
+ "content": {
+ "type": "image_url",
+ "image_url": {"url": "https://example.com/image.jpg"}
+ }
+}
+client.add([image_message], user_id="alice")
+```
+
+### Image via Base64
+
+```python
+import base64
+with open("photo.jpg", "rb") as f:
+ base64_image = base64.b64encode(f.read()).decode("utf-8")
+
+image_message = {
+ "role": "user",
+ "content": {
+ "type": "image_url",
+ "image_url": {"url": f"data:image/jpeg;base64,{base64_image}"}
+ }
+}
+client.add([image_message], user_id="alice")
+```
+
+### Document (MDX/TXT)
+
+```python
+doc_message = {
+ "role": "user",
+ "content": {"type": "mdx_url", "mdx_url": {"url": document_url}}
+}
+client.add([doc_message], user_id="alice")
+```
+
+### PDF Document
+
+```python
+pdf_message = {
+ "role": "user",
+ "content": {"type": "pdf_url", "pdf_url": {"url": pdf_url}}
+}
+client.add([pdf_message], user_id="alice")
+```
diff --git a/mini_agent/skills/mem0/references/integration-patterns.md b/mini_agent/skills/mem0/references/integration-patterns.md
new file mode 100644
index 00000000..e00d07ba
--- /dev/null
+++ b/mini_agent/skills/mem0/references/integration-patterns.md
@@ -0,0 +1,444 @@
+# Mem0 Integration Patterns
+
+Working code examples for integrating Mem0 Platform with popular AI frameworks.
+All examples use `MemoryClient` (Platform API key).
+
+Code examples are sourced from official Mem0 integration docs at docs.mem0.ai, simplified for quick reference.
+
+---
+
+## Common Pattern
+
+Every integration follows the same 3-step loop:
+
+1. **Retrieve** -- search relevant memories before generating a response
+2. **Generate** -- include memories as context in the LLM prompt
+3. **Store** -- save the interaction back to Mem0 for future use
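+
+The loop can be sketched framework-free with stubs standing in for the SDK and the model (both stubs are ours; real code would use `MemoryClient` and an actual LLM):
+
+```python
+class StubMemory:
+    """In-memory stand-in for MemoryClient, for illustration only."""
+    def __init__(self):
+        self.store: list[str] = []
+    def search(self, query: str, user_id: str) -> dict:
+        return {"results": [{"memory": m} for m in self.store]}  # ignores query/user_id
+    def add(self, messages: list[dict], user_id: str) -> None:
+        self.store.extend(m["content"] for m in messages if m["role"] == "user")
+
+def chat_turn(mem, llm, user_input: str, user_id: str) -> str:
+    hits = mem.search(user_input, user_id=user_id)["results"]            # 1. Retrieve
+    context = " ".join(m["memory"] for m in hits)
+    reply = llm(f"Context: {context}\nUser: {user_input}")               # 2. Generate
+    mem.add([{"role": "user", "content": user_input},
+             {"role": "assistant", "content": reply}], user_id=user_id)  # 3. Store
+    return reply
+```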
+
+---
+
+## LangChain
+
+Source: [docs.mem0.ai/integrations/langchain](https://docs.mem0.ai/integrations/langchain)
+
+```python
+from langchain_openai import ChatOpenAI
+from langchain_core.messages import SystemMessage, HumanMessage
+from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
+from mem0 import MemoryClient
+
+llm = ChatOpenAI(model="gpt-4.1-nano-2025-04-14")
+mem0 = MemoryClient()
+
+prompt = ChatPromptTemplate.from_messages([
+    ("system", "You are a helpful travel agent AI. Use the provided context to personalize your responses."),
+    MessagesPlaceholder(variable_name="context"),
+    ("human", "{input}")
+])
+
+def retrieve_context(query: str, user_id: str):
+ """Retrieve relevant memories from Mem0"""
+ memories = mem0.search(query, user_id=user_id)
+ memory_list = memories['results']
+ serialized = ' '.join([m["memory"] for m in memory_list])
+ return [
+ {"role": "system", "content": f"Relevant information: {serialized}"},
+ {"role": "user", "content": query}
+ ]
+
+def chat_turn(user_input: str, user_id: str) -> str:
+ # 1. Retrieve
+ context = retrieve_context(user_input, user_id)
+ # 2. Generate
+ chain = prompt | llm
+ response = chain.invoke({"context": context, "input": user_input})
+ # 3. Store
+ mem0.add(
+ [{"role": "user", "content": user_input}, {"role": "assistant", "content": response.content}],
+ user_id=user_id
+ )
+ return response.content
+```
+
+---
+
+## CrewAI
+
+Source: [docs.mem0.ai/integrations/crewai](https://docs.mem0.ai/integrations/crewai)
+
+CrewAI has native Mem0 integration via `memory_config`:
+
+```python
+from crewai import Agent, Task, Crew, Process
+from mem0 import MemoryClient
+
+client = MemoryClient()
+
+# Store user preferences first
+messages = [
+ {"role": "user", "content": "I am more of a beach person than a mountain person."},
+ {"role": "assistant", "content": "Noted! I'll recommend beach destinations."},
+ {"role": "user", "content": "I like Airbnb more than hotels."},
+]
+client.add(messages, user_id="crew_user_1")
+
+# Create agent
+travel_agent = Agent(
+ role="Personalized Travel Planner",
+ goal="Plan personalized travel itineraries",
+ backstory="You are a seasoned travel planner.",
+ memory=True,
+)
+
+# Create task
+task = Task(
+ description="Find places to live, eat, and visit in San Francisco.",
+ expected_output="A detailed list of places to live, eat, and visit.",
+ agent=travel_agent,
+)
+
+# Setup crew with Mem0 memory
+crew = Crew(
+ agents=[travel_agent],
+ tasks=[task],
+ process=Process.sequential,
+ memory=True,
+ memory_config={
+ "provider": "mem0",
+ "config": {"user_id": "crew_user_1"},
+ }
+)
+
+result = crew.kickoff()
+```
+
+---
+
+## Vercel AI SDK
+
+Source: [docs.mem0.ai/integrations/vercel-ai-sdk](https://docs.mem0.ai/integrations/vercel-ai-sdk)
+
+Install: `npm install @mem0/vercel-ai-provider`
+
+### Basic Text Generation with Memory
+
+```typescript
+import { generateText } from "ai";
+import { createMem0 } from "@mem0/vercel-ai-provider";
+
+const mem0 = createMem0({
+ provider: "openai",
+ mem0ApiKey: "m0-xxx",
+ apiKey: "openai-api-key",
+});
+
+const { text } = await generateText({
+ model: mem0("gpt-4-turbo", { user_id: "borat" }),
+ prompt: "Suggest me a good car to buy!",
+});
+```
+
+### Streaming with Memory
+
+```typescript
+import { streamText } from "ai";
+import { createMem0 } from "@mem0/vercel-ai-provider";
+
+const mem0 = createMem0();
+
+const { textStream } = streamText({
+ model: mem0("gpt-4-turbo", { user_id: "borat" }),
+ prompt: "Suggest me a good car to buy!",
+});
+
+for await (const textPart of textStream) {
+ process.stdout.write(textPart);
+}
+```
+
+### Using Memory Utilities Standalone
+
+```typescript
+import { openai } from "@ai-sdk/openai";
+import { generateText } from "ai";
+import { retrieveMemories, addMemories } from "@mem0/vercel-ai-provider";
+
+// Retrieve memories and inject into any provider
+const prompt = "Suggest me a good car to buy.";
+const memories = await retrieveMemories(prompt, { user_id: "borat", mem0ApiKey: "m0-xxx" });
+
+const { text } = await generateText({
+ model: openai("gpt-4-turbo"),
+ prompt: prompt,
+ system: memories,
+});
+
+// Store new memories
+await addMemories(
+ [{ role: "user", content: [{ type: "text", text: "I love red cars." }] }],
+ { user_id: "borat", mem0ApiKey: "m0-xxx" }
+);
+```
+
+### Supported Providers
+
+`openai`, `anthropic`, `google`, `groq`
+
+---
+
+## OpenAI Agents SDK
+
+Source: [docs.mem0.ai/integrations/openai-agents-sdk](https://docs.mem0.ai/integrations/openai-agents-sdk)
+
+```python
+from agents import Agent, Runner, function_tool
+from mem0 import MemoryClient
+
+mem0 = MemoryClient()
+
+@function_tool
+def search_memory(query: str, user_id: str) -> str:
+ """Search through past conversations and memories"""
+ memories = mem0.search(query, user_id=user_id, top_k=3)
+ if memories and memories.get('results'):
+ return "\n".join([f"- {mem['memory']}" for mem in memories['results']])
+ return "No relevant memories found."
+
+@function_tool
+def save_memory(content: str, user_id: str) -> str:
+ """Save important information to memory"""
+ mem0.add([{"role": "user", "content": content}], user_id=user_id)
+ return "Information saved to memory."
+
+agent = Agent(
+ name="Personal Assistant",
+ instructions="""You are a helpful personal assistant with memory capabilities.
+ Use search_memory to recall past conversations.
+ Use save_memory to store important information.""",
+ tools=[search_memory, save_memory],
+ model="gpt-4.1-nano-2025-04-14"
+)
+
+result = Runner.run_sync(agent, "I love Italian food and I'm planning a trip to Rome next month")
+print(result.final_output)
+```
+
+### Multi-Agent with Handoffs
+
+```python
+from agents import Agent, Runner, function_tool
+
+travel_agent = Agent(
+ name="Travel Planner",
+ instructions="You are a travel planning specialist. Use search_memory and save_memory tools.",
+ tools=[search_memory, save_memory],
+ model="gpt-4.1-nano-2025-04-14"
+)
+
+health_agent = Agent(
+ name="Health Advisor",
+ instructions="You are a health and wellness advisor. Use search_memory and save_memory tools.",
+ tools=[search_memory, save_memory],
+ model="gpt-4.1-nano-2025-04-14"
+)
+
+triage_agent = Agent(
+ name="Personal Assistant",
+ instructions="""Route travel questions to Travel Planner, health questions to Health Advisor.""",
+ handoffs=[travel_agent, health_agent],
+ model="gpt-4.1-nano-2025-04-14"
+)
+
+result = Runner.run_sync(triage_agent, "Plan a healthy meal for my Italy trip")
+```
+
+---
+
+## Pipecat (Voice / Real-Time)
+
+Source: [docs.mem0.ai/integrations/pipecat](https://docs.mem0.ai/integrations/pipecat)
+
+```python
+import os
+
+from pipecat.services.mem0 import Mem0MemoryService
+
+memory = Mem0MemoryService(
+ api_key=os.getenv("MEM0_API_KEY"),
+ user_id="alice",
+ agent_id="voice_bot",
+ params={
+ "search_limit": 10,
+ "search_threshold": 0.1,
+ "system_prompt": "Here are your past memories:",
+ "add_as_system_message": True,
+ }
+)
+
+# Use in pipeline
+pipeline = Pipeline([
+ transport.input(),
+ stt,
+ user_context,
+ memory, # Memory enhances context automatically
+ llm,
+ transport.output(),
+ assistant_context
+])
+```
+
+---
+
+## LangGraph
+
+Source: [docs.mem0.ai/integrations/langgraph](https://docs.mem0.ai/integrations/langgraph)
+
+State-based agent workflows with memory persistence. Best for complex conversation flows with branching logic.
+
+```python
+from typing import Annotated, TypedDict, List
+from langgraph.graph import StateGraph, START
+from langgraph.graph.message import add_messages
+from langchain_openai import ChatOpenAI
+from mem0 import MemoryClient
+from langchain_core.messages import SystemMessage, HumanMessage, AIMessage
+
+llm = ChatOpenAI(model="gpt-4")
+mem0 = MemoryClient()
+
+class State(TypedDict):
+ messages: Annotated[List[HumanMessage | AIMessage], add_messages]
+ mem0_user_id: str
+
+def chatbot(state: State):
+ messages = state["messages"]
+ user_id = state["mem0_user_id"]
+
+ # Retrieve relevant memories
+ memories = mem0.search(messages[-1].content, user_id=user_id)
+ context = "Relevant context:\n"
+ for memory in memories["results"]:
+ context += f"- {memory['memory']}\n"
+
+ system_message = SystemMessage(content=f"""You are a helpful support assistant.
+{context}""")
+
+ response = llm.invoke([system_message] + messages)
+
+ # Store the interaction
+ mem0.add(
+ [{"role": "user", "content": messages[-1].content},
+ {"role": "assistant", "content": response.content}],
+ user_id=user_id
+ )
+ return {"messages": [response]}
+
+graph = StateGraph(State)
+graph.add_node("chatbot", chatbot)
+graph.add_edge(START, "chatbot")
+app = graph.compile()
+
+# Usage
+result = app.invoke({
+ "messages": [HumanMessage(content="I need help with my order")],
+ "mem0_user_id": "customer_123"
+})
+```
+
+---
+
+## LlamaIndex
+
+Source: [docs.mem0.ai/integrations/llama-index](https://docs.mem0.ai/integrations/llama-index)
+
+Install: `pip install llama-index-core llama-index-memory-mem0`
+
+LlamaIndex has native Mem0 support via `Mem0Memory`. Works with ReAct and FunctionCalling agents.
+
+```python
+from llama_index.memory.mem0 import Mem0Memory
+
+context = {"user_id": "alice", "agent_id": "llama_agent_1"}
+memory = Mem0Memory.from_client(
+ context=context,
+ search_msg_limit=4, # messages from chat history used for retrieval (default: 5)
+)
+
+# Use with LlamaIndex agent
+from llama_index.core.agent import FunctionCallingAgent
+from llama_index.llms.openai import OpenAI
+
+llm = OpenAI(model="gpt-4")
+agent = FunctionCallingAgent.from_tools(
+ tools=[],
+ llm=llm,
+ memory=memory,
+ verbose=True,
+)
+
+response = agent.chat("I prefer vegetarian restaurants")
+# Memory automatically stores and retrieves context
+response = agent.chat("What kind of food do I like?")
+# Agent retrieves the vegetarian preference from Mem0
+```
+
+---
+
+## AutoGen
+
+Source: [docs.mem0.ai/integrations/autogen](https://docs.mem0.ai/integrations/autogen)
+
+Install: `pip install autogen mem0ai`
+
+Multi-agent conversational systems with memory persistence.
+
+```python
+import os
+
+from autogen import ConversableAgent
+from mem0 import MemoryClient
+
+memory_client = MemoryClient()
+USER_ID = "alice"
+
+agent = ConversableAgent(
+ "chatbot",
+ llm_config={"config_list": [{"model": "gpt-4", "api_key": os.environ["OPENAI_API_KEY"]}]},
+ code_execution_config=False,
+ human_input_mode="NEVER",
+)
+
+def get_context_aware_response(question: str) -> str:
+ # Retrieve memories for context
+ relevant_memories = memory_client.search(question, user_id=USER_ID)
+ context = "\n".join([m["memory"] for m in relevant_memories.get("results", [])])
+
+ prompt = f"""Answer considering previous interactions:
+ Previous context: {context}
+ Question: {question}"""
+
+ reply = agent.generate_reply(messages=[{"content": prompt, "role": "user"}])
+
+ # Store the new interaction
+ memory_client.add(
+ [{"role": "user", "content": question}, {"role": "assistant", "content": reply}],
+ user_id=USER_ID
+ )
+ return reply
+```
+
+---
+
+## All Supported Frameworks
+
+Beyond the examples above, Mem0 integrates with:
+
+| Framework | Type | Install |
+|-----------|------|---------|
+| [Mastra](https://docs.mem0.ai/integrations/mastra) | TS agent framework | `npm install @mastra/mem0` |
+| [ElevenLabs](https://docs.mem0.ai/integrations/elevenlabs) | Voice AI | `pip install elevenlabs mem0ai` |
+| [LiveKit](https://docs.mem0.ai/integrations/livekit) | Real-time voice/video | `pip install livekit-agents mem0ai` |
+| [Camel AI](https://docs.mem0.ai/integrations/camel-ai) | Multi-agent framework | `pip install camel-ai[all] mem0ai` |
+| [AWS Bedrock](https://docs.mem0.ai/integrations/aws-bedrock) | Cloud LLM provider | `pip install boto3 mem0ai` |
+| [Dify](https://docs.mem0.ai/integrations/dify) | Low-code AI platform | Plugin-based |
+| [Google AI ADK](https://docs.mem0.ai/integrations/google-ai-adk) | Google agent framework | `pip install google-adk mem0ai` |
+
+For the general Python pattern (no framework), see the "Common integration pattern" in [SKILL.md](../SKILL.md).
diff --git a/mini_agent/skills/mem0/references/quickstart.md b/mini_agent/skills/mem0/references/quickstart.md
new file mode 100644
index 00000000..0954f0c8
--- /dev/null
+++ b/mini_agent/skills/mem0/references/quickstart.md
@@ -0,0 +1,119 @@
+# Mem0 Platform Quickstart
+
+Get running with Mem0 in 2 minutes. No infrastructure to deploy -- just an API key.
+
+## Prerequisites
+
+- Python 3.10+ or Node.js 18+
+- A Mem0 Platform API key ([Get one here](https://app.mem0.ai/dashboard/api-keys))
+
+## Python Setup
+
+```bash
+pip install mem0ai
+export MEM0_API_KEY="m0-your-api-key"
+```
+
+```python
+from mem0 import MemoryClient
+
+client = MemoryClient(api_key="your-api-key")
+
+# Add a memory
+messages = [
+ {"role": "user", "content": "I'm a vegetarian and allergic to nuts."},
+ {"role": "assistant", "content": "Got it! I'll remember your dietary preferences."}
+]
+client.add(messages, user_id="user123")
+
+# Search memories
+results = client.search("What are my dietary restrictions?", user_id="user123")
+print(results)
+```
+
+### Async Client
+
+```python
+from mem0 import AsyncMemoryClient
+
+client = AsyncMemoryClient(api_key="your-api-key")
+
+await client.add(messages, user_id="user123")
+results = await client.search("query", user_id="user123")
+```
+
+## TypeScript / JavaScript Setup
+
+```bash
+npm install mem0ai
+export MEM0_API_KEY="m0-your-api-key"
+```
+
+```javascript
+import MemoryClient from 'mem0ai';
+
+const client = new MemoryClient({ apiKey: 'your-api-key' });
+
+// Add a memory
+const messages = [
+ {"role": "user", "content": "I'm a vegetarian and allergic to nuts."},
+ {"role": "assistant", "content": "Got it! I'll remember your dietary preferences."}
+];
+await client.add(messages, { user_id: "user123" });
+
+// Search memories
+const results = await client.search("What are my dietary restrictions?", {
+ user_id: "user123"
+});
+console.log(results);
+```
+
+## cURL
+
+```bash
+export MEM0_API_KEY="m0-your-api-key"
+
+# Add memory
+curl -X POST https://api.mem0.ai/v1/memories/ \
+ -H "Authorization: Token $MEM0_API_KEY" \
+ -H "Content-Type: application/json" \
+ -d '{
+ "messages": [
+ {"role": "user", "content": "I am a vegetarian and allergic to nuts."},
+ {"role": "assistant", "content": "Got it! I will remember your dietary preferences."}
+ ],
+ "user_id": "user123"
+ }'
+
+# Search memories
+curl -X POST https://api.mem0.ai/v2/memories/search/ \
+ -H "Authorization: Token $MEM0_API_KEY" \
+ -H "Content-Type: application/json" \
+ -d '{
+ "query": "What are my dietary restrictions?",
+ "filters": {"user_id": "user123"}
+ }'
+```
+
+## Sample Response
+
+```json
+{
+ "results": [
+ {
+ "id": "14e1b28a-2014-40ad-ac42-69c9ef42193d",
+ "memory": "Allergic to nuts",
+ "user_id": "user123",
+ "categories": ["health"],
+ "created_at": "2025-10-22T04:40:22.864647-07:00",
+ "score": 0.30
+ }
+ ]
+}
+```
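
The SDK clients return the same shape, so you can post-filter the raw response directly; a sketch that keeps only memories at or above a score cutoff (the helper name is illustrative, not part of the SDK):

```python
def top_memories(response: dict, min_score: float = 0.3) -> list[str]:
    """Return memory strings whose relevance score meets the cutoff."""
    return [
        r["memory"]
        for r in response.get("results", [])
        if r.get("score", 0.0) >= min_score
    ]
```

Raising `min_score` above the default 0.3 trades recall for precision.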
+
+## Next Steps
+
+- [SDK Guide](sdk-guide.md) -- all methods for Python and TypeScript
+- [API Reference](api-reference.md) -- REST endpoints and memory object structure
+- [Integration Patterns](integration-patterns.md) -- LangChain, CrewAI, Vercel AI, etc.
diff --git a/mini_agent/skills/mem0/references/sdk-guide.md b/mini_agent/skills/mem0/references/sdk-guide.md
new file mode 100644
index 00000000..dc744d62
--- /dev/null
+++ b/mini_agent/skills/mem0/references/sdk-guide.md
@@ -0,0 +1,308 @@
+# Mem0 SDK Guide
+
+Complete SDK reference for Python and TypeScript. All methods use `MemoryClient` (Platform API).
+
+## Initialization
+
+**Python:**
+```python
+from mem0 import MemoryClient
+client = MemoryClient(api_key="m0-your-api-key")
+```
+
+**Python (Async):**
+```python
+from mem0 import AsyncMemoryClient
+client = AsyncMemoryClient(api_key="m0-your-api-key")
+```
+
+**TypeScript:**
+```typescript
+import MemoryClient from 'mem0ai';
+const client = new MemoryClient({ apiKey: 'm0-your-api-key' });
+```
+
+Constructors accept the API key (required; `api_key` in Python, `apiKey` in TypeScript) and `host` (optional, default: `https://api.mem0.ai`).
+
+---
+
+## add() -- Store Memories
+
+**Python:**
+```python
+messages = [
+ {"role": "user", "content": "I'm a vegetarian and allergic to nuts."},
+ {"role": "assistant", "content": "Got it! I'll remember that."}
+]
+client.add(messages, user_id="alice")
+
+# With metadata
+client.add(messages, user_id="alice", metadata={"source": "onboarding"})
+
+# With graph memory
+client.add(messages, user_id="alice", enable_graph=True)
+```
+
+**TypeScript:**
+```typescript
+await client.add(messages, { user_id: "alice" });
+await client.add(messages, { user_id: "alice", metadata: { source: "onboarding" } });
+await client.add(messages, { user_id: "alice", enable_graph: true });
+```
+
+### Parameters
+
+| Name | Type | Description |
+|------|------|-------------|
+| `messages` | array | `[{"role": "user", "content": "..."}]` |
+| `user_id` | string | User identifier (recommended) |
+| `agent_id` | string | Agent identifier |
+| `run_id` | string | Session identifier |
+| `metadata` | object | Custom key-value pairs |
+| `enable_graph` | boolean | Activate knowledge graph |
+| `infer` | boolean | If `false`, store raw text without inference (default: `true`) |
+| `immutable` | boolean | Prevents modification after creation |
+| `expiration_date` | string | Auto-expiry date (`YYYY-MM-DD`) |
+| `includes` | string | Preference filters for inclusion |
+| `excludes` | string | Preference filters for exclusion |
+| `async_mode` | boolean | Async processing (default: `true`). Set `false` to wait |
+
+### Advanced Add Options
+
+```python
+# Immutable -- cannot be modified or overwritten
+client.add(messages, user_id="alice", immutable=True)
+
+# Expiring memory
+client.add(messages, user_id="alice", expiration_date="2025-12-31")
+
+# Selective extraction
+client.add(messages, user_id="alice", includes="dietary preferences", excludes="payment info")
+
+# Agent + session scoping
+client.add(messages, user_id="alice", agent_id="nutrition-agent", run_id="session-456")
+
+# Synchronous processing (wait for completion)
+client.add(messages, user_id="alice", async_mode=False)
+
+# Raw text -- skip LLM inference
+client.add(
+ [{"role": "user", "content": "User prefers dark mode."}],
+ user_id="alice",
+ infer=False,
+)
+```
+
+---
+
+## search() -- Find Memories
+
+**Python:**
+```python
+results = client.search("dietary preferences?", user_id="alice")
+
+# With filters and reranking
+results = client.search(
+ query="work experience",
+ filters={"AND": [{"user_id": "alice"}, {"categories": {"contains": "professional_details"}}]},
+ top_k=5,
+ rerank=True,
+ threshold=0.5
+)
+
+# With graph relations
+results = client.search("colleagues", user_id="alice", enable_graph=True)
+
+# Keyword search
+results = client.search("vegetarian", user_id="alice", keyword_search=True)
+```
+
+**TypeScript:**
+```typescript
+const results = await client.search("dietary preferences", { user_id: "alice" });
+const results = await client.search("work experience", {
+ filters: { AND: [{ user_id: "alice" }, { categories: { contains: "professional_details" } }] },
+ top_k: 5,
+ rerank: true,
+});
+```
+
+### Parameters
+
+| Name | Type | Description |
+|------|------|-------------|
+| `query` | string | Natural language search query |
+| `user_id` | string | Filter by user |
+| `filters` | object | V2 filter object (AND/OR operators) |
+| `top_k` | number | Number of results (default: 10) |
+| `rerank` | boolean | Enable reranking for better relevance |
+| `threshold` | number | Minimum similarity score (default: 0.3) |
+| `keyword_search` | boolean | Use keyword-based search |
+| `enable_graph` | boolean | Include graph relations |
+
+### Common Filter Patterns
+
+```python
+# Single user (shorthand)
+client.search("query", user_id="alice")
+
+# OR across agents
+filters={"OR": [{"user_id": "alice"}, {"agent_id": {"in": ["travel-agent", "sports-agent"]}}]}
+
+# Category filtering (partial match)
+filters={"AND": [{"user_id": "alice"}, {"categories": {"contains": "finance"}}]}
+
+# Category filtering (exact match)
+filters={"AND": [{"user_id": "alice"}, {"categories": {"in": ["personal_information"]}}]}
+
+# Wildcard (match any non-null run)
+filters={"AND": [{"user_id": "alice"}, {"run_id": "*"}]}
+
+# Date range
+filters={"AND": [
+ {"user_id": "alice"},
+ {"created_at": {"gte": "2024-01-01T00:00:00Z"}},
+ {"created_at": {"lt": "2024-02-01T00:00:00Z"}}
+]}
+
+# Exclude categories with NOT
+filters={"AND": [{"user_id": "user_123"}, {"NOT": {"categories": {"in": ["spam", "test"]}}}]}
+
+# Multi-dimensional query
+filters={"AND": [
+ {"user_id": "user_123"},
+ {"keywords": {"icontains": "invoice"}},
+ {"categories": {"in": ["finance"]}},
+ {"created_at": {"gte": "2024-01-01T00:00:00Z"}}
+]}
+```
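
Because these filters are plain dictionaries, they compose programmatically; a small helper (name illustrative, not an SDK function) that builds the user-plus-date-range filter shown above:

```python
def user_date_filter(user_id: str, start: str, end: str) -> dict:
    """AND filter: one user's memories created in [start, end)."""
    return {
        "AND": [
            {"user_id": user_id},
            {"created_at": {"gte": start}},
            {"created_at": {"lt": end}},
        ]
    }
```

Pass the result straight to `search` or `get_all`, e.g. `client.search("query", filters=user_date_filter("alice", "2024-01-01T00:00:00Z", "2024-02-01T00:00:00Z"))`.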
+
+---
+
+## get() / getAll() -- Retrieve Memories
+
+**Python:**
+```python
+# Single memory by ID
+memory = client.get(memory_id="ea925981-...")
+
+# All memories for a user
+memories = client.get_all(filters={"AND": [{"user_id": "alice"}]})
+
+# With date range
+memories = client.get_all(
+ filters={"AND": [
+ {"user_id": "alex"},
+ {"created_at": {"gte": "2024-07-01", "lte": "2024-07-31"}}
+ ]}
+)
+
+# With graph data
+memories = client.get_all(filters={"AND": [{"user_id": "alice"}]}, enable_graph=True)
+```
+
+**TypeScript:**
+```typescript
+const memory = await client.get("ea925981-...");
+const memories = await client.getAll({ filters: { AND: [{ user_id: "alice" }] } });
+```
+
+**Note:** `get_all` requires at least one of `user_id`, `agent_id`, `app_id`, or `run_id` in filters.
+
+---
+
+## update() -- Modify Memories
+
+**Python:**
+```python
+client.update(memory_id="ea925981-...", text="Updated: vegan since 2024")
+client.update(memory_id="ea925981-...", text="Updated", metadata={"verified": True})
+```
+
+**TypeScript:**
+```typescript
+await client.update("ea925981-...", { text: "Updated: vegan since 2024" });
+```
+
+Cannot update immutable memories.
+
+---
+
+## delete() / deleteAll() -- Remove Memories
+
+**Python:**
+```python
+client.delete(memory_id="ea925981-...")
+client.delete_all(user_id="alice") # Irreversible bulk delete
+```
+
+**TypeScript:**
+```typescript
+await client.delete("ea925981-...");
+await client.deleteAll({ user_id: "alice" });
+```
+
+---
+
+## history() -- Track Changes
+
+**Python:**
+```python
+history = client.history(memory_id="ea925981-...")
+# Returns: [{previous_value, new_value, action, timestamps}]
+```
+
+**TypeScript:**
+```typescript
+const history = await client.history("ea925981-...");
+```
+
+---
+
+## Batch Operations (TypeScript)
+
+```typescript
+// Batch update
+await client.batchUpdate([
+ { memoryId: "uuid-1", text: "Updated text" },
+ { memoryId: "uuid-2", text: "Another updated text" },
+]);
+
+// Batch delete
+await client.batchDelete(["uuid-1", "uuid-2", "uuid-3"]);
+```
+
+---
+
+## Additional Methods
+
+```python
+# List all users/agents/sessions with memories
+users = client.users()
+
+# Delete a user/agent entity
+client.delete_users(user_id="alice")
+
+# Submit feedback on a memory
+client.feedback(memory_id="...", feedback="POSITIVE", feedback_reason="Accurate extraction")
+
+# Export memories
+export = client.create_memory_export(filters={"AND": [{"user_id": "alice"}]})
+data = client.get_memory_export(memory_export_id=export["id"])
+```
+
+---
+
+## Common Pitfalls
+
+1. **Entity cross-filtering fails silently** -- `AND` with `user_id` + `agent_id` returns empty. Use `OR`.
+2. **SQL operators rejected** -- use `gte`, `lt`, etc. Not `>=`, `<`.
+3. **Metadata filtering is limited** -- only top-level keys with `eq`, `contains`, `ne`.
+4. **Wildcard `*` excludes null** -- only matches non-null values.
+5. **Default threshold is 0.3** -- increase for stricter matching.
+6. **Async processing** -- memories process asynchronously. Wait 2-3s after `add()` before searching.
+7. **Immutable memories** -- cannot be updated or deleted once created.
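
For pitfall 6, a fixed sleep is fragile; polling until results appear is more robust. A generic sketch (not an SDK feature; the callable is whatever search you run):

```python
import time

def search_until_ready(search_fn, attempts: int = 5, delay: float = 1.0) -> dict:
    """Call search_fn until it returns non-empty results or attempts run out."""
    response = {}
    for i in range(attempts):
        response = search_fn()
        if response.get("results"):
            return response
        if i < attempts - 1:
            time.sleep(delay)
    return response
```

Typical usage: `search_until_ready(lambda: client.search("diet", user_id="alice"))`. Alternatively, pass `async_mode=False` to `add()` to avoid the wait entirely.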
+
+## Naming Conventions
+
+Python uses `snake_case` (`user_id`, `memory_id`, `get_all`). TypeScript uses `camelCase` for methods (`getAll`, `deleteAll`, `batchUpdate`) but `snake_case` for API parameters (`user_id`, `agent_id`).
diff --git a/mini_agent/skills/mem0/references/use-cases.md b/mini_agent/skills/mem0/references/use-cases.md
new file mode 100644
index 00000000..eaca8889
--- /dev/null
+++ b/mini_agent/skills/mem0/references/use-cases.md
@@ -0,0 +1,720 @@
+# Mem0 Use Cases & Examples
+
+Real-world implementation patterns for Mem0 Platform. Each use case includes complete, runnable code in both Python and TypeScript.
+
+## Table of Contents
+
+- [Personalized AI Companion](#1-personalized-ai-companion)
+- [Customer Support with Categories](#2-customer-support-with-categories)
+- [Healthcare Coach](#3-healthcare-coach)
+- [Content Creation Workflow](#4-content-creation-workflow)
+- [Multi-Agent / Multi-Tenant](#5-multi-agent--multi-tenant)
+- [Personalized Search](#6-personalized-search)
+- [Email Intelligence](#7-email-intelligence)
+- [Common Patterns Across Use Cases](#common-patterns-across-use-cases)
+
+---
+
+## 1. Personalized AI Companion
+
+A fitness coach that remembers goals, preferences, and progress across sessions. Mem0 persists context across app restarts — no session state needed.
+
+### Implementation (Python)
+
+```python
+from mem0 import MemoryClient
+from openai import OpenAI
+
+mem0 = MemoryClient()
+openai_client = OpenAI()
+
+def chat(user_input: str, user_id: str) -> str:
+ # 1. Retrieve relevant memories
+ memories = mem0.search(user_input, user_id=user_id)
+ context = "\n".join([f"- {m['memory']}" for m in memories.get("results", [])])
+
+ # 2. Generate response with memory context
+ system_prompt = f"""You are Ray, a personal fitness coach.
+Use these known facts about the user to personalize your response:
+{context if context else 'No prior context yet.'}"""
+
+ response = openai_client.chat.completions.create(
+ model="gpt-4.1-nano-2025-04-14",
+ messages=[
+ {"role": "system", "content": system_prompt},
+ {"role": "user", "content": user_input},
+ ]
+ )
+ reply = response.choices[0].message.content
+
+ # 3. Store interaction for future context
+ mem0.add(
+ [{"role": "user", "content": user_input}, {"role": "assistant", "content": reply}],
+ user_id=user_id
+ )
+ return reply
+
+# Usage
+chat("I want to run a marathon in under 4 hours", user_id="max")
+# Next day, app restarted:
+chat("What should I focus on today?", user_id="max")
+# Ray remembers the sub-4 marathon goal
+```
+
+### Implementation (TypeScript)
+
+```typescript
+import MemoryClient from 'mem0ai';
+import OpenAI from 'openai';
+
+const mem0 = new MemoryClient({ apiKey: process.env.MEM0_API_KEY! });
+const openai = new OpenAI();
+
+async function chat(userInput: string, userId: string): Promise<string> {
+ // 1. Retrieve relevant memories
+ const memories = await mem0.search(userInput, { user_id: userId });
+ const context = memories.results
+ ?.map((m: any) => `- ${m.memory}`)
+ .join('\n') || 'No prior context yet.';
+
+ // 2. Generate response with memory context
+ const response = await openai.chat.completions.create({
+ model: 'gpt-4.1-nano-2025-04-14',
+ messages: [
+ { role: 'system', content: `You are Ray, a personal fitness coach.\nUser context:\n${context}` },
+ { role: 'user', content: userInput },
+ ],
+ });
+ const reply = response.choices[0].message.content!;
+
+ // 3. Store interaction
+ await mem0.add(
+ [{ role: 'user', content: userInput }, { role: 'assistant', content: reply }],
+ { user_id: userId }
+ );
+ return reply;
+}
+```
+
+### Key Benefits
+
+- Context persists across app restarts — no session management needed
+- Memories are automatically deduplicated and updated
+- Works with any LLM provider (OpenAI, Anthropic, etc.)
+
+**Best for:** Fitness coaches, tutors, therapists — any assistant that needs to remember goals across sessions.
+
+---
+
+## 2. Customer Support with Categories
+
+Auto-categorize support data so teams retrieve the right facts fast. Uses custom categories for structured retrieval.
+
+### Implementation (Python)
+
+```python
+from mem0 import MemoryClient
+
+client = MemoryClient()
+
+# 1. Define categories at the project level (one-time setup)
+custom_categories = [
+ {"support_tickets": "Customer issues and resolutions"},
+ {"account_info": "Account details and preferences"},
+ {"billing": "Payment history and billing questions"},
+ {"product_feedback": "Feature requests and feedback"},
+]
+client.project.update(custom_categories=custom_categories)
+
+# 2. Store interactions — auto-classified into categories
+def log_support_interaction(user_id: str, message: str, priority: str = "normal"):
+ client.add(
+ [{"role": "user", "content": message}],
+ user_id=user_id,
+ metadata={"priority": priority, "source": "support_chat"}
+ )
+
+# 3. Retrieve by category
+def get_billing_issues(user_id: str):
+ return client.get_all(
+ filters={
+ "AND": [
+ {"user_id": user_id},
+ {"categories": {"in": ["billing"]}}
+ ]
+ }
+ )
+
+def search_support_history(user_id: str, query: str):
+ return client.search(
+ query,
+ filters={
+ "AND": [
+ {"user_id": user_id},
+ {"categories": {"contains": "support_tickets"}}
+ ]
+ },
+ top_k=5
+ )
+
+# Usage
+log_support_interaction("maria", "I was charged twice for last month's subscription", priority="high")
+log_support_interaction("maria", "The dashboard is loading slowly on mobile")
+billing = get_billing_issues("maria") # Returns only billing-related memories
+```
+
+### Implementation (TypeScript)
+
+```typescript
+import MemoryClient from 'mem0ai';
+
+const client = new MemoryClient({ apiKey: process.env.MEM0_API_KEY! });
+
+// Setup categories (one-time)
+await client.updateProject({
+ custom_categories: [
+ { support_tickets: 'Customer issues and resolutions' },
+ { billing: 'Payment history and billing questions' },
+ { product_feedback: 'Feature requests and feedback' },
+ ],
+});
+
+async function logInteraction(userId: string, message: string, priority = 'normal') {
+ await client.add(
+ [{ role: 'user', content: message }],
+ { user_id: userId, metadata: { priority, source: 'support_chat' } }
+ );
+}
+
+async function getBillingIssues(userId: string) {
+ return client.getAll({
+ filters: { AND: [{ user_id: userId }, { categories: { in: ['billing'] } }] },
+ });
+}
+```
+
+### Key Benefits
+
+- Automatic categorization — no manual tagging
+- Filter by category for structured retrieval
+- Metadata (`priority`, `source`) enables multi-dimensional queries
+
+**Best for:** Help desks, SaaS support, e-commerce — structured retrieval by category eliminates manual scanning.
+
+---
+
+## 3. Healthcare Coach
+
+Guide patients with an assistant that remembers medical history. Uses high `threshold` for confident retrieval in safety-critical contexts.
+
+### Implementation (Python)
+
+```python
+from mem0 import MemoryClient
+from openai import OpenAI
+
+mem0 = MemoryClient()
+openai_client = OpenAI()
+
+def save_patient_info(user_id: str, information: str):
+ mem0.add(
+ [{"role": "user", "content": information}],
+ user_id=user_id,
+ run_id="healthcare_session",
+ metadata={"type": "patient_information"}
+ )
+
+def consult(user_id: str, question: str) -> str:
+ # High threshold for medical accuracy
+ memories = mem0.search(question, user_id=user_id, top_k=5, threshold=0.7)
+ context = "\n".join([f"- {m['memory']}" for m in memories.get("results", [])])
+
+ response = openai_client.chat.completions.create(
+ model="gpt-4.1-nano-2025-04-14",
+ messages=[
+ {"role": "system", "content": f"You are a health coach. Patient context:\n{context}"},
+ {"role": "user", "content": question},
+ ]
+ )
+ reply = response.choices[0].message.content
+
+ # Store the interaction
+ mem0.add(
+ [{"role": "user", "content": question}, {"role": "assistant", "content": reply}],
+ user_id=user_id,
+ run_id="healthcare_session",
+ )
+ return reply
+
+# Usage
+save_patient_info("alex", "I'm allergic to penicillin and take metformin for type 2 diabetes")
+consult("alex", "Can I take amoxicillin for my sore throat?")
+# Remembers penicillin allergy — amoxicillin is a penicillin-type antibiotic
+```
+
+### Implementation (TypeScript)
+
+```typescript
+import MemoryClient from 'mem0ai';
+import OpenAI from 'openai';
+
+const mem0 = new MemoryClient({ apiKey: process.env.MEM0_API_KEY! });
+const openai = new OpenAI();
+
+async function savePatientInfo(userId: string, info: string) {
+ await mem0.add(
+ [{ role: 'user', content: info }],
+ { user_id: userId, run_id: 'healthcare_session', metadata: { type: 'patient_information' } }
+ );
+}
+
+async function consult(userId: string, question: string): Promise<string> {
+ const memories = await mem0.search(question, {
+ user_id: userId,
+ top_k: 5,
+ threshold: 0.7,
+ });
+ const context = memories.results?.map((m: any) => `- ${m.memory}`).join('\n') || '';
+
+ const response = await openai.chat.completions.create({
+ model: 'gpt-4.1-nano-2025-04-14',
+ messages: [
+ { role: 'system', content: `You are a health coach. Patient context:\n${context}` },
+ { role: 'user', content: question },
+ ],
+ });
+ const reply = response.choices[0].message.content!;
+
+ await mem0.add(
+ [{ role: 'user', content: question }, { role: 'assistant', content: reply }],
+ { user_id: userId, run_id: 'healthcare_session' }
+ );
+ return reply;
+}
+```
+
+### Key Benefits
+
+- High threshold (0.7) ensures only confident matches for safety-critical retrieval
+- Session scoping via `run_id` groups related health interactions
+- Metadata tagging separates patient info from conversation history
+
+**Best for:** Telehealth, wellness apps, patient management — persistent health context across visits.
+
+---
+
+## 4. Content Creation Workflow
+
+Store voice guidelines once and apply them across every draft. Uses `run_id` and `metadata` to scope writing preferences per session.
+
+### Implementation (Python)
+
+```python
+from mem0 import MemoryClient
+from openai import OpenAI
+
+mem0 = MemoryClient()
+openai_client = OpenAI()
+
+def store_writing_preferences(user_id: str, preferences: str):
+ mem0.add(
+ [{"role": "user", "content": preferences}],
+ user_id=user_id,
+ run_id="editing_session",
+ metadata={"type": "preferences", "category": "writing_style"}
+ )
+
+def draft_content(user_id: str, topic: str) -> str:
+ # Retrieve writing preferences
+ prefs = mem0.search(
+ "writing style preferences",
+ filters={"AND": [{"user_id": user_id}, {"run_id": "editing_session"}]}
+ )
+ style_context = "\n".join([f"- {m['memory']}" for m in prefs.get("results", [])])
+
+ response = openai_client.chat.completions.create(
+ model="gpt-4.1-nano-2025-04-14",
+ messages=[
+ {"role": "system", "content": f"Write content matching these style preferences:\n{style_context}"},
+ {"role": "user", "content": f"Write a blog post about: {topic}"},
+ ]
+ )
+ return response.choices[0].message.content
+
+# Usage
+store_writing_preferences("writer_01", "I prefer short sentences. Active voice. No jargon. Use analogies.")
+draft_content("writer_01", "Why AI memory matters for chatbots")
+# Drafts content matching the stored voice guidelines
+```
+
+### Implementation (TypeScript)
+
+```typescript
+import MemoryClient from 'mem0ai';
+import OpenAI from 'openai';
+
+const mem0 = new MemoryClient({ apiKey: process.env.MEM0_API_KEY! });
+const openai = new OpenAI();
+
+async function storePreferences(userId: string, preferences: string) {
+ await mem0.add(
+ [{ role: 'user', content: preferences }],
+ { user_id: userId, run_id: 'editing_session', metadata: { type: 'preferences' } }
+ );
+}
+
+async function draftContent(userId: string, topic: string): Promise<string> {
+ const prefs = await mem0.search('writing style preferences', {
+ filters: { AND: [{ user_id: userId }, { run_id: 'editing_session' }] },
+ });
+ const styleContext = prefs.results?.map((m: any) => `- ${m.memory}`).join('\n') || '';
+
+ const response = await openai.chat.completions.create({
+ model: 'gpt-4.1-nano-2025-04-14',
+ messages: [
+ { role: 'system', content: `Write content matching these preferences:\n${styleContext}` },
+ { role: 'user', content: `Write a blog post about: ${topic}` },
+ ],
+ });
+ return response.choices[0].message.content!;
+}
+```
+
+### Key Benefits
+
+- Voice consistency across all content without repeating guidelines
+- Scoped sessions let you maintain different style profiles
+- Preferences update automatically as you refine them
+
+**Best for:** Marketing teams, technical writers, agencies — consistent voice across all content.
+
+---
+
+## 5. Multi-Agent / Multi-Tenant
+
+Keep memories separate using `user_id`, `agent_id`, `app_id`, and `run_id` scoping. Critical for multi-agent workflows and multi-tenant apps.
+
+### Implementation (Python)
+
+```python
+from mem0 import MemoryClient
+
+client = MemoryClient()
+
+# Store memories scoped to user + agent + session
+def store_scoped_memory(messages: list, user_id: str, agent_id: str, run_id: str, app_id: str):
+ client.add(
+ messages,
+ user_id=user_id,
+ agent_id=agent_id,
+ run_id=run_id,
+ app_id=app_id
+ )
+
+# Query within a specific scope
+def search_user_session(query: str, user_id: str, app_id: str, run_id: str):
+ """Search memories for a specific user within a specific session."""
+ return client.search(
+ query,
+ filters={
+ "AND": [
+ {"user_id": user_id},
+ {"app_id": app_id},
+ {"run_id": run_id}
+ ]
+ }
+ )
+
+def search_agent_knowledge(query: str, agent_id: str, app_id: str):
+ """Search all memories an agent has across all users."""
+ return client.search(
+ query,
+ filters={
+ "AND": [
+ {"agent_id": agent_id},
+ {"app_id": app_id}
+ ]
+ }
+ )
+
+# Usage: Travel concierge app with multiple agents
+store_scoped_memory(
+ [{"role": "user", "content": "I'm vegetarian and prefer window seats"}],
+ user_id="traveler_cam",
+ agent_id="travel_planner",
+ run_id="tokyo-2025",
+ app_id="concierge_app"
+)
+
+# User-scoped query: "What does Cam prefer?"
+user_mems = search_user_session("dietary restrictions?", "traveler_cam", "concierge_app", "tokyo-2025")
+
+# Agent-scoped query: "What do all travelers prefer?" (across users)
+agent_mems = search_agent_knowledge("common dietary restrictions?", "travel_planner", "concierge_app")
+```
+
+### Implementation (TypeScript)
+
+```typescript
+import MemoryClient from 'mem0ai';
+
+const client = new MemoryClient({ apiKey: process.env.MEM0_API_KEY! });
+
+async function storeScopedMemory(
+ messages: Array<{ role: string; content: string }>,
+ userId: string, agentId: string, runId: string, appId: string
+) {
+ await client.add(messages, {
+ user_id: userId,
+ agent_id: agentId,
+ run_id: runId,
+ app_id: appId,
+ });
+}
+
+async function searchUserSession(query: string, userId: string, appId: string, runId: string) {
+ return client.search(query, {
+ filters: { AND: [{ user_id: userId }, { app_id: appId }, { run_id: runId }] },
+ });
+}
+
+async function searchAgentKnowledge(query: string, agentId: string, appId: string) {
+ return client.search(query, {
+ filters: { AND: [{ agent_id: agentId }, { app_id: appId }] },
+ });
+}
+```
+
+### Key Benefits
+
+- Full isolation between users, agents, sessions, and apps
+- Query at any scope level — user, agent, session, or app-wide
+- No memory leakage between tenants
+
+**Best for:** Multi-agent workflows, multi-tenant SaaS — proper isolation at every level.
+
+---
+
+## 6. Personalized Search
+
+Blend real-time search results with personal context. Uses `custom_instructions` to infer preferences from queries.
+
+### Implementation (Python)
+
+```python
+from mem0 import MemoryClient
+from openai import OpenAI
+
+mem0 = MemoryClient()
+openai_client = OpenAI()
+
+# One-time setup: configure Mem0 to infer from queries
+mem0.project.update(
+ custom_instructions="""Infer user preferences and facts from their search queries.
+Extract dietary preferences, location, interests, and purchase history."""
+)
+
+def personalized_search(user_id: str, query: str, search_results: list) -> str:
+ # Get user context from memory
+ memories = mem0.search(query, user_id=user_id, top_k=5)
+ user_context = "\n".join([f"- {m['memory']}" for m in memories.get("results", [])])
+
+ response = openai_client.chat.completions.create(
+ model="gpt-4.1-nano-2025-04-14",
+ messages=[
+ {"role": "system", "content": f"Personalize search results using user context:\n{user_context}"},
+ {"role": "user", "content": f"Query: {query}\n\nSearch results:\n{search_results}"},
+ ]
+ )
+ reply = response.choices[0].message.content
+
+ # Store the query to learn preferences over time
+ mem0.add(
+ [{"role": "user", "content": query}],
+ user_id=user_id
+ )
+ return reply
+
+# Usage
+personalized_search("user_42", "best restaurants nearby", ["Restaurant A", "Restaurant B"])
+# Over time, Mem0 learns: "user prefers vegetarian, lives in Austin"
+# Future searches are automatically personalized
+```
+
+### Implementation (TypeScript)
+
+```typescript
+import MemoryClient from 'mem0ai';
+import OpenAI from 'openai';
+
+const mem0 = new MemoryClient({ apiKey: process.env.MEM0_API_KEY! });
+const openai = new OpenAI();
+
+async function personalizedSearch(userId: string, query: string, searchResults: string[]): Promise<string> {
+ const memories = await mem0.search(query, { user_id: userId, top_k: 5 });
+ const context = memories.results?.map((m: any) => `- ${m.memory}`).join('\n') || '';
+
+ const response = await openai.chat.completions.create({
+ model: 'gpt-4.1-nano-2025-04-14',
+ messages: [
+ { role: 'system', content: `Personalize results using user context:\n${context}` },
+ { role: 'user', content: `Query: ${query}\nResults: ${searchResults.join(', ')}` },
+ ],
+ });
+ const reply = response.choices[0].message.content!;
+
+ await mem0.add([{ role: 'user', content: query }], { user_id: userId });
+ return reply;
+}
+```
+
+### Key Benefits
+
+- Learns preferences from queries automatically via `custom_instructions`
+- Personalizes any search provider (Tavily, Google, Bing)
+- Zero manual preference setup — improves over time
+
+**Best for:** Personalized search engines, recommendation systems — search results tailored to individual users.
+
+---
+
+## 7. Email Intelligence
+
+Capture, categorize, and recall inbox threads using persistent memories with rich metadata.
+
+### Implementation (Python)
+
+```python
+from mem0 import MemoryClient
+
+client = MemoryClient()
+
+def store_email(user_id: str, sender: str, subject: str, body: str, date: str):
+ client.add(
+ [{"role": "user", "content": f"Email from {sender}: {subject}\n\n{body}"}],
+ user_id=user_id,
+ metadata={"email_type": "incoming", "sender": sender, "subject": subject, "date": date}
+ )
+
+def search_emails(user_id: str, query: str):
+ return client.search(
+ query,
+ filters={"AND": [{"user_id": user_id}, {"categories": {"contains": "email"}}]},
+ top_k=10
+ )
+
+def get_emails_from_sender(user_id: str, sender: str):
+ return client.get_all(
+ filters={
+ "AND": [
+ {"user_id": user_id},
+ {"metadata": {"contains": sender}}
+ ]
+ }
+ )
+
+# Usage
+store_email("alice", "bob@acme.com", "Q3 Budget Review", "Attached is the Q3 budget...", "2025-01-15")
+store_email("alice", "carol@acme.com", "Sprint Planning", "Here are the priorities...", "2025-01-16")
+
+results = search_emails("alice", "budget discussions")
+sender_emails = get_emails_from_sender("alice", "bob@acme.com")
+```
+
+### Implementation (TypeScript)
+
+```typescript
+import MemoryClient from 'mem0ai';
+
+const client = new MemoryClient({ apiKey: process.env.MEM0_API_KEY! });
+
+async function storeEmail(userId: string, sender: string, subject: string, body: string, date: string) {
+ await client.add(
+ [{ role: 'user', content: `Email from ${sender}: ${subject}\n\n${body}` }],
+ { user_id: userId, metadata: { email_type: 'incoming', sender, subject, date } }
+ );
+}
+
+async function searchEmails(userId: string, query: string) {
+ return client.search(query, {
+ filters: { AND: [{ user_id: userId }, { categories: { contains: 'email' } }] },
+ top_k: 10,
+ });
+}
+```
+
+### Key Benefits
+
+- Rich metadata enables multi-dimensional queries (sender, date, subject)
+- Category filtering separates emails from other memory types
+- Semantic search across all email content
+
+**Best for:** Inbox management, email automation — searchable email memories with metadata filtering.
+
+---
+
+## Common Patterns Across Use Cases
+
+### Pattern 1: Retrieve → Generate → Store
+
+Every use case follows the same 3-step loop:
+
+```python
+# 1. Retrieve relevant context
+memories = mem0.search(user_input, user_id=user_id)
+context = "\n".join([m["memory"] for m in memories.get("results", [])])
+
+# 2. Generate with context
+response = llm.generate(system_prompt=f"Context:\n{context}", user_input=user_input)
+
+# 3. Store the interaction
+mem0.add(
+ [{"role": "user", "content": user_input}, {"role": "assistant", "content": response}],
+ user_id=user_id
+)
+```
+
+### Pattern 2: Scope with Entity Identifiers
+
+Use `user_id`, `agent_id`, `app_id`, and `run_id` to isolate memories:
+
+```python
+# User-level: personal preferences
+client.add(messages, user_id="alice")
+
+# Session-level: conversation within one session
+client.add(messages, user_id="alice", run_id="session_123")
+
+# Agent-level: agent-specific knowledge
+client.add(messages, agent_id="support_bot", app_id="helpdesk")
+```
+
+### Pattern 3: Rich Metadata for Filtering
+
+Attach structured metadata for multi-dimensional queries:
+
+```python
+# Store with metadata
+client.add(messages, user_id="alice", metadata={"priority": "high", "source": "phone_call"})
+
+# Filter by category + metadata
+client.search("billing issues", filters={
+ "AND": [{"user_id": "alice"}, {"categories": {"contains": "billing"}}]
+})
+```
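+
+Because filter clauses are plain nested dicts, a tiny helper keeps multi-clause queries readable. A minimal sketch — the `and_filter` helper is our own illustration, not part of the Mem0 SDK:
+
+```python
+def and_filter(*clauses: dict) -> dict:
+    """Combine clause dicts into a Mem0-style AND filter expression."""
+    return {"AND": list(clauses)}
+
+# Equivalent to writing the nested dict by hand
+f = and_filter({"user_id": "alice"}, {"categories": {"contains": "billing"}})
+```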
+
+### Pattern 4: Custom Instructions for Domain-Specific Extraction
+
+Control what Mem0 extracts from conversations:
+
+```python
+client.project.update(
+ custom_instructions="Extract medical conditions, medications, and allergies. Exclude billing info."
+)
+```
+
+---
+
+## More Examples
+
+For 30+ cookbooks with complete working code: [docs.mem0.ai/cookbooks](https://docs.mem0.ai/cookbooks)
diff --git a/mini_agent/skills/mem0/scripts/mem0_doc_search.py b/mini_agent/skills/mem0/scripts/mem0_doc_search.py
new file mode 100644
index 00000000..1ff8de1c
--- /dev/null
+++ b/mini_agent/skills/mem0/scripts/mem0_doc_search.py
@@ -0,0 +1,224 @@
+#!/usr/bin/env python3
+"""
+Mem0 Documentation Search Agent (Mintlify-based)
+On-demand search tool for querying Mem0 documentation without storing content locally.
+
+This tool leverages Mintlify's documentation structure to perform just-in-time
+retrieval of technical information from docs.mem0.ai.
+
+Usage:
+ python mem0_doc_search.py --query "how to add graph memory"
+ python mem0_doc_search.py --query "filter syntax for categories"
+ python mem0_doc_search.py --page "/platform/features/graph-memory"
+ python mem0_doc_search.py --index
+ python mem0_doc_search.py --query "webhook events" --section platform
+
+Purpose:
+ - Avoid bloating local context with full documentation
+ - Enable just-in-time retrieval of technical details
+ - Query specific documentation pages on demand
+ - Search across the full Mem0 documentation site
+"""
+
+import argparse
+import json
+import sys
+import urllib.error
+import urllib.parse
+import urllib.request
+
+DOCS_BASE = "https://docs.mem0.ai"
+SEARCH_ENDPOINT = f"{DOCS_BASE}/api/search"
+LLMS_INDEX = f"{DOCS_BASE}/llms.txt"
+
+# Known documentation sections for targeted retrieval
+SECTION_MAP = {
+ "platform": [
+ "/platform/overview",
+ "/platform/quickstart",
+ "/platform/features",
+ "/platform/features/graph-memory",
+ "/platform/features/selective-memory",
+ "/platform/features/custom-categories",
+ "/platform/features/v2-memory-filters",
+ "/platform/features/async-client",
+ "/platform/features/webhooks",
+ "/platform/features/multimodal-support",
+ ],
+ "api": [
+ "/api-reference/memory/add-memories",
+ "/api-reference/memory/v2-search-memories",
+ "/api-reference/memory/v2-get-memories",
+ "/api-reference/memory/get-memory",
+ "/api-reference/memory/update-memory",
+ "/api-reference/memory/delete-memory",
+ ],
+ "open-source": [
+ "/open-source/overview",
+ "/open-source/python-quickstart",
+ "/open-source/node-quickstart",
+ "/open-source/features",
+ "/open-source/features/graph-memory",
+ "/open-source/features/rest-api",
+ "/open-source/configure-components",
+ ],
+ "openmemory": [
+ "/openmemory/overview",
+ "/openmemory/quickstart",
+ ],
+ "sdks": [
+ "/sdks/python",
+ "/sdks/js",
+ ],
+ "integrations": [
+ "/integrations",
+ ],
+}
+
+
+def fetch_url(url: str) -> str:
+ """Fetch content from a URL."""
+ req = urllib.request.Request(url, headers={"User-Agent": "Mem0DocSearchAgent/1.0"})
+ try:
+ with urllib.request.urlopen(req, timeout=15) as resp:
+ return resp.read().decode("utf-8")
+ except urllib.error.HTTPError as e:
+ return f"HTTP Error {e.code}: {e.reason}"
+ except urllib.error.URLError as e:
+ return f"URL Error: {e.reason}"
+
+
+def search_docs(query: str, section: str | None = None) -> dict:
+ """
+ Search Mem0 documentation using Mintlify's search API.
+ Falls back to the llms.txt index for keyword matching if the API is unavailable.
+ """
+ # Try Mintlify search API first
+ params = urllib.parse.urlencode({"query": query})
+ search_url = f"{SEARCH_ENDPOINT}?{params}"
+
+ try:
+ result = fetch_url(search_url)
+ data = json.loads(result)
+ if isinstance(data, dict) and data.get("results"):
+ results = data["results"]
+ if section and section in SECTION_MAP:
+ section_paths = SECTION_MAP[section]
+ results = [r for r in results if any(r.get("url", "").startswith(p) for p in section_paths)]
+ return {"source": "mintlify_search", "results": results}
+    except Exception:  # includes json.JSONDecodeError; fall back to the llms.txt index
+ pass
+
+ # Fallback: search llms.txt index for matching URLs
+ index_content = fetch_url(LLMS_INDEX)
+ query_lower = query.lower()
+ matching_urls = []
+
+ for line in index_content.splitlines():
+ line = line.strip()
+ if not line or line.startswith("#"):
+ continue
+ if query_lower in line.lower():
+ matching_urls.append(line)
+
+ if section and section in SECTION_MAP:
+ section_paths = SECTION_MAP[section]
+ matching_urls = [u for u in matching_urls if any(p in u for p in section_paths)]
+
+ return {
+ "source": "llms_txt_index",
+ "query": query,
+ "matching_urls": matching_urls[:20],
+ "suggestion": "Fetch specific URLs for detailed content",
+ }
+
+
+def fetch_page(page_path: str) -> dict:
+ """Fetch a specific documentation page."""
+ url = f"{DOCS_BASE}{page_path}" if page_path.startswith("/") else page_path
+ content = fetch_url(url)
+ return {"url": url, "content": content[:10000], "truncated": len(content) > 10000}
+
+
+def get_index() -> dict:
+ """Fetch the full documentation index from llms.txt."""
+ content = fetch_url(LLMS_INDEX)
+ urls = [line.strip() for line in content.splitlines() if line.strip() and not line.startswith("#")]
+ return {"total_pages": len(urls), "urls": urls, "sections": list(SECTION_MAP.keys())}
+
+
+def list_section(section: str) -> dict:
+ """List all known pages in a documentation section."""
+ if section not in SECTION_MAP:
+ return {"error": f"Unknown section: {section}", "available": list(SECTION_MAP.keys())}
+ return {
+ "section": section,
+ "pages": [f"{DOCS_BASE}{p}" for p in SECTION_MAP[section]],
+ }
+
+
+def main():
+ parser = argparse.ArgumentParser(description="Search Mem0 documentation on demand")
+ parser.add_argument("--query", help="Search query for documentation")
+ parser.add_argument("--page", help="Fetch a specific page path (e.g., /platform/features/graph-memory)")
+ parser.add_argument("--index", action="store_true", help="Show full documentation index")
+ parser.add_argument("--section", help="Filter by section or list section pages")
+ parser.add_argument("--json", action="store_true", help="Output as JSON")
+
+ args = parser.parse_args()
+
+ if args.index:
+ result = get_index()
+ elif args.section and not args.query:
+ result = list_section(args.section)
+ elif args.page:
+ result = fetch_page(args.page)
+ elif args.query:
+ result = search_docs(args.query, section=args.section)
+ else:
+ parser.print_help()
+ sys.exit(1)
+
+ if args.json:
+ print(json.dumps(result, indent=2))
+ else:
+ if isinstance(result, dict):
+ if "results" in result:
+ print(f"Source: {result.get('source', 'unknown')}")
+ for r in result["results"]:
+ print(f" - {r.get('title', 'N/A')}: {r.get('url', 'N/A')}")
+ if r.get("description"):
+ print(f" {r['description'][:200]}")
+ elif "matching_urls" in result:
+ print(f"Source: {result['source']}")
+ print(f"Query: {result['query']}")
+ for url in result["matching_urls"]:
+ print(f" - {url}")
+ if result.get("suggestion"):
+ print(f"\n{result['suggestion']}")
+ elif "urls" in result:
+ print(f"Total documentation pages: {result['total_pages']}")
+ print(f"Sections: {', '.join(result['sections'])}")
+ for url in result["urls"][:30]:
+ print(f" - {url}")
+ if result["total_pages"] > 30:
+ print(f" ... and {result['total_pages'] - 30} more")
+ elif "pages" in result:
+ print(f"Section: {result['section']}")
+ for page in result["pages"]:
+ print(f" - {page}")
+ elif "content" in result:
+ print(f"URL: {result['url']}")
+ if result.get("truncated"):
+ print("[Content truncated to 10000 chars]")
+ print(result["content"])
+ elif "error" in result:
+ print(f"Error: {result['error']}")
+ if result.get("available"):
+ print(f"Available sections: {', '.join(result['available'])}")
+ else:
+ print(json.dumps(result, indent=2))
+
+
+if __name__ == "__main__":
+ main()
diff --git a/mini_agent/skills/schedule/SKILL.md b/mini_agent/skills/schedule/SKILL.md
new file mode 100644
index 00000000..45d0a2ef
--- /dev/null
+++ b/mini_agent/skills/schedule/SKILL.md
@@ -0,0 +1,41 @@
+---
+name: schedule
+description: "Create a scheduled task that can be run on demand or automatically on an interval."
+---
+
+You are creating a reusable shortcut from the current session. Follow these steps:
+
+## 1. Analyze the session
+
+Review the session history to identify the core task the user performed or requested. Distill it into a single, repeatable objective.
+
+## 2. Draft a prompt
+
+The prompt will be used for future autonomous runs — it must be entirely self-contained. Future runs will NOT have access to this session, so never reference "the current conversation," "the above," or any ephemeral context.
+
+Include in the prompt:
+- A clear objective statement (what to accomplish)
+- Specific steps to execute
+- Any relevant file paths, URLs, repositories, or tool names
+- Expected output or success criteria
+- Any constraints or preferences the user expressed
+
+Write the prompt in second-person imperative ("Check the inbox…", "Run the test suite…"). Keep it concise but complete enough that another Claude session could execute it cold.
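+
+For instance, a self-contained prompt for an inbox-summary task might read (illustrative only):
+
+```text
+Check the user's email inbox and summarize unread messages from the last
+24 hours. Group messages by sender and flag anything marked urgent.
+Success: a short bulleted report covering every unread message, without
+quoting any message body in full.
+```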
+
+## 3. Choose a taskName
+
+Pick a short, descriptive name in kebab-case (e.g. "daily-inbox-summary", "weekly-dep-audit", "format-pr-description").
+
+## 4. Determine scheduling
+
+Pick one:
+- **Recurring** ("every morning", "weekdays at 5pm", "hourly") → `cronExpression`
+- **One-time with a specific moment** ("remind me in 5 minutes", "tomorrow at 3pm", "next Friday") → `fireAt` ISO timestamp
+- **Ad-hoc** (no automatic run; user will trigger manually) → omit both
+- **Ambiguous** → propose a schedule and ask the user to confirm before proceeding
+
+**cronExpression:** Evaluated in the user's LOCAL timezone, not UTC. Use local times directly — e.g. "8am every Friday" → `0 8 * * 5`.
+
+**fireAt:** Compute the exact moment and emit a full ISO 8601 string with timezone offset, e.g. `2026-03-05T14:30:00-08:00`. Never use cron for one-time tasks — cron has no one-shot semantics.
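+
+The `fireAt` computation takes only a few lines of Python; the timezone and clock time below are illustrative stand-ins for real session context:
+
+```python
+from datetime import datetime, timedelta
+from zoneinfo import ZoneInfo
+
+# "Remind me in 5 minutes", with America/Los_Angeles standing in for the
+# user's local timezone and a fixed "now" standing in for the real clock.
+now = datetime(2026, 3, 5, 14, 25, tzinfo=ZoneInfo("America/Los_Angeles"))
+fire_at = (now + timedelta(minutes=5)).isoformat()
+# fire_at == "2026-03-05T14:30:00-08:00"  (March precedes the DST switch, so PST/-08:00)
+```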
+
+Finally, call the "create_scheduled_task" tool.
\ No newline at end of file
diff --git a/mini_agent/skills/setup-cowork/SKILL.md b/mini_agent/skills/setup-cowork/SKILL.md
new file mode 100644
index 00000000..3f42ad74
--- /dev/null
+++ b/mini_agent/skills/setup-cowork/SKILL.md
@@ -0,0 +1,47 @@
+---
+name: setup-cowork
+description: "Guided Cowork setup — install a matching plugin, try a skill, connect tools."
+---
+
+# Setup Cowork
+
+Help the user get Cowork configured for their work. A few steps — role, plugin, try a skill, connectors.
+
+## Step 1 — Role
+
+Your initial message should frame what Cowork is: it reads your email, searches your docs, drafts reports, and keeps going while you're away. Educate the user on the key concepts: *Skills* are reusable workflows you run with `/name`; *Plugins* bundle skills for a domain or use case; *Connectors* wire in your tools. Keep it to two or three sentences. Hit the beats: multi-step and autonomous, uses your real tools, skills/plugins/connectors defined.
+
+Next, ask the user for their role. Something like: "Let's get you set up — takes a few minutes. What kind of work do you do?" Then call the tool to show the onboarding role picker, which will display some roles to the user: do not list the roles yourself.
+
+## Step 2 — Install a plugin
+
+The role picker tool result will contain their selection. If it was dismissed, it means they didn't select a role: just suggest the productivity plugin and move on.
+
+Search the plugin marketplace for their role — include already-installed plugins in the search so if they already have the right one, you showcase it rather than suggesting something worse. Pick the best match, then suggest that plugin to the user. End your turn here — they'll click Add and see its skills.
+
+If the search comes up empty, fall back to the productivity plugin.
+
+## Step 3 — Try a skill
+
+After the plugin is suggested: explain what just happened. Something like: "That plugin bundles skills for [their role] work — reusable workflows you trigger with `/name`."
+
+Wait for them to try one or type something.
+
+If they invoke a skill (you'll see a /name message), help them with it briefly, but remember you're still running setup-cowork. Once that's done or they pause, bring it back to setup with something like "Nice, that's how skills work. One more thing to set up: connectors." Then immediately move on to suggesting connectors (Step 4).
+
+## Step 4 — Connectors
+
+Once they've tried a skill (or typed something to move on): explain connectors briefly — "Connectors plug in your actual tools so skills have real context — your email, calendar, docs."
+
+First, search the connector registry using their role as the keyword. Then render connector suggestions with the top 2-3 UUIDs from the search results, passing the role as the keyword so the card header reads "For your [role]".
+
+## Step 5 — Wrap
+
+Once they've connected something, or waved it off: close short — "You're set. Start a new task from the sidebar anytime, or type `/` to see your skills."
+
+## Ground rules
+
+- One step at a time.
+- Skips are fine. If they pass on a step, move on.
+- Keep each message short. Two or three sentences plus the widget, not a wall.
+- The user trying a skill mid-flow is expected. Help with it, then return to where you left off. Don't let a skill invocation end the setup.
diff --git a/mini_agent/skills/web-artifacts-builder/LICENSE.txt b/mini_agent/skills/web-artifacts-builder/LICENSE.txt
new file mode 100644
index 00000000..7a4a3ea2
--- /dev/null
+++ b/mini_agent/skills/web-artifacts-builder/LICENSE.txt
@@ -0,0 +1,202 @@
+
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
\ No newline at end of file
diff --git a/mini_agent/skills/web-artifacts-builder/SKILL.md b/mini_agent/skills/web-artifacts-builder/SKILL.md
new file mode 100644
index 00000000..8b39b19f
--- /dev/null
+++ b/mini_agent/skills/web-artifacts-builder/SKILL.md
@@ -0,0 +1,74 @@
+---
+name: web-artifacts-builder
+description: Suite of tools for creating elaborate, multi-component claude.ai HTML artifacts using modern frontend web technologies (React, Tailwind CSS, shadcn/ui). Use for complex artifacts requiring state management, routing, or shadcn/ui components - not for simple single-file HTML/JSX artifacts.
+license: Complete terms in LICENSE.txt
+---
+
+# Web Artifacts Builder
+
+To build powerful frontend claude.ai artifacts, follow these steps:
+1. Initialize the frontend repo using `scripts/init-artifact.sh`
+2. Develop your artifact by editing the generated code
+3. Bundle all code into a single HTML file using `scripts/bundle-artifact.sh`
+4. Display artifact to user
+5. (Optional) Test the artifact
+
+**Stack**: React 18 + TypeScript + Vite + Parcel (bundling) + Tailwind CSS + shadcn/ui
+
+## Design & Style Guidelines
+
+VERY IMPORTANT: To avoid what is often referred to as "AI slop", steer clear of excessive centered layouts, purple gradients, uniform rounded corners, and the Inter font.
+
+## Quick Start
+
+### Step 1: Initialize Project
+
+Run the initialization script to create a new React project:
+```bash
+bash scripts/init-artifact.sh <project-name>
+cd <project-name>
+```
+
+This creates a fully configured project with:
+- ✅ React + TypeScript (via Vite)
+- ✅ Tailwind CSS 3.4.1 with shadcn/ui theming system
+- ✅ Path aliases (`@/`) configured
+- ✅ 40+ shadcn/ui components pre-installed
+- ✅ All Radix UI dependencies included
+- ✅ Parcel configured for bundling (via .parcelrc)
+- ✅ Node 18+ compatibility (auto-detects and pins Vite version)
+
+### Step 2: Develop Your Artifact
+
+To build the artifact, edit the generated files. See **Common Development Tasks** below for guidance.
+
+### Step 3: Bundle to Single HTML File
+
+To bundle the React app into a single HTML artifact:
+```bash
+bash scripts/bundle-artifact.sh
+```
+
+This creates `bundle.html`: a self-contained artifact with all JavaScript, CSS, and dependencies inlined. This file can be shared directly in Claude conversations as an artifact.
+
+**Requirements**: Your project must have an `index.html` in the root directory.
+
+**What the script does**:
+- Installs bundling dependencies (parcel, @parcel/config-default, parcel-resolver-tspaths, html-inline)
+- Creates `.parcelrc` config with path alias support
+- Builds with Parcel (no source maps)
+- Inlines all assets into single HTML using html-inline
+
+### Step 4: Share Artifact with User
+
+Finally, share the bundled HTML file in conversation with the user so they can view it as an artifact.
+
+### Step 5: Testing/Visualizing the Artifact (Optional)
+
+Note: This is a completely optional step. Only perform if necessary or requested.
+
+To test/visualize the artifact, use available tools (including other Skills or built-in tools like Playwright or Puppeteer). In general, avoid testing the artifact upfront as it adds latency between the request and when the finished artifact can be seen. Test later, after presenting the artifact, if requested or if issues arise.
+
+## Reference
+
+- **shadcn/ui components**: https://ui.shadcn.com/docs/components
\ No newline at end of file
diff --git a/mini_agent/skills/web-artifacts-builder/scripts/bundle-artifact.sh b/mini_agent/skills/web-artifacts-builder/scripts/bundle-artifact.sh
new file mode 100644
index 00000000..c13d229e
--- /dev/null
+++ b/mini_agent/skills/web-artifacts-builder/scripts/bundle-artifact.sh
@@ -0,0 +1,54 @@
+#!/bin/bash
+set -e
+
+echo "📦 Bundling React app to single HTML artifact..."
+
+# Check if we're in a project directory
+if [ ! -f "package.json" ]; then
+ echo "❌ Error: No package.json found. Run this script from your project root."
+ exit 1
+fi
+
+# Check if index.html exists
+if [ ! -f "index.html" ]; then
+ echo "❌ Error: No index.html found in project root."
+ echo " This script requires an index.html entry point."
+ exit 1
+fi
+
+# Install bundling dependencies
+echo "📦 Installing bundling dependencies..."
+pnpm add -D parcel @parcel/config-default parcel-resolver-tspaths html-inline
+
+# Create Parcel config with tspaths resolver
+if [ ! -f ".parcelrc" ]; then
+ echo "🔧 Creating Parcel configuration with path alias support..."
+ cat > .parcelrc << 'EOF'
+{
+ "extends": "@parcel/config-default",
+ "resolvers": ["parcel-resolver-tspaths", "..."]
+}
+EOF
+fi
+
+# Clean previous build
+echo "🧹 Cleaning previous build..."
+rm -rf dist bundle.html
+
+# Build with Parcel
+echo "🔨 Building with Parcel..."
+pnpm exec parcel build index.html --dist-dir dist --no-source-maps
+
+# Inline everything into single HTML
+echo "🎯 Inlining all assets into single HTML file..."
+pnpm exec html-inline dist/index.html > bundle.html
+
+# Get file size
+FILE_SIZE=$(du -h bundle.html | cut -f1)
+
+echo ""
+echo "✅ Bundle complete!"
+echo "📄 Output: bundle.html ($FILE_SIZE)"
+echo ""
+echo "You can now use this single HTML file as an artifact in Claude conversations."
+echo "To test locally: open bundle.html in your browser"
\ No newline at end of file
diff --git a/mini_agent/skills/web-artifacts-builder/scripts/init-artifact.sh b/mini_agent/skills/web-artifacts-builder/scripts/init-artifact.sh
new file mode 100644
index 00000000..7d1022d8
--- /dev/null
+++ b/mini_agent/skills/web-artifacts-builder/scripts/init-artifact.sh
@@ -0,0 +1,322 @@
+#!/bin/bash
+
+# Exit on error
+set -e
+
+# Detect Node version
+NODE_VERSION=$(node -v | cut -d'v' -f2 | cut -d'.' -f1)
+
+echo "🔍 Detected Node.js version: $NODE_VERSION"
+
+if [ "$NODE_VERSION" -lt 18 ]; then
+ echo "❌ Error: Node.js 18 or higher is required"
+ echo " Current version: $(node -v)"
+ exit 1
+fi
+
+# Set Vite version based on Node version
+if [ "$NODE_VERSION" -ge 20 ]; then
+ VITE_VERSION="latest"
+ echo "✅ Using Vite latest (Node 20+)"
+else
+ VITE_VERSION="5.4.11"
+ echo "✅ Using Vite $VITE_VERSION (Node 18 compatible)"
+fi
+
+# Detect OS and set in-place sed syntax (BSD sed on macOS requires a backup
+# suffix attached to -i; an empty '' suffix breaks when expanded from a variable)
+if [[ "$OSTYPE" == "darwin"* ]]; then
+    SED_INPLACE="sed -i.bak"
+else
+    SED_INPLACE="sed -i"
+fi
+
+# Check if pnpm is installed
+if ! command -v pnpm &> /dev/null; then
+ echo "📦 pnpm not found. Installing pnpm..."
+ npm install -g pnpm
+fi
+
+# Check if project name is provided
+if [ -z "$1" ]; then
+    echo "❌ Usage: bash scripts/init-artifact.sh <project-name>"
+ exit 1
+fi
+
+PROJECT_NAME="$1"
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+COMPONENTS_TARBALL="$SCRIPT_DIR/shadcn-components.tar.gz"
+
+# Check if components tarball exists
+if [ ! -f "$COMPONENTS_TARBALL" ]; then
+ echo "❌ Error: shadcn-components.tar.gz not found in script directory"
+ echo " Expected location: $COMPONENTS_TARBALL"
+ exit 1
+fi
+
+echo "🚀 Creating new React + Vite project: $PROJECT_NAME"
+
+# Create new Vite project (always use latest create-vite, pin vite version later)
+pnpm create vite "$PROJECT_NAME" --template react-ts
+
+# Navigate into project directory
+cd "$PROJECT_NAME"
+
+echo "🧹 Cleaning up Vite template..."
+$SED_INPLACE "s|<title>.*</title>|<title>$PROJECT_NAME</title>|" index.html
+rm -f index.html.bak
+
+echo "📦 Installing base dependencies..."
+pnpm install
+
+# Pin Vite version for Node 18
+if [ "$NODE_VERSION" -lt 20 ]; then
+ echo "📌 Pinning Vite to $VITE_VERSION for Node 18 compatibility..."
+ pnpm add -D vite@$VITE_VERSION
+fi
+
+echo "📦 Installing Tailwind CSS and dependencies..."
+pnpm install -D tailwindcss@3.4.1 postcss autoprefixer @types/node tailwindcss-animate
+pnpm install class-variance-authority clsx tailwind-merge lucide-react next-themes
+
+echo "⚙️ Creating Tailwind and PostCSS configuration..."
+cat > postcss.config.js << 'EOF'
+export default {
+ plugins: {
+ tailwindcss: {},
+ autoprefixer: {},
+ },
+}
+EOF
+
+echo "📝 Configuring Tailwind with shadcn theme..."
+cat > tailwind.config.js << 'EOF'
+/** @type {import('tailwindcss').Config} */
+module.exports = {
+ darkMode: ["class"],
+ content: [
+ "./index.html",
+ "./src/**/*.{js,ts,jsx,tsx}",
+ ],
+ theme: {
+ extend: {
+ colors: {
+ border: "hsl(var(--border))",
+ input: "hsl(var(--input))",
+ ring: "hsl(var(--ring))",
+ background: "hsl(var(--background))",
+ foreground: "hsl(var(--foreground))",
+ primary: {
+ DEFAULT: "hsl(var(--primary))",
+ foreground: "hsl(var(--primary-foreground))",
+ },
+ secondary: {
+ DEFAULT: "hsl(var(--secondary))",
+ foreground: "hsl(var(--secondary-foreground))",
+ },
+ destructive: {
+ DEFAULT: "hsl(var(--destructive))",
+ foreground: "hsl(var(--destructive-foreground))",
+ },
+ muted: {
+ DEFAULT: "hsl(var(--muted))",
+ foreground: "hsl(var(--muted-foreground))",
+ },
+ accent: {
+ DEFAULT: "hsl(var(--accent))",
+ foreground: "hsl(var(--accent-foreground))",
+ },
+ popover: {
+ DEFAULT: "hsl(var(--popover))",
+ foreground: "hsl(var(--popover-foreground))",
+ },
+ card: {
+ DEFAULT: "hsl(var(--card))",
+ foreground: "hsl(var(--card-foreground))",
+ },
+ },
+ borderRadius: {
+ lg: "var(--radius)",
+ md: "calc(var(--radius) - 2px)",
+ sm: "calc(var(--radius) - 4px)",
+ },
+ keyframes: {
+ "accordion-down": {
+ from: { height: "0" },
+ to: { height: "var(--radix-accordion-content-height)" },
+ },
+ "accordion-up": {
+ from: { height: "var(--radix-accordion-content-height)" },
+ to: { height: "0" },
+ },
+ },
+ animation: {
+ "accordion-down": "accordion-down 0.2s ease-out",
+ "accordion-up": "accordion-up 0.2s ease-out",
+ },
+ },
+ },
+ plugins: [require("tailwindcss-animate")],
+}
+EOF
+
+# Add Tailwind directives and CSS variables to index.css
+echo "🎨 Adding Tailwind directives and CSS variables..."
+cat > src/index.css << 'EOF'
+@tailwind base;
+@tailwind components;
+@tailwind utilities;
+
+@layer base {
+ :root {
+ --background: 0 0% 100%;
+ --foreground: 0 0% 3.9%;
+ --card: 0 0% 100%;
+ --card-foreground: 0 0% 3.9%;
+ --popover: 0 0% 100%;
+ --popover-foreground: 0 0% 3.9%;
+ --primary: 0 0% 9%;
+ --primary-foreground: 0 0% 98%;
+ --secondary: 0 0% 96.1%;
+ --secondary-foreground: 0 0% 9%;
+ --muted: 0 0% 96.1%;
+ --muted-foreground: 0 0% 45.1%;
+ --accent: 0 0% 96.1%;
+ --accent-foreground: 0 0% 9%;
+ --destructive: 0 84.2% 60.2%;
+ --destructive-foreground: 0 0% 98%;
+ --border: 0 0% 89.8%;
+ --input: 0 0% 89.8%;
+ --ring: 0 0% 3.9%;
+ --radius: 0.5rem;
+ }
+
+ .dark {
+ --background: 0 0% 3.9%;
+ --foreground: 0 0% 98%;
+ --card: 0 0% 3.9%;
+ --card-foreground: 0 0% 98%;
+ --popover: 0 0% 3.9%;
+ --popover-foreground: 0 0% 98%;
+ --primary: 0 0% 98%;
+ --primary-foreground: 0 0% 9%;
+ --secondary: 0 0% 14.9%;
+ --secondary-foreground: 0 0% 98%;
+ --muted: 0 0% 14.9%;
+ --muted-foreground: 0 0% 63.9%;
+ --accent: 0 0% 14.9%;
+ --accent-foreground: 0 0% 98%;
+ --destructive: 0 62.8% 30.6%;
+ --destructive-foreground: 0 0% 98%;
+ --border: 0 0% 14.9%;
+ --input: 0 0% 14.9%;
+ --ring: 0 0% 83.1%;
+ }
+}
+
+@layer base {
+ * {
+ @apply border-border;
+ }
+ body {
+ @apply bg-background text-foreground;
+ }
+}
+EOF
+
+# Add path aliases to tsconfig.json
+echo "🔧 Adding path aliases to tsconfig.json..."
+node -e "
+const fs = require('fs');
+const config = JSON.parse(fs.readFileSync('tsconfig.json', 'utf8'));
+config.compilerOptions = config.compilerOptions || {};
+config.compilerOptions.baseUrl = '.';
+config.compilerOptions.paths = { '@/*': ['./src/*'] };
+fs.writeFileSync('tsconfig.json', JSON.stringify(config, null, 2));
+"
+
+# Add path aliases to tsconfig.app.json
+echo "🔧 Adding path aliases to tsconfig.app.json..."
+node -e "
+const fs = require('fs');
+const path = 'tsconfig.app.json';
+const content = fs.readFileSync(path, 'utf8');
+// Remove comments manually
+const lines = content.split('\n').filter(line => !line.trim().startsWith('//'));
+const jsonContent = lines.join('\n');
+const config = JSON.parse(jsonContent.replace(/\/\*[\s\S]*?\*\//g, '').replace(/,(\s*[}\]])/g, '\$1'));
+config.compilerOptions = config.compilerOptions || {};
+config.compilerOptions.baseUrl = '.';
+config.compilerOptions.paths = { '@/*': ['./src/*'] };
+fs.writeFileSync(path, JSON.stringify(config, null, 2));
+"
+
+# Update vite.config.ts
+echo "⚙️ Updating Vite configuration..."
+cat > vite.config.ts << 'EOF'
+import path from "path";
+import react from "@vitejs/plugin-react";
+import { defineConfig } from "vite";
+
+export default defineConfig({
+ plugins: [react()],
+ resolve: {
+ alias: {
+ "@": path.resolve(__dirname, "./src"),
+ },
+ },
+});
+EOF
+
+# Install all shadcn/ui dependencies
+echo "📦 Installing shadcn/ui dependencies..."
+pnpm install @radix-ui/react-accordion @radix-ui/react-aspect-ratio @radix-ui/react-avatar @radix-ui/react-checkbox @radix-ui/react-collapsible @radix-ui/react-context-menu @radix-ui/react-dialog @radix-ui/react-dropdown-menu @radix-ui/react-hover-card @radix-ui/react-label @radix-ui/react-menubar @radix-ui/react-navigation-menu @radix-ui/react-popover @radix-ui/react-progress @radix-ui/react-radio-group @radix-ui/react-scroll-area @radix-ui/react-select @radix-ui/react-separator @radix-ui/react-slider @radix-ui/react-slot @radix-ui/react-switch @radix-ui/react-tabs @radix-ui/react-toast @radix-ui/react-toggle @radix-ui/react-toggle-group @radix-ui/react-tooltip
+pnpm install sonner cmdk vaul embla-carousel-react react-day-picker react-resizable-panels date-fns react-hook-form @hookform/resolvers zod
+
+# Extract shadcn components from tarball
+echo "📦 Extracting shadcn/ui components..."
+tar -xzf "$COMPONENTS_TARBALL" -C src/
+
+# Create components.json for reference
+echo "📝 Creating components.json config..."
+cat > components.json << 'EOF'
+{
+ "$schema": "https://ui.shadcn.com/schema.json",
+ "style": "default",
+ "rsc": false,
+ "tsx": true,
+ "tailwind": {
+ "config": "tailwind.config.js",
+ "css": "src/index.css",
+ "baseColor": "slate",
+ "cssVariables": true,
+ "prefix": ""
+ },
+ "aliases": {
+ "components": "@/components",
+ "utils": "@/lib/utils",
+ "ui": "@/components/ui",
+ "lib": "@/lib",
+ "hooks": "@/hooks"
+ }
+}
+EOF
+
+echo "✅ Setup complete! You can now use Tailwind CSS and shadcn/ui in your project."
+echo ""
+echo "📦 Included components (40+ total):"
+echo " - accordion, alert, aspect-ratio, avatar, badge, breadcrumb"
+echo " - button, calendar, card, carousel, checkbox, collapsible"
+echo " - command, context-menu, dialog, drawer, dropdown-menu"
+echo " - form, hover-card, input, label, menubar, navigation-menu"
+echo " - popover, progress, radio-group, resizable, scroll-area"
+echo " - select, separator, sheet, skeleton, slider, sonner"
+echo " - switch, table, tabs, textarea, toast, toggle, toggle-group, tooltip"
+echo ""
+echo "To start developing:"
+echo " cd $PROJECT_NAME"
+echo " pnpm dev"
+echo ""
+echo "📚 Import components like:"
+echo " import { Button } from '@/components/ui/button'"
+echo " import { Card, CardHeader, CardTitle, CardContent } from '@/components/ui/card'"
+echo " import { Dialog, DialogContent, DialogTrigger } from '@/components/ui/dialog'"
diff --git a/mini_agent/skills/web-artifacts-builder/scripts/shadcn-components.tar.gz b/mini_agent/skills/web-artifacts-builder/scripts/shadcn-components.tar.gz
new file mode 100644
index 00000000..cdbe7cdd
Binary files /dev/null and b/mini_agent/skills/web-artifacts-builder/scripts/shadcn-components.tar.gz differ
diff --git a/pyproject.toml b/pyproject.toml
index aa222327..a77796c5 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -59,7 +59,16 @@ asyncio_mode = "auto"
# Disable warnings for Tool.execute method signature differences
# We intentionally use explicit parameters in subclasses for better type hints
disable = [
- "arguments-differ", # Allow subclasses to have different parameter signatures
+ "arguments-differ", # Allow subclasses to have different parameter signatures
+ "line-too-long", # Long lines acceptable in this codebase
+ "too-many-locals", # Complex functions with many local variables
+ "too-many-statements", # Complex functions
+ "too-many-branches", # Complex control flow
+ "too-many-arguments", # Functions with many parameters
+    "too-many-positional-arguments",  # Same rationale as too-many-arguments
+    "too-many-instance-attributes",  # Agent/config classes carry many fields
+ "too-few-public-methods", # Data classes and simple containers
+ "duplicate-code", # Duplicate code across files (R0801)
]
[dependency-groups]