A personal knowledge management system with AI-assisted organization. Raw ideas are captured through a web interface and immediately persisted to SQLite. A background "AI Gardener" processes notes to extract titles, generate embeddings, suggest links, and propose tags. All AI suggestions are reviewed before being finalized. The system exports to hierarchical markdown files tracked in git for long-term durability.
- Immediate Persistence: Ideas saved instantly to SQLite
- AI Gardener: Background processing for:
- Title extraction
- Embedding generation
- Similar note finding
- Link suggestions
- Tag suggestions
- Human-in-the-Loop: Review and approve/reject all AI suggestions
- Markdown Export: Human-readable backup with YAML frontmatter
- Git Integration: Version history for your knowledge base
- Python 3.9+ installed
- Ollama installed and running (https://ollama.com/download)
- Required models pulled:
ollama pull nomic-embed-text
ollama pull llama3.2
# PowerShell (recommended - has automatic port detection)
.\start-perknow.ps1
# Or with a specific starting port
.\start-perknow.ps1 -StartPort 9000
# Skip auto-opening browser
.\start-perknow.ps1 -NoBrowser

:: Command Prompt
start-perknow.bat

# Initialize directories and git
mkdir -p data export
cd export && git init && cd ..
# Terminal 1: Start AI gardener worker
python scripts/gardener_worker.py
# Terminal 2: Start web server (auto-finds available port)
uvicorn perknow.main:app --reload --port 8003
# Open browser to http://localhost:8003

perknow/
├── perknow/ # Main Python package
│ ├── main.py # FastAPI application
│ ├── database.py # SQLite operations
│ ├── llm_client.py # Ollama integration
│ ├── gardener.py # AI processing logic
│ ├── exporter.py # Markdown export
│ └── templates/ # HTMX + Jinja2 templates
├── scripts/
│ └── gardener_worker.py # Background AI processor
├── data/ # SQLite database (gitignored)
├── export/ # Markdown + Git backup
├── start-perknow.ps1 # Quick start script (PowerShell)
├── start-perknow.bat # Quick start script (CMD)
└── requirements.txt # Python dependencies
- Open http://localhost:8003
- Type your raw idea in the textarea
- Click "Plant Idea"
- The note is immediately saved to SQLite and exported to export/inbox/
The gardener worker automatically:
- Extracts a concise title
- Generates vector embeddings
- Finds similar notes
- Suggests relevant links
- Proposes tags
All suggestions are marked as ai_suggested=TRUE, user_approved=FALSE
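Similar notes are found by comparing embeddings; a minimal cosine-similarity sketch of that idea (the helper names here are illustrative, not perknow's actual API):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors, in [-1.0, 1.0]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def find_similar(query, notes, top_k=3):
    """Rank note ids by similarity of their embedding to the query embedding."""
    scored = [(note_id, cosine_similarity(query, emb)) for note_id, emb in notes.items()]
    return sorted(scored, key=lambda s: s[1], reverse=True)[:top_k]
```

With 768-dimensional vectors from nomic-embed-text, the same ranking applies unchanged; only the vector length differs.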
- Navigate to the Review page
- See all pending AI suggestions
- Click ✅ to approve or ❌ to reject each suggestion
- Approved suggestions update the markdown export
- Browse all notes with pagination
- Click any note to view full details
- See outbound links and backlinks
- View and manage tags
All approved notes are exported to export/ with:
- YAML frontmatter (id, title, timestamps, tags)
- Wiki-style links: [[Related Note]]
- Git version history
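An exported note might look like this (the exact frontmatter field names and layout are illustrative, inferred from the list above):

```markdown
---
id: 42
title: Spaced repetition for code review
created: 2024-01-15T09:30:00
updated: 2024-01-16T11:02:00
tags: [learning, workflow]
---

Raw idea text goes here.

Related: [[Memory techniques]]
```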
Create a .env file to customize:
DATABASE_PATH=data/perknow.db
EXPORT_PATH=export/
OLLAMA_BASE_URL=http://localhost:11434
EMBEDDING_MODEL=nomic-embed-text
CHAT_MODEL=llama3.2
GARDENER_POLL_INTERVAL=5.0
EMBEDDING_DIMENSIONS=768

| Method | Endpoint | Description |
|---|---|---|
| GET | / | Redirect to inbox |
| GET | /inbox | Capture new ideas |
| POST | /api/plant | Create note & queue AI processing |
| GET | /browse | List all notes |
| GET | /note/{id} | View single note |
| GET | /review | Review AI suggestions |
| POST | /api/approve-link/{id} | Approve link suggestion |
| POST | /api/reject-link/{id} | Reject link suggestion |
| POST | /api/approve-tag/{id} | Approve tag suggestion |
| POST | /api/reject-tag/{id} | Reject tag suggestion |
| GET | /health | Health check |
pytest tests/

See perknow/database.py for the full schema. Main tables:
- notes - Note content, embeddings, status
- links - Bidirectional note relationships
- tags - Note categorization
- gardening_queue - Background AI processing jobs
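An illustrative reduction of that schema (the authoritative definitions live in perknow/database.py; the columns below are a sketch, not the real DDL):

```python
import sqlite3

# Illustrative schema only; see perknow/database.py for the real definitions.
SCHEMA = """
CREATE TABLE notes (
    id INTEGER PRIMARY KEY,
    content TEXT NOT NULL,
    title TEXT,
    embedding BLOB,              -- serialized vector
    status TEXT DEFAULT 'raw'
);
CREATE TABLE links (
    id INTEGER PRIMARY KEY,
    source_id INTEGER REFERENCES notes(id),
    target_id INTEGER REFERENCES notes(id),
    ai_suggested INTEGER DEFAULT 0,
    user_approved INTEGER DEFAULT 0
);
CREATE TABLE tags (
    id INTEGER PRIMARY KEY,
    note_id INTEGER REFERENCES notes(id),
    name TEXT NOT NULL,
    ai_suggested INTEGER DEFAULT 0,
    user_approved INTEGER DEFAULT 0
);
CREATE TABLE gardening_queue (
    id INTEGER PRIMARY KEY,
    note_id INTEGER REFERENCES notes(id),
    operation TEXT NOT NULL,
    done INTEGER DEFAULT 0
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(SCHEMA)
```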
- Add the operation type in perknow/gardener.py::OperationType
- Implement a handler in Gardener.process_queue_item()
- Queue the operation when planting a note in main.py::api_plant()
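The dispatch pattern those steps describe can be sketched as follows (the enum values and handler body are placeholders, not perknow's real implementation):

```python
from enum import Enum

class OperationType(str, Enum):
    EXTRACT_TITLE = "extract_title"
    GENERATE_EMBEDDING = "generate_embedding"
    SUMMARIZE = "summarize"  # hypothetical new operation added in step 1

class Gardener:
    def process_queue_item(self, item):
        """Route a queued job to its handler based on operation type (step 2)."""
        op = OperationType(item["operation"])
        if op is OperationType.SUMMARIZE:
            return self._summarize(item["note_id"])
        raise NotImplementedError(f"no handler for {op.value}")

    def _summarize(self, note_id):
        # Placeholder: a real handler would call the LLM via llm_client.
        return f"summary for note {note_id}"
```

Step 3 would then enqueue `{"operation": "summarize", "note_id": ...}` into gardening_queue from api_plant().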
┌─────────────┐ ┌──────────────┐ ┌─────────────┐
│ Browser │──────▶│ FastAPI │──────▶│ SQLite │
│ (HTMX) │◀──────│ (perknow) │◀──────│ (perknow) │
└─────────────┘ └──────────────┘ └──────┬──────┘
│ │ │
│ ▼ │
│ ┌──────────────┐ │
└─────────────▶│ Markdown │◀────────────┘
│ Export │
│ (Git) │
└──────────────┘
▲
│
┌──────────────┐
│ AI Gardener │
│ (Ollama) │
└──────────────┘
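The AI Gardener above runs as a separate process, polling the queue roughly every GARDENER_POLL_INTERVAL seconds. A minimal sketch of such a loop (fetch_next_job and handle are stand-ins, not the actual worker API; max_idle_polls exists only to make the sketch bounded):

```python
import time

def run_worker(fetch_next_job, handle, poll_interval=5.0, max_idle_polls=None):
    """Poll for queued jobs; sleep between polls when the queue is empty."""
    idle = 0
    while True:
        job = fetch_next_job()
        if job is None:
            idle += 1
            if max_idle_polls is not None and idle >= max_idle_polls:
                break  # bounded run, e.g. for testing
            time.sleep(poll_interval)
        else:
            idle = 0
            handle(job)
```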
The quick start script automatically finds an available port. If running manually:
# Check whether a port is already in use
netstat -ano | findstr :8000
# Use different port
uvicorn perknow.main:app --port 8003

# Check if Ollama is running
curl http://localhost:11434/api/tags
# Start Ollama
ollama serve
# Pull required models
ollama pull nomic-embed-text
ollama pull llama3.2

- Check the gardener is running: python scripts/gardener_worker.py
- Check the queue: look at the gardening_queue table in SQLite
- Check Ollama is responding: test with curl
SQLite permits only one writer at a time. Ensure only one process writes to the database concurrently.
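Common mitigations are enabling WAL mode (so readers don't block the writer) and setting a busy timeout (so a second writer waits instead of failing immediately); a sketch, noting that perknow may or may not already do this:

```python
import os
import sqlite3
import tempfile

def open_db(path):
    """Open SQLite with WAL journaling and a busy timeout."""
    conn = sqlite3.connect(path, timeout=30.0)
    conn.execute("PRAGMA journal_mode=WAL;")
    conn.execute("PRAGMA busy_timeout=5000;")  # wait up to 5s on a locked db
    return conn

# Example usage with a temporary file (WAL requires a file-backed database).
fd, db_path = tempfile.mkstemp(suffix=".db")
os.close(fd)
conn = open_db(db_path)
```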
MIT License - See LICENSE file
Contributions welcome! Please:
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests
- Submit a pull request