Local-first spell checker powered by on-device LLMs. Copy, Enter, Paste! Jolly reads from your clipboard and applies corrections so you can paste the result back. Nothing leaves your device.
Features | Installation | Models | Benchmarks | Screenshots
Spell checkers are annoying — squiggly lines and too much clicking. Pasting text into an AI with "fix spelling" works, but sending your mail and notes to LLM providers feels uncomfortable. Jolly does it locally and makes it fun.
Copy text, hit Enter, paste it back — corrected. Jolly reads your clipboard, passes it through a local LLM, and writes the result back. If your machine doesn't support local inference, you can use API keys or conventional grammar checking via Harper. Everything runs on your device.
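The one-shot flow above can be sketched in Rust. Names here are illustrative, not Jolly's actual API; the real pipeline wires up the system clipboard and llama.cpp bindings, while this sketch models the clipboard as a plain `String` so it stays self-contained:

```rust
/// Anything that can turn raw text into corrected text: a local LLM,
/// an API call, or a rule-based checker like Harper.
trait Corrector {
    fn correct(&self, text: &str) -> String;
}

/// Stand-in corrector for illustration: fixes one known typo.
struct DemoCorrector;

impl Corrector for DemoCorrector {
    fn correct(&self, text: &str) -> String {
        text.replace("teh", "the")
    }
}

/// One-shot flow: read clipboard -> correct -> write clipboard.
fn run_once(clipboard: &mut String, corrector: &impl Corrector) {
    let corrected = corrector.correct(clipboard);
    *clipboard = corrected; // paste now yields the corrected text
}

fn main() {
    let mut clipboard = String::from("I saw teh dog.");
    run_once(&mut clipboard, &DemoCorrector);
    println!("{clipboard}"); // prints "I saw the dog."
}
```

Abstracting the corrector behind a trait is what lets a single flow back all three modes: local LLM, API key, or Harper.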
Built with SvelteKit, Tailwind, Tauri, and llama.cpp, in Rust and TypeScript. Jolly started as a way to learn frontend development and explore the trade-offs between local LLM inference and conventional grammar checkers on a seemingly "simple" task.
- Privacy-first: All inference runs locally — nothing leaves your machine
- One-shot correction: Copy, press Enter, paste — corrected text is in your clipboard
- Multiple backends: Choose from on-device LLMs, OpenRouter via API call, or Harper
Download the latest release for your platform from GitHub Releases:
| Platform | File | Notes |
|---|---|---|
| macOS | Jolly_x.x.x_aarch64.dmg | Apple Silicon (Intel via Rosetta) |
| Windows | Jolly_x.x.x_x64-setup.exe | NSIS installer |
| Linux | Jolly_x.x.x_amd64.deb | Debian/Ubuntu |
| Linux | Jolly_x.x.x_amd64.AppImage | Universal |
Prerequisites: Node.js and the Rust toolchain (both required by Tauri).
```shell
git clone https://github.com/felixscode/jolly.git
cd jolly
npm install
npx tauri build
```

Thanks to the amazing llama.cpp and Vulkan, Jolly detects GPU availability at runtime. If initialization fails, it silently falls back to CPU. It uses Vulkan on Windows and Linux, and Metal on macOS.
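The probe-then-fall-back pattern can be sketched as follows. The names are illustrative (the real check goes through llama.cpp's Vulkan/Metal backends); the point is that a GPU failure never surfaces as an error, only as a slower backend:

```rust
#[derive(Debug, PartialEq)]
enum Backend {
    Gpu { layers_offloaded: u32 },
    Cpu,
}

/// Try to initialize the GPU backend; on any failure, fall back to CPU
/// without surfacing an error to the user.
fn pick_backend(try_gpu: impl Fn() -> Result<u32, String>) -> Backend {
    match try_gpu() {
        Ok(layers) => Backend::Gpu { layers_offloaded: layers },
        Err(_) => Backend::Cpu, // silent fallback: inference still works
    }
}

fn main() {
    // Simulate a machine where Vulkan/Metal initialization fails.
    let backend = pick_backend(|| Err("no Vulkan device".into()));
    assert_eq!(backend, Backend::Cpu);

    // And one where the GPU comes up and 32 layers can be offloaded.
    let backend = pick_backend(|| Ok(32));
    println!("{backend:?}"); // prints "Gpu { layers_offloaded: 32 }"
}
```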
The GRMR models are specifically fine-tuned for grammar correction — they take text in and return corrected text with minimal over-editing. The general-purpose models (Gemma, Mistral) are instruction-following LLMs prompted to fix spelling and grammar.
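The practical difference is how each family is fed. A minimal sketch (the prompt wording is an assumption, not Jolly's actual template): GRMR-style models take the raw text directly, while instruct models need an explicit instruction wrapped around it:

```rust
/// GRMR-style models are fine-tuned text-in/text-out:
/// the input is just the text to correct, nothing else.
fn grmr_input(text: &str) -> String {
    text.to_string()
}

/// General-purpose instruct models (Gemma, Mistral) need an explicit
/// instruction; constraining the output reduces chatty replies.
fn instruct_prompt(text: &str) -> String {
    format!("Fix spelling and grammar. Return only the corrected text.\n\n{text}")
}

fn main() {
    let text = "She dont like apples.";
    println!("{}", grmr_input(text));
    println!("{}", instruct_prompt(text));
}
```

This is also why specialized models tend to over-edit less: there is no instruction to "improve" the text, only a learned mapping from flawed to corrected sentences.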
| Model | Type | Size | Quantization |
|---|---|---|---|
| GRMR V3 3B | Grammar-specialized | 2.0 GB | Q4_K_M |
| GRMR V3 4B | Grammar-specialized | 2.5 GB | Q4_K_M |
| Gemma 3 4B Instruct | General-purpose | 2.5 GB | Q4_K_M |
| Mistral 7B Instruct v0.3 | General-purpose | 4.7 GB | Q4_K_M |
Tested across 8 cases (short, medium, email) in English and German. *Errors fixed* measures how many of the 62 total spelling/grammar errors the model actually corrected. Inference ran on CPU; times will be significantly faster with a GPU.
| Model | Errors Fixed | Exact Match | Avg Latency |
|---|---|---|---|
| OpenRouter gpt-4o-mini | 62/62 | 6/8 | 1.5s |
| Mistral 7B Instruct v0.3 | 58/62 | 4/8 | 8.0s |
| GRMR V3 4B | 54/62 | 3/8 | 4.2s |
| GRMR V3 3B | 46/62 | 2/8 | 3.5s |
| Gemma 3 4B Instruct | 44/62 | 1/8 | 4.6s |
| Harper | 36/62 | 2/8 | 0.2s |
Which model should I use?
- Mistral 7B is the best local model overall (58/62) and the only one that handles German well. It needs 4.7 GB of RAM and is the slowest local option.
- GRMR V3 4B is the recommended tradeoff — fast, small (2.5 GB), and fixes 87% of errors. Best choice for English-only use.
- GRMR V3 3B is similar but lighter (2.0 GB) at the cost of some accuracy.
- Gemma 3 4B is a general-purpose model that works but can't match the grammar-specialized GRMR models.
- Harper is instant (0.2s) but only supports English and misses more complex errors.
- OpenRouter gives the best results but requires an API key and sends text to a remote server.
Run benchmarks yourself from `src-tauri/`:

```shell
cargo run --release --bin benchmark
```
Models are downloaded on demand from Hugging Face and cached locally.
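The check-cache-then-download step can be sketched like this. Paths and the download closure are assumptions for illustration; Jolly's real downloader fetches GGUF files from Hugging Face, while this sketch stubs the fetch so it stays offline:

```rust
use std::path::{Path, PathBuf};

/// Return the cached model path, "downloading" it first if absent.
fn ensure_model(cache_dir: &Path, file_name: &str, download: impl Fn(&Path)) -> PathBuf {
    let target = cache_dir.join(file_name);
    if !target.exists() {
        download(&target); // e.g. fetch the GGUF file from Hugging Face
    }
    target
}

fn main() {
    let dir = std::env::temp_dir().join("jolly-model-cache");
    std::fs::create_dir_all(&dir).unwrap();
    let path = ensure_model(&dir, "grmr-v3-4b.Q4_K_M.gguf", |p| {
        // Stubbed download: write a placeholder file instead of fetching.
        std::fs::write(p, b"gguf").unwrap();
    });
    assert!(path.exists());
    println!("model ready at {}", path.display());
}
```

On later runs the file already exists, so the download closure is never invoked and the model loads straight from cache.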
Tip
You can add any GGUF model via the in-app settings.
- Tauri — desktop app framework
- Rust — systems programming language
- Svelte — reactive UI framework
- llama.cpp — local LLM inference engine
- Harper — grammar checker
- GRMR — grammar fine-tuned models
- Gemma — Google's open models
GPL-3.0 — free and open source. No account, no subscription.


