A Discord bot that reviews English chat messages and returns minimal, natural-sounding corrections for conversational American English.
Instead of over-correcting every message, Fluentify tries to stay quiet when a sentence is already good. If the input is natural enough, it reacts with a ✅. If the input needs improvement, it replies with a corrected version.
- Minimal-intervention correction style for natural chat English
- Discord-native workflow with reactions for already-good messages
- Context-aware message review using previously reviewed conversation history
- Fallback LLM strategy for better resilience under timeout or rate-limit issues
- Automated tests for rules, regressions, and pipeline behavior
- Lightweight health-check server for simple deployment setups
```mermaid
flowchart LR
    A[Discord User Message] --> B[fluentify/discord_app.py]
    B --> C[fluentify/pipeline.py]
    C --> D[Context Builder]
    C --> E[LLM Correction Engine]
    E --> F[Groq Client]
    C --> G[fluentify/core.py]
    C --> H[fluentify/config.py]
    B --> I[Discord Reply or ✅ Reaction]
    J[keep_alive.py] --> K[Flask Health Check]
    L[tests/] --> C
    L --> G
```
```mermaid
flowchart TD
    A[New message received] --> B{From the bot itself?}
    B -- Yes --> X[Ignore]
    B -- No --> C{In target channel?}
    C -- No --> X
    C -- Yes --> D[Build reviewed context]
    D --> E[Generate correction]
    E --> F{Result}
    F -- NOT_ENGLISH or empty --> X
    F -- ERROR --> X
    F -- PERFECT --> G[Add ✅ reaction]
    F -- Corrected sentence --> H[Reply with correction]
```
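The gating and dispatch steps above can be sketched roughly as follows. This is a hedged sketch, not the bot's actual code: names such as `TARGET_CHANNEL_NAME`, `build_context`, and `generate_correction` are illustrative stand-ins for whatever `fluentify/discord_app.py` and `fluentify/pipeline.py` actually expose.

```python
# Rough sketch of the message-gating flow; identifiers are illustrative,
# not the bot's actual API.
TARGET_CHANNEL_NAME = "english-chat"

async def handle_message(message, bot_user, build_context, generate_correction):
    if message.author == bot_user:
        return None  # ignore the bot's own messages
    if message.channel.name != TARGET_CHANNEL_NAME:
        return None  # only act in the target channel
    context = build_context(message.channel)
    result = await generate_correction(message.content, context)
    if result in (None, "", "NOT_ENGLISH", "ERROR"):
        return None  # stay silent on non-English input or errors
    if result == "PERFECT":
        await message.add_reaction("✅")  # input is already natural
        return "reacted"
    await message.reply(result)  # send the corrected sentence
    return "replied"
```

The key design point the diagram captures is that every failure branch collapses to silence, so the bot never spams the channel with error messages.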
- A user sends a message in the configured Discord channel.
- The bot ignores its own messages and anything outside the target channel.
- The pipeline collects compact reviewed context from recent messages:
  - messages approved by the bot with ✅
  - messages previously corrected by the bot
- The current sentence and context are sent to the LLM with instructions to preserve meaning, speaker, and sentence type.
- The final result is handled like this:
  - `PERFECT` → add ✅ reaction
  - `ERROR` or empty output → do nothing
  - corrected sentence → reply to the original message
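One way the compact context step might look. The function and constant names below are hypothetical, but the limits mirror the values documented in this README (5 reviewed messages, 220 characters each, 1200 characters total):

```python
# Hedged sketch of compact context assembly; identifiers are illustrative,
# but the limits mirror the values listed elsewhere in this README.
MAX_REVIEWED_MESSAGES = 5  # most recent reviewed messages kept
MAX_MESSAGE_LEN = 220      # per-message truncation
MAX_CONTEXT_LEN = 1200     # total context budget

def build_reviewed_context(history):
    """history: newest-first list of (text, was_reviewed) tuples."""
    picked = []
    for text, was_reviewed in history:
        if not was_reviewed:
            continue  # only ✅-approved or previously corrected messages
        picked.append(text[:MAX_MESSAGE_LEN])
        if len(picked) >= MAX_REVIEWED_MESSAGES:
            break
    # Oldest-first reads more naturally as conversation context.
    context = "\n".join(reversed(picked))
    return context[:MAX_CONTEXT_LEN]
```

Truncating per message before capping the total keeps one long message from crowding everything else out of the context window.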
```text
fluentify-bot/
├── main.py                 # Starts the health-check thread and Discord bot
├── keep_alive.py           # Flask app for uptime / health checks
├── .env.default            # Example environment variables
├── requirements.in         # Runtime dependency input
├── requirements.txt        # Pinned runtime dependencies
├── dev-requirements.in     # Development dependency input
├── dev-requirements.txt    # Pinned development dependencies
├── pytest.ini              # Pytest configuration
├── fluentify/
│   ├── config.py           # Env loading, model settings, limits, fallbacks
│   ├── core.py             # Text normalization and output sanitization utilities
│   ├── discord_app.py      # Discord client and event handlers
│   └── pipeline.py         # Context building, LLM calls, correction pipeline
└── tests/
    ├── conftest.py         # Test fixtures and helpers
    ├── test_core_rules.py  # Rule and normalization tests
    ├── test_pipeline.py    # Context and message-processing tests
    └── test_regressions.py # Regression tests for edge cases
```
Before running the bot, make sure you have:
- Python 3.11 or newer
- A Discord bot token
- A Groq API key
Copy `.env.default` to `.env` and fill in your real credentials:

```env
DISCORD_BOT_TOKEN=<YOUR_DISCORD_BOT_TOKEN>
LLM_API_KEY=<YOUR_LLM_API_KEY>
LLM_API_URL=https://api.groq.com/openai/v1
```

`DISCORD_BOT_TOKEN` and `LLM_API_KEY` are required. `LLM_API_URL` appears in the example file, but the current code does not actively use it.
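A minimal sketch of how these variables might be loaded. This is an assumption, not the actual `fluentify/config.py`: in particular, `load_dotenv` presumes the `python-dotenv` package.

```python
import os

# Rough sketch of env loading; the real fluentify/config.py may differ.
# load_dotenv assumes the python-dotenv package is installed.
from dotenv import load_dotenv

load_dotenv()  # read variables from .env in the working directory

DISCORD_BOT_TOKEN = os.environ["DISCORD_BOT_TOKEN"]  # required; KeyError if missing
LLM_API_KEY = os.environ["LLM_API_KEY"]              # required; KeyError if missing
# Present in .env.default but, per the note above, not used by current code.
LLM_API_URL = os.getenv("LLM_API_URL", "https://api.groq.com/openai/v1")
```

Using `os.environ[...]` for the required values makes a missing credential fail loudly at startup rather than mid-conversation.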
```bash
git clone <your-repository-url>
cd fluentify-bot
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
pip install -r requirements.txt
```

Then create your `.env` file:

```bash
cp .env.default .env
```

Run the bot:

```bash
python main.py
```

On startup:
- Flask starts a small web server on port `8000`
- the Discord client logs in with your bot token
- the bot begins listening for messages in the configured channel
Expected console output:
```text
✅ Logged in as: <your bot user>
Fluentify is now running!
```
The bot currently operates only in the Discord channel named `english-chat`. This is controlled by `TARGET_CHANNEL_NAME` in `fluentify/config.py`.
The current code includes these built-in controls:
- Temperature: `0.2`
- Timeout per LLM call: `12` seconds
- Max context length: `1200` characters
- Max stored history message length: `220` characters
- Max reviewed history messages: `5`
- LLM concurrency limit: `3`
It also attempts multiple fallback models in sequence when a timeout, rate limit, or other API issue occurs.
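The fallback behavior could look roughly like this. The model names and the `RateLimitError` stand-in are placeholders (the real chain and exception types live in `fluentify/config.py` and the Groq client), but the 12-second timeout matches the value listed above:

```python
import asyncio

# Illustrative fallback loop; model names and the RateLimitError stand-in
# are placeholders, not the project's real identifiers.
FALLBACK_MODELS = ["primary-model", "fallback-model-1", "fallback-model-2"]
LLM_TIMEOUT_SECONDS = 12

class RateLimitError(Exception):
    """Stand-in for the provider's rate-limit exception."""

async def correct_with_fallback(call_model, sentence):
    """Try each model in order until one answers within the timeout."""
    for model in FALLBACK_MODELS:
        try:
            return await asyncio.wait_for(
                call_model(model, sentence), timeout=LLM_TIMEOUT_SECONDS
            )
        except (asyncio.TimeoutError, RateLimitError):
            continue  # timeout or rate limit: move on to the next model
    return "ERROR"  # every model failed; the pipeline stays silent
```

A semaphore around the call site (sized to the concurrency limit of `3` above) would cap simultaneous LLM requests; it is omitted here to keep the sketch short.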
Install development dependencies:
```bash
pip install -r dev-requirements.txt
```

Run the test suite:

```bash
pytest
```

The tests cover:
- normalization and sanitization helpers
- reviewed-history context assembly
- message approval and reply-correction behavior
- timeout and rate-limit fallback behavior
- regressions to reduce over-correction
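A test in this suite might look something like the sketch below. The `sanitize_output` helper and both test names are hypothetical, shown only to illustrate the pattern of pinning down sanitization and over-correction regressions:

```python
# Hypothetical pytest-style tests; sanitize_output is a stand-in for a
# core.py helper, not the project's real function.
def sanitize_output(raw):
    """Strip quoting and label wrappers an LLM sometimes adds."""
    text = raw.strip().strip('"')
    for prefix in ("Corrected:", "Correction:"):
        if text.startswith(prefix):
            text = text[len(prefix):].strip()
    return text

def test_sanitizer_strips_llm_wrappers():
    assert sanitize_output('"Corrected: I went home."') == "I went home."

def test_already_natural_sentence_is_left_alone():
    # Regression guard: unchanged echoes must pass through untouched,
    # so the pipeline can treat them as PERFECT upstream.
    assert sanitize_output("Sounds good to me!") == "Sounds good to me!"
```

Stubbing the LLM call in pipeline tests (e.g. with a fixture from `conftest.py`) keeps the suite fast and deterministic.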
This project includes a lightweight Flask server in `keep_alive.py` so platforms such as Koyeb can perform health checks.
- Health endpoint: `/`
- Default port: `8000`
- Response text: `Fluentify Bot is Alive!`
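A minimal sketch of that pattern, assuming Flask is installed. The response text and port match the values documented above; everything else (function names, threading structure) is an assumption rather than the real `keep_alive.py`:

```python
# Minimal keep-alive sketch in the style of keep_alive.py; structure is
# assumed, only the route, port, and response text come from this README.
from threading import Thread

from flask import Flask

app = Flask(__name__)

@app.route("/")
def health():
    return "Fluentify Bot is Alive!"

def keep_alive(port=8000):
    """Run the health-check server in a background daemon thread."""
    Thread(
        target=lambda: app.run(host="0.0.0.0", port=port),
        daemon=True,
    ).start()
```

Running the server in a daemon thread lets `main.py` start it first and then block on the Discord client without the health check keeping the process alive on its own.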
- The bot listens in one hard-coded channel: `english-chat`.
- `spacy` is imported in `fluentify/config.py`, but it does not appear in `requirements.txt` and does not appear to be used elsewhere.
- `LLM_API_URL` exists in `.env.default`, but it is not currently wired into client initialization.
- Context is intentionally compact to reduce token usage and latency.
- Make the target channel configurable via environment variables
- Support slash commands or per-server configuration
- Add structured logging and correction metrics
- Improve multi-user context handling further
- Make provider and model selection configurable
- Add Docker support for simpler deployment
This README is designed to help new contributors understand the project quickly, run it locally, and see the core message-processing flow at a glance.