SafeLLM is a Flask-based API that screens user queries, ensuring they are non-toxic and appropriate before answering them. It uses pre-trained NLP models for query classification, toxicity detection, and query rewriting.
- 🛡️ Detects and blocks toxic or inappropriate content.
- 🧠 Classifies query intents and restricts certain types of queries.
- ✍️ Rewrites queries for better clarity.
- 🤖 Generates responses to rewritten queries.
- Flask: Backend framework for the API.
- Hugging Face Transformers library: For text classification and query rewriting.
- Detoxify: For toxicity detection in text.
Processes user queries by:
- Checking for toxicity.
- Classifying intent.
- Rewriting queries for clarity.
- Generating a response.
```json
{
  "query": "Your input query here"
}
```

🚫 If the query contains toxicity:

```json
{
  "query": "Query contains inappropriate content."
}
```

Status Code: 403

☣️ If the query is restricted:

```json
{
  "query": "Restricted query detected."
}
```

Status Code: 403

✅ If the query is valid:

```json
{
  "query": "LLM Response to: Your clarified query"
}
```

Status Code: 200
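The four processing steps and the response codes above can be sketched as a single Flask route. This is a minimal illustration, not the repository's actual implementation: `is_toxic`, `classify_intent`, `rewrite_query`, `generate_response`, and the `RESTRICTED_INTENTS` labels are hypothetical placeholders for the Detoxify and Transformers calls the project uses.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical intent labels that the API refuses to answer
RESTRICTED_INTENTS = {"self-harm", "illegal-activity"}

def is_toxic(text):
    # Placeholder for a Detoxify check, e.g. comparing a
    # predicted toxicity score against a threshold.
    return "badword" in text.lower()

def classify_intent(text):
    # Placeholder for a Transformers text-classification pipeline.
    return "general"

def rewrite_query(text):
    # Placeholder for a query-rewriting model; here we just tidy the text.
    return text.strip().capitalize()

def generate_response(text):
    # Placeholder for the downstream LLM call.
    return f"LLM Response to: {text}"

@app.route("/query", methods=["POST"])
def handle_query():
    query = request.get_json(force=True).get("query", "")
    if is_toxic(query):
        return jsonify({"query": "Query contains inappropriate content."}), 403
    if classify_intent(query) in RESTRICTED_INTENTS:
        return jsonify({"query": "Restricted query detected."}), 403
    clarified = rewrite_query(query)
    return jsonify({"query": generate_response(clarified)}), 200
```

Each guard returns early with a 403 and a message in the `query` field, matching the response shapes documented above; only queries that pass both checks reach the rewriting and generation steps.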
- Python 3.8 or later
- pip package manager
- Clone the repository:

  ```bash
  git clone git@github.com:Clecotech/SafeLLM.git
  cd SafeLLM
  ```

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Run the application:

  ```bash
  python app.py
  ```

  The app will run on http://localhost:5001 🌐
You can set up a virtual environment for this project:

```bash
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install -r requirements.txt
```
```bash
curl -X POST http://localhost:5001/query \
  -H "Content-Type: application/json" \
  -d '{"query": "Explain the process of natural language processing."}'
```