This is an API that checks whether a given text is toxic. It also reports the type of toxicity: toxic, severe toxic, obscene, threat, insult, or identity hate. The models were trained on the dataset from the Kaggle Toxic Comment Classification Challenge.
The dataset can be found here
Two models were trained:
- Pipeline of TF-IDF vectorizer and LinearSVC
- Pipeline of TF-IDF vectorizer and Logistic Regression
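The two pipelines above can be sketched with scikit-learn as follows. This is an illustrative reconstruction, not the repo's actual training code; the variable names and the tiny example data are assumptions, and the real models were fit on the Kaggle dataset, one binary classifier per toxicity category.

```python
# Illustrative sketch of the two pipelines: TF-IDF features feeding either
# a LinearSVC or a LogisticRegression classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

svc_pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(stop_words="english")),
    ("clf", LinearSVC()),
])

logreg_pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(stop_words="english")),
    ("clf", LogisticRegression(max_iter=1000)),
])

# Tiny made-up sample just to show the fit/predict flow; labels here mean
# 1 = toxic, 0 = not toxic for a single category.
texts = ["you are awful", "have a nice day", "I will hurt you", "great work"]
labels = [1, 0, 1, 0]
svc_pipeline.fit(texts, labels)
prediction = svc_pipeline.predict(["you are awful"])[0]
```

In practice, one such pipeline would be fit per label (toxic, severe toxic, obscene, threat, insult, identity hate), since the challenge is a multi-label problem.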
- `GET /classify/<string:comment_id>` : Returns the text of the given `comment_id` from the database. `comment_id` is passed in the URL. If `comment_id` doesn't exist, the client is informed.
- `POST /classify/<string:comment_id>` : Adds a `comment` to the database. If `comment_id` already exists, the client is informed.
- `PUT /classify/<string:comment_id>` : Updates the `comment` of the given `comment_id`. If `comment_id` doesn't exist, the client is informed.
- `DELETE /classify/<string:comment_id>` : Deletes the text of the given `comment_id`. If `comment_id` doesn't exist, the client is informed.
- `GET /all_data` : Returns the list of all texts/comments in the database.
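The routes above can be sketched as a minimal Flask app. This is a hedged reconstruction assuming Flask and an in-memory dict in place of the real database; the handler names and JSON shapes are illustrative, not the repo's actual implementation.

```python
# Minimal sketch of the CRUD routes described above, with a dict standing in
# for the database (assumption: the real app persists comments elsewhere).
from flask import Flask, jsonify, request

app = Flask(__name__)
comments = {}  # comment_id -> comment text

@app.route("/classify/<string:comment_id>", methods=["GET"])
def get_comment(comment_id):
    # Inform the client if comment_id doesn't exist.
    if comment_id not in comments:
        return jsonify({"error": "comment_id doesn't exist"}), 404
    return jsonify({"comment_id": comment_id, "comment": comments[comment_id]})

@app.route("/classify/<string:comment_id>", methods=["POST"])
def post_comment(comment_id):
    # Inform the client if comment_id already exists.
    if comment_id in comments:
        return jsonify({"error": "comment_id already exists"}), 409
    comments[comment_id] = request.get_json()["comment"]
    return jsonify({"comment_id": comment_id}), 201

@app.route("/classify/<string:comment_id>", methods=["PUT"])
def put_comment(comment_id):
    if comment_id not in comments:
        return jsonify({"error": "comment_id doesn't exist"}), 404
    comments[comment_id] = request.get_json()["comment"]
    return jsonify({"comment_id": comment_id})

@app.route("/classify/<string:comment_id>", methods=["DELETE"])
def delete_comment(comment_id):
    if comment_id not in comments:
        return jsonify({"error": "comment_id doesn't exist"}), 404
    del comments[comment_id]
    return jsonify({"deleted": comment_id})

@app.route("/all_data", methods=["GET"])
def all_data():
    # Return every stored comment keyed by its comment_id.
    return jsonify(comments)
```

A quick exercise of the routes with Flask's built-in test client: POST a comment, GET it back, then observe the 404 for a missing `comment_id`.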