A minimal FastAPI service for image moderation (binary NSFW detection) using prithivMLmods/Nsfw_Image_Detection_OSS.
Create and activate a virtual environment:

```shell
python3 -m venv venv
source venv/bin/activate
```

On Windows (PowerShell):

```shell
python -m venv venv
.\venv\Scripts\Activate.ps1
```

Upgrade pip and install the dependencies:

```shell
python -m pip install --upgrade pip
python -m pip install --no-input torch torchvision fastapi uvicorn pillow transformers
```

Alternative: if you prefer dependency pinning from a file, you can generate one first with:

```shell
python -m pip freeze > requirements.txt
```

Start the server:

```shell
uvicorn app:app --host 0.0.0.0 --port 8000
```

The service will be available at http://localhost:8000.
On first startup, the model weights are downloaded on demand, so the first run may take a bit longer.
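The `uvicorn app:app` command above assumes an `app.py` module that exposes the FastAPI application. The following is a minimal sketch of what such a module could look like; the `/classify` endpoint name and response shape follow the examples below, but treat this as an illustration rather than the exact implementation:

```python
# app.py -- illustrative sketch of the moderation service, assuming the
# dependencies installed above (torch, transformers, fastapi, pillow).
import io

import torch
from fastapi import FastAPI, File, UploadFile
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

MODEL_ID = "prithivMLmods/Nsfw_Image_Detection_OSS"

# Loaded once at startup; the weights are downloaded on the first run.
processor = AutoImageProcessor.from_pretrained(MODEL_ID)
model = AutoModelForImageClassification.from_pretrained(MODEL_ID)
model.eval()

app = FastAPI()

@app.post("/classify")
async def classify(file: UploadFile = File(...)):
    # Decode the uploaded bytes into an RGB image.
    image = Image.open(io.BytesIO(await file.read())).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits, dim=-1)[0]
    predictions = sorted(
        (
            {"label": model.config.id2label[i], "score": round(p.item(), 4)}
            for i, p in enumerate(probs)
        ),
        key=lambda d: d["score"],
        reverse=True,
    )
    return {"filename": file.filename, "predictions": predictions}
```
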
Example request:

```shell
curl -X 'POST' \
  'http://localhost:8000/classify' \
  -H 'accept: application/json' \
  -H 'Content-Type: multipart/form-data' \
  -F 'file=@TRUcIH-U3IZYUqVF.avif;type=image/avif'
```

Expected response shape:
```json
{
  "filename": "TRUcIH-U3IZYUqVF.avif",
  "predictions": [
    {
      "label": "NSFW",
      "score": 0.9461
    },
    {
      "label": "SFW",
      "score": 0.0539
    }
  ]
}
```

This API uses the Hugging Face model:
prithivMLmods/Nsfw_Image_Detection_OSS

- Labels: Class 0: SFW, Class 1: NSFW
- Task: Image classification
- Framework: Transformers (AutoImageProcessor, AutoModelForImageClassification)
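The label/score pairs in the response correspond to a softmax over the classifier's two output logits, with Class 0 mapped to SFW and Class 1 to NSFW. A stdlib-only sketch of that post-processing step (the logit values below are made up for illustration; a real run uses the model's output):

```python
import math

# Label mapping from the model card: Class 0 -> SFW, Class 1 -> NSFW.
ID2LABEL = {0: "SFW", 1: "NSFW"}

def logits_to_predictions(logits):
    """Convert raw classifier logits to sorted label/score pairs."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    preds = [
        {"label": ID2LABEL[i], "score": round(e / total, 4)}
        for i, e in enumerate(exps)
    ]
    # Highest-confidence label first, matching the API response.
    return sorted(preds, key=lambda d: d["score"], reverse=True)

# Hypothetical logits for illustration only.
print(logits_to_predictions([-1.2, 1.6]))
```
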
The API is designed to run correctly on CPU-only environments and does not require a GPU.
For practical usage, provision memory according to your traffic pattern: the model is compact and inference is lightweight for a vision classifier, so small to moderate concurrent workloads run well on standard CPU instances.
This service is intended for image moderation workloads such as content filtering, platform safety checks, dataset cleaning, and enterprise policy enforcement.
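For content filtering, the binary scores are typically reduced to an allow/block decision against a tunable threshold. A small sketch of such a policy check over the `predictions` list returned by `/classify` (the 0.8 threshold is an arbitrary example, not a recommendation):

```python
def is_blocked(predictions, threshold=0.8):
    """Return True if the NSFW score meets or exceeds the threshold.

    `predictions` is the list of {"label", "score"} dicts returned by
    the /classify endpoint; missing NSFW entries count as 0.0.
    """
    nsfw_score = next(
        (p["score"] for p in predictions if p["label"] == "NSFW"), 0.0
    )
    return nsfw_score >= threshold

# Example with the response shown above.
preds = [{"label": "NSFW", "score": 0.9461}, {"label": "SFW", "score": 0.0539}]
print(is_blocked(preds))  # blocked at the default 0.8 threshold
```

Keeping the threshold configurable lets each deployment trade false positives against false negatives to match its own moderation policy.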