Our solution is a browser extension / platform plugin that detects and flags harmful or low-quality content on short-form video platforms, with initial support for Instagram Reels and/or YouTube Shorts.
The tool can be toggled on or off by the user. It extracts features from video, audio, metadata, and engagement patterns to predict whether content is AI-generated and whether it is appropriate for younger audiences. With AI-generated videos becoming increasingly prevalent on social media, children are exposed to a growing volume of synthetic content that can be misleading, age-inappropriate, or simply unvetted. Our tool helps parents and guardians regulate that exposure by automatically identifying and flagging AI-generated Reels or Shorts before they reach young viewers.
- Eric Azayev
- Samira Maria
- Neha Sudarshan
- Preet Patel
```bash
cd backend
cp .env.example .env    # edit if needed
docker-compose up -d    # starts API + worker + Redis + Postgres
```

The API will be live at http://localhost:8000, with interactive docs at http://localhost:8000/docs.
Test it:

```bash
curl -X POST http://localhost:8000/api/v1/analyze/sync \
  -H "Content-Type: application/json" \
  -d '{"media_url": "https://upload.wikimedia.org/wikipedia/commons/thumb/4/47/PNG_transparency_demonstration_1.png/280px-PNG_transparency_demonstration_1.png", "media_type": "image"}'
```

- Open Chrome → chrome://extensions
- Enable Developer Mode (top right)
- Click Load Unpacked
- Select the extension/ folder
- Browse to any image-heavy page — badges will appear
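The curl test above can also be scripted. Here is a minimal stdlib-only Python client sketch; the response schema is not shown here, so check http://localhost:8000/docs for the actual fields:

```python
# Minimal client for the sync analyze endpoint (same request as the
# curl example above). Assumes the backend from the setup steps is
# running locally; response fields are whatever /docs declares.
import json
from urllib.request import Request, urlopen

API_URL = "http://localhost:8000/api/v1/analyze/sync"

def build_payload(media_url: str, media_type: str = "image") -> bytes:
    """Encode the JSON request body the endpoint expects."""
    return json.dumps({"media_url": media_url, "media_type": media_type}).encode()

def analyze(media_url: str, media_type: str = "image") -> dict:
    """POST one media item for synchronous analysis and return the JSON reply."""
    req = Request(
        API_URL,
        data=build_payload(media_url, media_type),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urlopen(req) as resp:  # requires the docker-compose stack to be up
        return json.load(resp)
```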
```
Browser Extension (content.js)
        ↓ POST /api/v1/analyze/sync
FastAPI (main.py)
        ↓ runs inference
HuggingFace Model (umm-maybe/AI-image-detector)
        ↓ returns score
Extension injects badge with verdict
```
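The last step of the flow, turning the model's score into a badge verdict, might look like the following sketch. The 0.5 threshold and the badge strings are illustrative assumptions, not the project's exact values:

```python
# Hypothetical score-to-verdict mapping for the final step of the flow.
# Threshold and label text are assumptions for illustration only.

def verdict(ai_score: float, threshold: float = 0.5) -> dict:
    """Turn the detector's AI-probability score into a badge payload."""
    flagged = ai_score >= threshold
    return {
        "flagged": flagged,
        "label": "likely AI-generated" if flagged else "likely authentic",
        # confidence in whichever verdict was chosen
        "confidence": round(ai_score if flagged else 1.0 - ai_score, 2),
    }
```

In this sketch the extension only needs the returned dict to render the badge, which keeps all model-specific logic on the backend.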
Edit backend/app/services/detector.py and change MODEL_ID:

```python
# Good alternatives to benchmark:
# "Organika/sdxl-detector" — better on Stable Diffusion outputs
# "haywoodsloan/ai-image-detector" — ensemble approach
# "prithivMLmods/Deep-Fake-Detector-Model" — face-focused
```
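The easiest way to choose between these MODEL_ID candidates is a small labeled benchmark. Below is a sketch of the scoring harness; the `predict` callable stands in for a call into detector.py, and its signature is an assumption for illustration:

```python
# Tiny benchmark harness for comparing MODEL_ID candidates.
# `predict(path) -> bool` is a hypothetical wrapper around detector.py
# that returns True when an image is judged AI-generated.
from typing import Callable, Iterable, Tuple

def accuracy(predict: Callable[[str], bool],
             labeled: Iterable[Tuple[str, bool]]) -> float:
    """Fraction of (image_path, is_ai) pairs the detector gets right."""
    pairs = list(labeled)
    hits = sum(predict(path) == is_ai for path, is_ai in pairs)
    return hits / len(pairs)

# Usage with a stubbed detector that always answers "AI":
demo = [("a.png", True), ("b.png", False), ("c.png", True)]
print(accuracy(lambda p: True, demo))  # 2 of 3 correct
```

Run the same labeled set through each candidate model and keep the MODEL_ID with the best score.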