bknight44/smart_Image_Labeler
📷 Video Frame Extractor & YOLO Pre-Labeler


Welcome to the Video Frame Extractor & YOLO Pre-Labeler! This Python script automates frame extraction, person detection with YOLOv8, and Pascal VOC XML annotation generation. 🚀

🔧 Features

  • 🎥 Frame Extraction: Extracts frames at a specified interval.
  • 🕵️ Object Detection: Detects people using YOLOv8.
  • 📝 Annotation Generation: Converts detections to Pascal VOC XML.
  • 🔍 Manual Verification: Guides you to use imgLabeler for refining annotations.

📋 Prerequisites

  • 🐍 Python 3.8+
  • 📸 OpenCV (opencv-python) for frame extraction
  • 🧠 YOLOv8 model weights (auto-downloaded by ultralytics)
  • 📌 imgLabeler (optional, for manual annotation refinement)

Install the dependencies:

```bash
pip install opencv-python ultralytics
```

🚀 Getting Started

  1. Clone the Repository

```bash
git clone https://github.com/your-username/your-repo-name.git
cd your-repo-name
```

Replace your-username and your-repo-name with your GitHub username and repository name.

  2. Install Dependencies

```bash
pip install opencv-python ultralytics
```

  3. Prepare Your Video
     Place your input video (e.g., your_video.mp4) in the inputs/ directory, then update video_path in script.py to point to your video file.

  4. Run the Script
     Run the script to extract frames, detect people, and generate annotations:

```bash
python script.py
```
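The README does not show script.py itself, so here is a minimal sketch of the detection step, assuming the ultralytics YOLOv8 API. The function names (detect_people, filter_person_boxes) and the 0.5 confidence threshold are illustrative assumptions, not the script's actual code:

```python
from typing import List, Tuple

PERSON_CLASS_ID = 0  # COCO class index for "person"

def filter_person_boxes(
    detections: List[Tuple[int, float, Tuple[float, float, float, float]]],
    min_conf: float = 0.5,
) -> List[Tuple[float, float, float, float]]:
    """Keep only confident 'person' boxes from (class_id, conf, xyxy) tuples."""
    return [xyxy for cls, conf, xyxy in detections
            if cls == PERSON_CLASS_ID and conf >= min_conf]

def detect_people(image_path: str, min_conf: float = 0.5):
    """Run YOLOv8 on one frame and return person boxes in pixel xyxy coordinates.

    ultralytics is imported lazily so the pure helper above stays usable
    without it installed.
    """
    from ultralytics import YOLO  # requires `pip install ultralytics`
    model = YOLO("yolov8n.pt")    # weights auto-download on first use
    result = model(image_path)[0]
    detections = [
        (int(box.cls), float(box.conf), tuple(box.xyxy[0].tolist()))
        for box in result.boxes
    ]
    return filter_person_boxes(detections, min_conf)
```

Filtering to class 0 is what restricts the pre-labels to people; widening that set would pre-label other COCO classes as well.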

⚙️ Configuration

Modify these variables in script.py:

  • video_path: Path to your input video (e.g., "inputs/your_video.mp4").
  • output_dir: Directory for frames and annotations (e.g., "output/").
  • interval: Frame extraction interval in frames (default: 30).

Example:

```python
video_path = "inputs/your_video.mp4"
output_dir = "output"
main(video_path, output_dir)
```
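To make the interval setting concrete, here is a minimal sketch of an extraction loop, assuming an extract_frames() signature like the one the Tips section mentions; the exact structure of the real script may differ. cv2 is imported inside the function so the index helper is usable on its own:

```python
import os

def frame_indices(total_frames: int, interval: int) -> list:
    """Indices of the frames kept when sampling every `interval`-th frame."""
    return list(range(0, total_frames, interval))

def extract_frames(video_path: str, output_dir: str, interval: int = 30) -> int:
    """Save every `interval`-th frame as output_dir/frame_NNNN.jpg; return count."""
    import cv2  # requires `pip install opencv-python`
    os.makedirs(output_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    saved = 0
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of video or read error
            break
        if idx % interval == 0:
            cv2.imwrite(os.path.join(output_dir, f"frame_{saved:04d}.jpg"), frame)
            saved += 1
        idx += 1
    cap.release()
    return saved
```

With the default interval of 30, a 30 fps video yields roughly one saved frame per second.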

  5. Output
     The script generates:

  • 🖼️ Frames (.jpg) in output/.
  • 📄 YOLO annotations (.txt) for each frame.
  • 📃 Pascal VOC XML annotations (.xml) for each frame.
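The two annotation formats above encode the same boxes differently: YOLO lines are center-based and normalized to [0, 1], while Pascal VOC XML stores absolute pixel corners. A stdlib-only sketch of both conversions (the helper names are illustrative, not the script's actual code):

```python
import xml.etree.ElementTree as ET

def to_yolo_line(xyxy, img_w, img_h, class_id=0):
    """Convert a pixel (x1, y1, x2, y2) box to a YOLO .txt line:
    'class cx cy w h', all coordinates normalized to [0, 1]."""
    x1, y1, x2, y2 = xyxy
    cx = (x1 + x2) / 2 / img_w
    cy = (y1 + y2) / 2 / img_h
    w = (x2 - x1) / img_w
    h = (y2 - y1) / img_h
    return f"{class_id} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"

def to_voc_xml(filename, img_w, img_h, boxes, label="person"):
    """Build a minimal Pascal VOC annotation for a list of pixel xyxy boxes."""
    root = ET.Element("annotation")
    ET.SubElement(root, "filename").text = filename
    size = ET.SubElement(root, "size")
    ET.SubElement(size, "width").text = str(img_w)
    ET.SubElement(size, "height").text = str(img_h)
    ET.SubElement(size, "depth").text = "3"
    for x1, y1, x2, y2 in boxes:
        obj = ET.SubElement(root, "object")
        ET.SubElement(obj, "name").text = label
        bnd = ET.SubElement(obj, "bndbox")
        ET.SubElement(bnd, "xmin").text = str(int(x1))
        ET.SubElement(bnd, "ymin").text = str(int(y1))
        ET.SubElement(bnd, "xmax").text = str(int(x2))
        ET.SubElement(bnd, "ymax").text = str(int(y2))
    return ET.tostring(root, encoding="unicode")
```

Writing the VOC XML alongside each frame is what makes the output directly loadable in labelImg for the manual verification step below.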

🔍 Manual Annotation with imgLabeler

To refine the pre-generated annotations, use imgLabeler (labelImg).

How to Get imgLabeler

  1. Clone the Repository:

```bash
git clone https://github.com/tzutalin/labelImg.git
cd labelImg
```

  2. Install and run:

```bash
pip install -r requirements.txt
python labelImg.py
```

  3. Use imgLabeler:
     • Open labelImg and select Open Dir to load the output/ directory.
     • Verify and edit the bounding boxes for people.
     • Save your changes to update the XML files.

See the labelImg GitHub repository for more details.

📂 Project Structure

```
your-repo-name/
├── inputs/              # Input videos
│   └── your_video.mp4
├── output/              # Frames and annotations
│   ├── frame_0000.jpg
│   ├── frame_0000.txt
│   ├── frame_0000.xml
│   └── ...
├── script.py            # Main script
└── README.md            # This file
```

💡 Tips

  • 🧠 Use yolov8m.pt or yolov8l.pt for higher accuracy (edit model = YOLO('yolov8n.pt') in script.py).
  • ⏲️ Adjust interval in extract_frames() to control frame extraction frequency.
  • 🔎 Always verify YOLO detections in imgLabeler for accuracy.

🐛 Troubleshooting

  • Video not found: Ensure video_path points to an existing file.
  • Module not found: Install dependencies with pip install opencv-python ultralytics.
  • YOLO model issues: Ensure internet access for the first-run model download.
  • imgLabeler issues: Check the labelImg documentation.

🤝 Contributing

Contributions are welcome! Open issues or pull requests to improve the script or docs.

📜 License

This project is licensed under the MIT License. See the LICENSE file.

Happy labeling! 🎉

About

This repo is an open source collection of tools to help label computer vision images and videos
