Core dataset management and preprocessing tools:
- `record_rps_video.py` - Record webcam videos of hand gestures
- `preprocess_and_export_landmarks.py` - Extract MediaPipe hand landmarks from videos
- `create_video_splits.py` - Generate train/test/validation dataset splits
- `features/` - Exported landmark coordinates in `.npz` format
- `features_angles/` - Hand joint angle features derived from landmarks
- `features_coords/` - Processed 3D coordinate features
- `landmarks/` - Raw landmark data extracted from videos
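As an illustration of working with the exported `.npz` features, here is a minimal sketch. The array name `landmarks`, the `(frames, 21, 3)` shape (21 MediaPipe hand landmarks with x, y, z per frame), and the file name are assumptions for illustration, not the project's actual schema:

```python
import numpy as np

# Hypothetical clip: 30 frames of 21 hand landmarks, each with (x, y, z).
landmarks = np.random.rand(30, 21, 3).astype(np.float32)
np.savez_compressed("example_clip.npz", landmarks=landmarks, label="rock")

# Load it back the same way the feature files in features/ could be consumed.
data = np.load("example_clip.npz")
print(data["landmarks"].shape)  # (30, 21, 3)
print(str(data["label"]))       # rock
```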
DTW-based classification experiments:
- `dtw.ipynb` - Dynamic Time Warping classification experiments
- `check.ipynb` - Data validation and analysis
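As background for the DTW experiments, a minimal pure-Python sketch of the classic dynamic-programming DTW distance, using an absolute-difference cost on 1-D sequences (the notebook itself may use a library implementation and multidimensional landmark features):

```python
def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D sequences.

    Fills an (n+1) x (m+1) cost table where each cell holds the cheapest
    alignment cost of the prefixes a[:i] and b[:j].
    """
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the best of: insertion, deletion, or match/substitution.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

print(dtw_distance([1, 2, 3], [1, 2, 3]))  # 0.0 (identical sequences)
print(dtw_distance([0, 0], [1, 1]))        # 2.0
```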
Video processing and data alignment tools:
- `video_frames_to_img.py` - Extract individual frames from videos
- `data_check.ipynb` - Validate landmark data at cropped video frames
- Set up the environment:

  ```bash
  python -m venv rps_venv
  source rps_venv/bin/activate  # On Windows: rps_venv\Scripts\activate
  pip install -r my_rps_dataset/requirements.txt
  ```
- Record gesture videos:

  ```bash
  cd my_rps_dataset
  python record_rps_video.py rock
  python record_rps_video.py paper
  python record_rps_video.py scissors
  ```

- Extract landmarks and features:

  ```bash
  python preprocess_and_export_landmarks.py
  ```
- Create dataset splits:

  ```bash
  python create_video_splits.py --data_dir features/ --out_file splits.json
  ```
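The exact schema of `splits.json` is defined by `create_video_splits.py`. As a rough illustration of one way such splits can be generated, here is a deterministic sketch; the `make_splits` helper, the ratios, and the file names are hypothetical, not the script's actual behavior:

```python
import json
import random

def make_splits(files, train=0.7, val=0.15, seed=42):
    """Deterministically shuffle file names and cut them into train/val/test."""
    files = sorted(files)                 # sort first so shuffling is reproducible
    random.Random(seed).shuffle(files)    # seeded shuffle, independent of call order
    n = len(files)
    n_train = int(n * train)
    n_val = int(n * val)
    return {
        "train": files[:n_train],
        "val": files[n_train:n_train + n_val],
        "test": files[n_train + n_val:],
    }

# Hypothetical clip names standing in for files under features/.
clips = [f"rock_{i:03d}.npz" for i in range(10)]
splits = make_splits(clips)
print(json.dumps({k: len(v) for k, v in splits.items()}))  # {"train": 7, "val": 1, "test": 2}
```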