DenDen047/pytorch-template

PyTorch-based Project Template

Setup

Docker

$ docker compose up -d --build
$ docker compose down && docker compose up -d --build && docker exec -it [container_name] bash
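For orientation, a compose file for this kind of GPU workload might look like the sketch below. The service name, container name, and volume path are illustrative assumptions, not part of this template; the GPU reservation block follows the standard Compose `deploy.resources` syntax.

```yaml
# docker-compose.yml — illustrative sketch (names and paths are assumptions)
services:
  dev:
    build: .
    container_name: pytorch-dev      # would match [container_name] in the commands above
    volumes:
      - .:/workspace
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    tty: true
```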

uv

$ uv python pin 3.11
$ uv venv --python 3.11
$ source .venv/bin/activate
$ uv init

## uv add ...
$ uv add ruff

## run python program
$ uv run python main.py

$ uv lock
## uv sync    # installs everything into .venv
$ uv export --format=requirements.txt > requirements.txt
$ deactivate

or

## copy pyproject.toml and uv.lock
$ uv sync

GPU Cloud Setup

Lambda Cloud

ref: https://lambda.ai/blog/set-up-a-tensorflow-gpu-docker-container-using-lambda-stack-dockerfile

ssh ubuntu@IP_ADDRESS -i ~/.ssh/lambda_cloud
curl -fsSL https://raw.githubusercontent.com/DenDen047/dotfiles/refs/heads/master/setup_scripts/lambda_cloud1.sh | bash
# if the previous step failed
sudo apt-get update && sudo apt-get install -y lambda-stack-cuda && sudo reboot

# after reboot, run the following command
curl -fsSL https://raw.githubusercontent.com/DenDen047/dotfiles/refs/heads/master/setup_scripts/lambda_cloud2.sh | bash

You can easily upload files to the cloud using the FTP/SFTP/SSH Sync Tool extension.

Modal

ref: https://modal.com/docs/guide

modal setup
modal run src/modal_sample.py

Usage

Project Configuration

.
├── README.md            # Project overview and usage instructions
├── conf/                # Experiment configuration files (e.g., parameters.yml, secrets.yml)
├── data/                # Temporary storage for data and intermediate artifacts
├── notebooks/           # Experiment notebooks for JupyterLab
├── pyproject.toml       # Main Python project configuration file (PEP 518 compliant)
├── setup.cfg            # Supplementary settings not yet supported by pyproject.toml
├── specs/               # Specifications and documentation
└── src/                 # Python package code (shared functions, classes, etc.)

Data directory

The data directory is organized into numbered layers, from raw inputs to final reports:

data/
├── 01_raw/              # Original, immutable data from source systems
├── 02_intermediate/     # Partially processed (cleaned/transformed) data
├── 03_primary/          # Canonical datasets for feature engineering
├── 04_feature/          # Engineered features ready for modeling
├── 05_model_input/      # Data prepared specifically for model training
├── 06_models/           # Trained models (e.g., .pkl, .h5 files)
├── 07_model_output/     # Model outputs like predictions or embeddings
└── 08_reporting/        # Reports, visualizations, dashboards, final outputs
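The layered layout above can be bootstrapped with a small helper. The layer names come from the tree above; the function name and `root` default are illustrative, not part of this template.

```python
from pathlib import Path

# Numbered data layers, matching the tree above
LAYERS = [
    "01_raw", "02_intermediate", "03_primary", "04_feature",
    "05_model_input", "06_models", "07_model_output", "08_reporting",
]

def make_data_dirs(root: str = "data") -> list[Path]:
    """Create every layer directory under `root` and return the paths."""
    paths = [Path(root) / layer for layer in LAYERS]
    for p in paths:
        p.mkdir(parents=True, exist_ok=True)
    return paths
```

Running this once at project start keeps the layer names consistent across experiments.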

Development

Git Workflow

This project follows GitHub Flow + Git Worktree, optimized for AI agent collaboration.

main (always deployable)
 ├── feat/add-loss-function        ← short-lived feature branch
 ├── claude/refactor-trainer-a1b2  ← AI agent branch (via worktree)
 └── archive/exp/try-hyperparams-v1  ← preserved for reference only

1. main is the single long-lived branch. It must always be in a working state.
2. All work happens on short-lived branches from main → merged via Pull Request → branch deleted.
3. Archiving: to keep a branch without merging it (failed experiments, etc.), rename it to archive/<original-name>. Archived branches must not be merged into main.
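The archiving step above is just a branch rename. A minimal sketch in a throwaway repository, using the example branch name from the diagram (the demo setup lines exist only to make the snippet self-contained):

```shell
# Demo setup: throwaway repo with one finished experiment branch
set -e
demo=$(mktemp -d) && cd "$demo"
git init -q
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "init"
git branch exp/try-hyperparams-v1

# The actual archiving step: rename instead of merging
git branch -m exp/try-hyperparams-v1 archive/exp/try-hyperparams-v1
git branch --list 'archive/*'
```

If the branch was already pushed, you would also publish the archived name and delete the old one on the remote (e.g. `git push origin archive/<name> :<name>`).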

Working with AI Agents (Worktree)

Git Worktree gives each AI agent an isolated working directory, so you and multiple agents can work in parallel without conflicts.

# You work normally in the repo
git switch -c feat/my-feature

# In another terminal, launch an AI agent in its own worktree
claude --worktree feat/add-augmentation
# → creates .claude/worktrees/feat/add-augmentation/ (isolated from your work)

# Run another agent in parallel — no conflicts
claude --worktree fix/normalize-bug

When the agent finishes: if there are changes, the worktree is kept so you can push and create a PR; if there are no changes, it is cleaned up automatically.

One-time setup:

# .gitignore
.claude/worktrees/

# .worktreeinclude — auto-copy these gitignored files to new worktrees
.env
.env.local
conf/local/**

Quick Reference

Human:  main ── feat/xxx ──→ PR ──→ merge ──→ delete branch
Agent:  main ── [worktree] claude/xxx ──→ PR ──→ merge ──→ auto-clean
Keep:   any branch ──→ archive/branch-name (read-only, never merge)

Commit Message

Commit messages follow the commitlint rules.

Jupyter Notebook on Cursor Editor

Video tutorial: https://www.youtube.com/embed/eOSfeBIBzr0?si=MFjxL47thNJGC1SN

Visualization

  • colormap: turbo

Naming Conventions for Rotation / Transformation Matrices

ref: https://en.wikipedia.org/wiki/Active_and_passive_transformation

To avoid confusion, we distinguish between active and passive interpretations:

  • Active rotation / transformation

    • Variables: R_active, R_apply, R_obj, T_active
    • Meaning: Actively rotating points or vectors (e.g., applying to a point cloud).
  • Passive rotation / transformation

    • Variables: R_world_to_cam, R_frame, R_pose, T_world_to_cam
    • Meaning: Changing the coordinate frame (e.g., camera extrinsics).
  • Other common conventions

    • R_ext, T_ext: Extrinsic parameters (world → camera transformation).
    • R_int, K: Intrinsic parameters (camera matrix).
    • R_wc, R_cw: Shorthand for R_world_to_cam, R_cam_to_world.

Conversion between Active and Passive

  • Rotation matrices

    • R_passive = R_active.T
    • R_active = R_passive.T
  • Transformation matrices (SE(3))

    • T_passive = T_active^-1
    • T_active = T_passive^-1

This ensures consistent handling of both interpretations.
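The identities above can be checked numerically. A minimal sketch with NumPy, using the naming conventions from this section (the specific rotation and translation values are arbitrary examples):

```python
import numpy as np

# Active rotation: rotate points by +90 degrees about the z-axis
theta = np.pi / 2
R_active = np.array([
    [np.cos(theta), -np.sin(theta), 0.0],
    [np.sin(theta),  np.cos(theta), 0.0],
    [0.0,            0.0,           1.0],
])

# Passive counterpart: same geometry, expressed as a change of frame
R_passive = R_active.T

# Embed the active rotation in an SE(3) transform with a translation
T_active = np.eye(4)
T_active[:3, :3] = R_active
T_active[:3, 3] = [1.0, 0.0, 0.0]

# T_passive = T_active^-1
T_passive = np.linalg.inv(T_active)

p = np.array([1.0, 0.0, 0.0])
p_rotated = R_active @ p          # point actively rotated to (0, 1, 0)
p_in_new_frame = R_passive @ p    # same point, coordinates in the rotated frame
```

Since rotation matrices are orthogonal, the transpose in `R_passive = R_active.T` is exactly the inverse, which is why the SE(3) case uses an explicit matrix inverse instead.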

Useful Tools

Reference

Sample Projects

Claude Code
