$ docker compose up -d --build
$ docker compose down && docker compose up -d --build && docker exec -it [container_name] bash
$ uv python pin 3.11
$ uv venv --python 3.11
$ source .venv/bin/activate
$ uv init
## uv add ...
$ uv add ruff
## run python program
$ uv run python main.py
$ uv lock
## uv sync # installs everything into .venv
$ uv export --format=requirements.txt > requirements.txt
$ deactivate

or:
## copy pyproject.toml and uv.lock
$ uv sync
ref: https://lambda.ai/blog/set-up-a-tensorflow-gpu-docker-container-using-lambda-stack-dockerfile
ssh ubuntu@IP_ADDRESS -i ~/.ssh/lambda_cloud

curl -fsSL https://raw.githubusercontent.com/DenDen047/dotfiles/refs/heads/master/setup_scripts/lambda_cloud1.sh | bash
# if failed in the last step
sudo apt-get update && sudo apt-get install -y lambda-stack-cuda && sudo reboot
# after reboot, run the following command
curl -fsSL https://raw.githubusercontent.com/DenDen047/dotfiles/refs/heads/master/setup_scripts/lambda_cloud2.sh | bash

You can easily upload files to the cloud using the FTP/SFTP/SSH Sync Tool extension.
ref: https://modal.com/docs/guide
modal setup
modal run src/modal_sample.py
├── README.md        # Project overview and usage
├── conf/            # Experiment configuration files (e.g., parameters.yml, secrets.yml)
├── data/            # Temporary storage for data and intermediate artifacts
├── notebooks/       # JupyterLab experiment notebooks
├── pyproject.toml   # Main Python project configuration file (PEP 518 compliant)
├── setup.cfg        # Supplementary settings not yet covered by pyproject.toml
├── specs/           # Specifications and documentation
└── src/             # Python package code (shared functions, classes, etc.)

Please see the details here.
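As a sketch, the skeleton above could be bootstrapped with a short script. The directory and file names come from the layout above; `scaffold_project` itself is a hypothetical helper, not part of any tool used here:

```python
from pathlib import Path

# Top-level directories from the project layout above.
PROJECT_DIRS = ["conf", "data", "notebooks", "specs", "src"]
PROJECT_FILES = ["README.md", "pyproject.toml", "setup.cfg"]

def scaffold_project(root="."):
    """Create the skeleton directories and placeholder files under root."""
    root = Path(root)
    for d in PROJECT_DIRS:
        (root / d).mkdir(parents=True, exist_ok=True)
    for f in PROJECT_FILES:
        (root / f).touch(exist_ok=True)
    # Return the created entries for inspection.
    return sorted(p.name for p in root.iterdir())
```

`mkdir(..., exist_ok=True)` and `touch(exist_ok=True)` make the script safe to re-run on an existing checkout.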
data/
├── 01_raw/ # Original, immutable data from source systems
├── 02_intermediate/ # Partially processed (cleaned/transformed) data
├── 03_primary/ # Canonical datasets for feature engineering
├── 04_feature/ # Engineered features ready for modeling
├── 05_model_input/ # Data prepared specifically for model training
├── 06_models/ # Trained models (e.g., .pkl, .h5 files)
├── 07_model_output/ # Model outputs like predictions or embeddings
└── 08_reporting/     # Reports, visualizations, dashboards, final outputs

This project follows GitHub Flow + Git Worktree, optimized for AI agent collaboration.
main (always deployable)
├── feat/add-loss-function ← short-lived feature branch
├── claude/refactor-trainer-a1b2 ← AI agent branch (via worktree)
└── archive/exp/try-hyperparams-v1 ← preserved for reference only
- `main` is the single long-lived branch. It must always be in a working state.
- All work happens on short-lived branches from `main` → merged via Pull Request → branch deleted.
- Archiving: To keep a branch without merging (failed experiments, etc.), rename it to `archive/<original-name>`. Archived branches must not be merged into `main`.
Git Worktree gives each AI agent an isolated working directory, so you and multiple agents can work in parallel without conflicts.
# You work normally in the repo
git switch -c feat/my-feature
# In another terminal, launch an AI agent in its own worktree
claude --worktree feat/add-augmentation
# → creates .claude/worktrees/feat/add-augmentation/ (isolated from your work)
# Run another agent in parallel — no conflicts
claude --worktree fix/normalize-bug

When the agent finishes: changes → keep worktree, push, create PR. No changes → auto-cleaned.
One-time setup:
# .gitignore
.claude/worktrees/
# .worktreeinclude — auto-copy these gitignored files to new worktrees
.env
.env.local
conf/local/**

Human: main ── feat/xxx ──→ PR ──→ merge ──→ delete branch
Agent: main ── [worktree] claude/xxx ──→ PR ──→ merge ──→ auto-clean
Keep: any branch ──→ archive/branch-name (read-only, never merge)
Commit messages follow the commitlint rules:
<iframe width="560" height="315" src="https://www.youtube.com/embed/eOSfeBIBzr0?si=MFjxL47thNJGC1SN" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>

- colormap: `turbo`
ref: https://en.wikipedia.org/wiki/Active_and_passive_transformation
To avoid confusion, we distinguish between active and passive interpretations:
- Active rotation / transformation
  - Variables: `R_active`, `R_apply`, `R_obj`, `T_active`
  - Meaning: Actively rotating points or vectors (e.g., applying to a point cloud).
- Passive rotation / transformation
  - Variables: `R_world_to_cam`, `R_frame`, `R_pose`, `T_world_to_cam`
  - Meaning: Changing the coordinate frame (e.g., camera extrinsics).
- Other common conventions
  - `R_ext`, `T_ext`: Extrinsic parameters (world → camera transformation).
  - `R_int`, `K`: Intrinsic parameters (camera matrix).
  - `R_wc`, `R_cw`: Shorthand for `R_world_to_cam`, `R_cam_to_world`.
- Rotation matrices
  - `R_passive = R_active.T`
  - `R_active = R_passive.T`
- Transformation matrices (SE(3))
  - `T_passive = T_active^-1`
  - `T_active = T_passive^-1`
This ensures consistent handling of both interpretations.
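As a sanity check, both identities above can be verified numerically. A pure-Python sketch, assuming a simple z-axis rotation (helper names like `rot_z` and `se3_inverse` are illustrative, not project code):

```python
import math

def rot_z(theta):
    """3x3 active rotation about the z-axis by theta radians."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0],
            [s,  c, 0.0],
            [0.0, 0.0, 1.0]]

def transpose(M):
    return [list(row) for row in zip(*M)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n))
             for j in range(len(B[0]))] for i in range(n)]

def se3(R, t):
    """Build a 4x4 homogeneous transform [R t; 0 1] from 3x3 R and 3-vector t."""
    T = [row[:] + [ti] for row, ti in zip(R, t)]
    T.append([0.0, 0.0, 0.0, 1.0])
    return T

def se3_inverse(T):
    """Closed-form SE(3) inverse: [R t; 0 1]^-1 = [R.T  -R.T t; 0 1]."""
    R = [row[:3] for row in T[:3]]
    t = [row[3] for row in T[:3]]
    Rt = transpose(R)
    t_inv = [-sum(Rt[i][j] * t[j] for j in range(3)) for i in range(3)]
    return se3(Rt, t_inv)

theta = math.pi / 4
R_active = rot_z(theta)          # actively rotates points by +45 deg
R_passive = transpose(R_active)  # R_passive = R_active.T (frame change)

T_active = se3(R_active, [1.0, 2.0, 3.0])
T_passive = se3_inverse(T_active)  # T_passive = T_active^-1

# Passive undoes active: their composition is the identity.
I4 = matmul(T_passive, T_active)
assert all(abs(I4[i][j] - (1.0 if i == j else 0.0)) < 1e-9
           for i in range(4) for j in range(4))
```

Because rotation matrices are orthogonal, the transpose in `R_passive = R_active.T` is exactly the matrix inverse, which is why the rotation and SE(3) rules are the same statement in two notations.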