Code for the ICRA 2026 paper
HetroD: A High-Fidelity Drone Dataset and Benchmark for Autonomous Driving in Heterogeneous Traffic
Yu-Hsiang Chen, Wei-Jer Chang, Christian Kotulla, Thomas Keutgens, Steffen Runde, Tobias Moers, Christoph Klas, Wei Zhan, Masayoshi Tomizuka, Yi-Ting Chen
National Yang Ming Chiao Tung University, UC Berkeley, fka GmbH
This repository provides compact tools for converting drone-view traffic datasets into formats used by autonomous driving simulation, motion prediction, and planning pipelines.
Current support:
- Drone datasets:
  HetroD, inD, INTERACTION, SinD
- Output formats: ScenarioNet, VBD, Scenario Dreamer
Core capabilities:
- Ego-centric scenario alignment
- Scenario segmentation from long recordings
- Map and trajectory conversion into ScenarioNet-compatible format
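To illustrate the segmentation capability, here is a minimal sketch of splitting a long recording into fixed-length index windows. The function name, the non-overlapping stride, and the drop-last behavior are assumptions for illustration, not the converters' actual API:

```python
def segment_recording(num_frames, window=91, stride=91):
    """Split a long recording into fixed-length index windows.

    Returns (start, end) frame-index pairs; `end` is exclusive.
    Frames past the last full window are dropped.
    """
    starts = range(0, num_frames - window + 1, stride)
    return [(s, s + window) for s in starts]

# e.g. a 300-frame recording cut into non-overlapping 91-frame windows
segments = segment_recording(300)
```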
drone-tool/
├── scenarionet-converter/
│ ├── hetrod_scene.py
│ ├── inD_scene.py
│ ├── interaction_scene.py
│ └── sind_scene.py
├── scenarionet-VBD-converter/
│ └── convert_scenarionet_to_vbd.py
└── scenarionet-scenariodreamer-converter/
└── scenarionet_to_scenariodreamer_waymo.py
Please install the required ScenarioNet and MetaDrive environments first, following their official setup instructions.
Then install the common Python dependencies:
pip install numpy pandas scipy shapely lxml utm tqdm matplotlib omegaconf

The ScenarioNet converters now normalize all supported datasets to a Waymo-like scenario layout:
- 91 timesteps at 10 Hz
- ts = 0.0 ... 9.0
- current_time_index = 10
Source frame rates used internally:
- HetroD: 30 Hz
- inD: 25 Hz
- INTERACTION: 10 Hz
- SinD: 29.97 Hz
The converters compute the correct raw window from the dataset frame rate automatically. You usually do not need to pass --segment_size; if you do, it is ignored when Waymo alignment is enabled.
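A sketch of how the raw window can be derived from the source frame rate (the helper name is hypothetical; the converters' internal rounding for non-integer rates such as SinD's 29.97 Hz may differ):

```python
def raw_window_frames(source_hz, duration_s=9.0):
    """Raw frames spanning a 9-second window (91 steps at 10 Hz),
    counting both endpoint frames."""
    return int(round(duration_s * source_hz)) + 1

raw_window_frames(30)   # HetroD
raw_window_frames(25)   # inD
raw_window_frames(10)   # INTERACTION: already 1:1 with the target layout
```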
Convert HetroD to ScenarioNet:
python scenarionet-converter/hetrod_scene.py \
--root_dir /path/to/HetroD-dataset-v1.1 \
--output_dir /path/to/output

Convert inD to ScenarioNet:
python scenarionet-converter/inD_scene.py \
--root_dir /path/to/inD-dataset-v1.1 \
--output_dir /path/to/output

Convert INTERACTION to ScenarioNet:
python scenarionet-converter/interaction_scene.py \
--root_dir /path/to/INTERACTION \
--output_dir /path/to/output

Convert SinD to ScenarioNet:
python scenarionet-converter/sind_scene.py \
--root_dir /path/to/SinD \
--output_dir /path/to/output

Convert ScenarioNet to VBD:
python scenarionet-VBD-converter/convert_scenarionet_to_vbd.py \
--input_dir /path/to/scenarionet \
--output_dir /path/to/vbd \
--include_raw

Notes:
- `convert_scenarionet_to_vbd.py` now infers the frame rate from `metadata["ts"]` by default.
- Only pass `--frame_rate` if you explicitly want to override the inferred value.
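One simple way to infer a frame rate from a timestamp array is the median step size; a sketch of that idea (not necessarily the script's exact method):

```python
import numpy as np

def infer_frame_rate(ts):
    """Estimate the frame rate as the reciprocal of the median timestamp step."""
    dt = np.diff(np.asarray(ts, dtype=float))
    return 1.0 / float(np.median(dt))

ts = np.arange(91) / 10.0   # a normalized 10 Hz scenario: 0.0 ... 9.0
rate = infer_frame_rate(ts)
```

Using the median rather than the mean makes the estimate robust to an occasional dropped or duplicated frame.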
Convert ScenarioNet to Scenario Dreamer:
python scenarionet-scenariodreamer-converter/scenarionet_to_scenariodreamer_waymo.py \
--input_dir /path/to/scenarionet \
--output_dir /path/to/scenariodreamer \
--cfg_path cfgs/dataset/waymo_autoencoder_temporal.yaml \
--train_ratio 0.7 \
--val_ratio 0.2 \
--test_ratio 0.1 \
--seed 0

Place this script inside Scenario Dreamer's scripts/ directory before running.
Notes:
- The Scenario Dreamer converter now expects ScenarioNet inputs that are already normalized to 10 Hz / 91 steps.
- It uses `metadata["sdc_id"]` instead of assuming the SDC track is renamed to `"ego"`.
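The ratio and seed flags suggest a deterministic shuffle-and-slice split. A minimal sketch of that behavior (hypothetical helper, not the converter's actual code; sorting before shuffling makes the result independent of filesystem listing order):

```python
import random

def split_ids(scenario_ids, train_ratio=0.7, val_ratio=0.2, seed=0):
    """Deterministically partition scenario IDs into train/val/test."""
    ids = sorted(scenario_ids)
    random.Random(seed).shuffle(ids)
    n_train = int(len(ids) * train_ratio)
    n_val = int(len(ids) * val_ratio)
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]

train, val, test = split_ids([f"sd_{i}.pkl" for i in range(100)])
```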
ScenarioNet conversion produces:
- Scenario `.pkl` files
- `dataset_summary.pkl`
- `dataset_mapping.pkl`
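The two index files are standard pickles, so they can be inspected directly; a small loading sketch (the helper name is ours, not part of ScenarioNet's API):

```python
import pickle
from pathlib import Path

def load_scenarionet_index(output_dir):
    """Load ScenarioNet's dataset_summary.pkl and dataset_mapping.pkl."""
    out = Path(output_dir)
    summary = pickle.loads((out / "dataset_summary.pkl").read_bytes())
    mapping = pickle.loads((out / "dataset_mapping.pkl").read_bytes())
    return summary, mapping
```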
The generated ScenarioNet files are aligned to the official Waymo ScenarioNet schema at the format level:
- top-level keys match Waymo samples
- metadata keys match Waymo samples
- track state array shapes and dtypes match Waymo samples
- lane features use Waymo-style `left_boundaries`, `right_boundaries`, `left_neighbor`, `right_neighbor`, `width`, `speed_limit_kmh`, and `speed_limit_mph`
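A small sanity check one might run over converted lanes, verifying the Waymo-style keys listed above and the consistency of the two speed-limit fields (the helper is hypothetical; only the key names come from this README):

```python
LANE_KEYS = {
    "left_boundaries", "right_boundaries",
    "left_neighbor", "right_neighbor",
    "width", "speed_limit_kmh", "speed_limit_mph",
}

KMH_PER_MPH = 1.609344  # exact, by definition of the statute mile

def check_lane(lane):
    """Return (missing keys, whether kmh and mph speed limits agree)."""
    missing = LANE_KEYS - lane.keys()
    consistent = True
    if not missing:
        consistent = abs(lane["speed_limit_kmh"]
                         - lane["speed_limit_mph"] * KMH_PER_MPH) < 1e-3
    return missing, consistent
```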
Known remaining semantic differences from raw Waymo data:
- `dynamic_map_states` is empty unless the source dataset provides traffic light state
- lane boundary / neighbor intervals are simplified compared with Waymo's finer-grained chunked lane semantics
@inproceedings{hetrod,
title={HetroD: A High-Fidelity Drone Dataset and Benchmark for Autonomous Driving in Heterogeneous Traffic},
author={Yu-Hsiang Chen and Wei-Jer Chang and Christian Kotulla and Thomas Keutgens and Steffen Runde and Tobias Moers and Christoph Klas and Wei Zhan and Masayoshi Tomizuka and Yi-Ting Chen},
booktitle={Proceedings of the IEEE International Conference on Robotics and Automation (ICRA)},
year={2026}
}