Official PyTorch implementation of "ViBiDSampler: Enhancing Video Interpolation Using Bidirectional Diffusion Sampler".
Our source code builds on generative-models. Clone the generative-models repository, then place vibidsampler.py in the directory scripts/sampling.
Follow the environment setting from the generative-models.
Download the Stable Video Diffusion (SVD-XT) weights from here.
Specify the path to the downloaded model in the ckpt_path field of scripts/sampling/configs/svd_xt.yaml.
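For reference, the edit to the config file looks like the following sketch. The field name ckpt_path and the file location come from the step above; the actual path value is an example and should point to wherever you downloaded the SVD-XT checkpoint.

```yaml
# scripts/sampling/configs/svd_xt.yaml (excerpt)
# Replace the example path below with the location of your downloaded SVD-XT weights.
ckpt_path: checkpoints/svd_xt.safetensors
```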
To run inference:
python scripts/sampling/vibidsampler.py
- Specify the paths to the source frames using the input_start_path and input_end_path flags.
- Adjust fps_id (approximately between 6 and 24) according to your use case.
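Putting the flags above together, a full invocation might look like the following. The flag names input_start_path, input_end_path, and fps_id are from the notes above; the frame filenames are examples.

```shell
# Interpolate between two keyframes (example paths; adjust to your data).
python scripts/sampling/vibidsampler.py \
    --input_start_path assets/frame_start.png \
    --input_end_path assets/frame_end.png \
    --fps_id 12
```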
@inproceedings{
yang2025vibidsampler,
title={ViBiDSampler: Enhancing Video Interpolation Using Bidirectional Diffusion Sampler},
author={Yang, Serin and Kwon, Taesung and Ye, Jong Chul},
booktitle={The Thirteenth International Conference on Learning Representations},
year={2025},
url={https://openreview.net/forum?id=nNYA7tcJSE}
}
