[ICLR 2025] ViBiDSampler: Enhancing Video Interpolation Using Bidirectional Diffusion Sampler

Paper | Project page

Official PyTorch implementation of "ViBiDSampler: Enhancing Video Interpolation Using Bidirectional Diffusion Sampler".

How to use

1. Environment setting

Our source code relies on generative-models. Please clone generative-models, place vibidsampler.py in the directory scripts/sampling, and follow the environment setup instructions from generative-models.
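The steps above might look like the following (assuming generative-models lives at the Stability-AI GitHub URL and that vibidsampler.py sits in this repository's root; adjust the paths to your checkout):

```shell
# Clone the Stability-AI generative-models repository
git clone https://github.com/Stability-AI/generative-models.git
cd generative-models

# Copy vibidsampler.py from this repository into the sampling scripts directory
cp /path/to/vibid/vibidsampler.py scripts/sampling/
```

After this, set up the Python environment as described in the generative-models README.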

2. Pre-trained model

Download the Stable Video Diffusion (SVD-XT) weights from here.
Specify the path to the downloaded model in the ckpt_path field of scripts/sampling/configs/svd_xt.yaml.
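For reference, the relevant portion of svd_xt.yaml would look roughly like this (the surrounding keys follow the usual generative-models config layout; the checkpoint filename is an assumption based on the standard SVD-XT release):

```yaml
model:
  params:
    # Path to the downloaded SVD-XT weights (adjust to your location)
    ckpt_path: checkpoints/svd_xt.safetensors
```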

3. Video interpolation

To run inference, execute:

python scripts/sampling/vibidsampler.py
  • The paths to the source frames should be specified using the flags input_start_path and input_end_path.
  • You can adjust the fps_id (approximately between 6 and 24) according to the specific use case.
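Putting the flags together, a full invocation might look like this (the frame paths are placeholders; input_start_path, input_end_path, and fps_id are the flags named above):

```shell
# Interpolate between two key frames at fps_id 12
python scripts/sampling/vibidsampler.py \
    --input_start_path /path/to/frame_start.png \
    --input_end_path /path/to/frame_end.png \
    --fps_id 12
```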

Citation

@inproceedings{
yang2025vibidsampler,
title={ViBiDSampler: Enhancing Video Interpolation Using Bidirectional Diffusion Sampler},
author={Yang, Serin and Kwon, Taesung and Ye, Jong Chul},
booktitle={The Thirteenth International Conference on Learning Representations},
year={2025},
url={https://openreview.net/forum?id=nNYA7tcJSE}
}