Request: Pretrained Model Weights
Hi, thank you for the great work on AnyDepth!
I noticed that the README references a Model page on Hugging Face, but it appears that no pretrained weights are currently available there.
Could you please release the pretrained checkpoints for the SDT decoder trained with the different encoder backbones (ViT-S, ViT-B, ViT-L)? Specifically, the weights used for the zero-shot depth estimation results reported in the paper would be very useful for the community.
For context, I am currently working on my Bachelor's thesis on monocular depth estimation, where I am comparing and benchmarking several state-of-the-art MDE models. AnyDepth looks like a very promising candidate to include in my evaluation — particularly the SDT decoder's efficiency advantages over DPT — and I would love to test it on standard benchmarks (NYUv2, KITTI) as well as on edge hardware (Jetson).
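For that benchmarking, I plan to report the usual zero-shot depth metrics (AbsRel and δ < 1.25). A minimal sketch of how I compute them, in case it is useful for aligning numbers (function name, threshold, and masking convention are my own choices, not from the AnyDepth codebase):

```python
import numpy as np

def depth_metrics(pred, gt, eps=1e-6):
    """Standard MDE metrics: absolute relative error and delta < 1.25 accuracy."""
    mask = gt > eps                     # evaluate only valid ground-truth pixels
    pred, gt = pred[mask], gt[mask]
    abs_rel = np.mean(np.abs(pred - gt) / gt)
    ratio = np.maximum(pred / gt, gt / pred)   # symmetric ratio for delta
    delta1 = np.mean(ratio < 1.25)
    return abs_rel, delta1
```

This assumes metric (or already-aligned) depth; for scale-invariant evaluation I would first apply median scaling of `pred` to `gt` on the masked pixels.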
Thank you for considering this request!