Hello,
I am encountering a "CUDA out of memory" error when applying the Spitfire filter to large images. I was wondering if there is a way to process these images iteratively or by splitting them into smaller parts while maintaining their original size and resolution.
Specifically, is it possible to send smaller batches (tiles) to CUDA for processing without downscaling the image? I have already tried clearing the CUDA cache and setting max_split_size_mb:21 (via PYTORCH_CUDA_ALLOC_CONF), but neither resolved the issue.
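For context, here is a minimal sketch of the kind of tiled processing I have in mind: the image is split into overlapping tiles, each tile is filtered independently (so only one tile at a time would need to live on the GPU), and the results are stitched back together at the original size. The `filter_fn` argument is a hypothetical stand-in for the actual Spitfire call, and the tile/overlap sizes are arbitrary; this is just to illustrate the idea, not a claim about how Spitfire works internally.

```python
import numpy as np

def process_tiled(image, filter_fn, tile=256, overlap=32):
    """Apply filter_fn to overlapping tiles and stitch the results.

    filter_fn is a placeholder for a per-tile GPU filter (e.g. move
    the tile to CUDA, run the filter, move the result back). Overlap
    regions are averaged, which keeps the output at the original size.
    """
    h, w = image.shape
    out = np.zeros((h, w), dtype=np.float32)
    weight = np.zeros((h, w), dtype=np.float32)
    step = tile - overlap
    for y in range(0, h, step):
        for x in range(0, w, step):
            y1, x1 = min(y + tile, h), min(x + tile, w)
            patch = image[y:y1, x:x1]
            # In practice: result = spitfire(patch) on the GPU, one tile at a time
            result = filter_fn(patch)
            out[y:y1, x:x1] += result
            weight[y:y1, x:x1] += 1.0
    return out / weight
```

With an identity filter this reproduces the input exactly; with a real filter, the overlap averaging would mask seams between tiles, though a smooth (e.g. cosine) weighting window would likely blend tile borders better than the uniform weights used here.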
I would appreciate any insights you can provide. Thank you for your help and your excellent work in image analysis!
Best regards,
Roman Hudeček