CUDA out of memory when upscaling. This is a completely original implementation designed specifically for DisTorch memory management. I can typically upscale images from 512 to 1024 without issues, but when I try to go to 2048 I get a CUDA out-of-memory error. The traceback reports that GPU 0 has a total capacity of 14.56 GiB, that the failed allocation was 280.00 MiB, and that 8.27 GiB is reserved in total by PyTorch, along with: "Of the allocated memory 0 bytes is allocated by PyTorch, and 0 bytes is reserved by PyTorch but unallocated. If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation."

Anyone else getting an out-of-memory error with LDSR upscaling? It definitely worked for me last week, but now even small resolutions are giving me an out-of-memory error.
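Two general mitigations worth trying (not DisTorch-specific): follow the error message's hint and set `max_split_size_mb` via the `PYTORCH_CUDA_ALLOC_CONF` environment variable, and upscale in overlapping tiles so peak VRAM scales with one tile rather than the full 2048x2048 image. A minimal sketch follows; `tile_coords` is a hypothetical helper, not part of any upscaler's API, and the 128 MB split size is an assumed starting value.

```python
import os

# The CUDA caching allocator reads this variable when torch is imported,
# so it must be set before `import torch`. Limiting block splitting can
# help when the OOM report shows reserved memory >> allocated memory.
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "max_split_size_mb:128")

def tile_coords(height, width, tile=512, overlap=32):
    """Yield (top, left, bottom, right) boxes covering the image.

    Processing one box at a time bounds peak GPU memory to roughly a
    single tile's activations; the overlap region is typically blended
    when stitching tiles back together to hide seams.
    """
    step = tile - overlap
    for top in range(0, max(height - overlap, 1), step):
        for left in range(0, max(width - overlap, 1), step):
            yield top, left, min(top + tile, height), min(left + tile, width)

# Example: a 2048x2048 target split into overlapping 512px tiles.
boxes = list(tile_coords(2048, 2048))
```

In a real pipeline you would run the upscaler on each cropped tile inside a `torch.no_grad()` block and paste the results back into a preallocated output buffer.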