Clear cache as often on AMD as Nvidia.
I think the issue this was working around has been solved.

If you notice that this change slows things down or causes stutters on
your AMD GPU with ROCm on Linux, please report it.
comfyanonymous committed Jan 2, 2025
1 parent 0f11d60 commit 9e9c8a1
Showing 1 changed file with 2 additions and 3 deletions.
5 changes: 2 additions & 3 deletions comfy/model_management.py
@@ -1121,9 +1121,8 @@ def soft_empty_cache(force=False):
     elif is_ascend_npu():
         torch.npu.empty_cache()
     elif torch.cuda.is_available():
-        if force or is_nvidia(): #This seems to make things worse on ROCm so I only do it for cuda
-            torch.cuda.empty_cache()
-            torch.cuda.ipc_collect()
+        torch.cuda.empty_cache()
+        torch.cuda.ipc_collect()
 
 def unload_all_models():
     free_memory(1e30, get_torch_device())
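The effect of the change can be sketched in isolation. The function below is a hypothetical stand-in for the dispatch inside `comfy/model_management.py`'s `soft_empty_cache` (the `backend` string replaces the real `is_ascend_npu()` / `torch.cuda.is_available()` probes, and it returns the names of the torch calls rather than invoking them), illustrating that after this commit the CUDA branch, which ROCm builds of PyTorch also take, no longer depends on `force` or `is_nvidia()`:

```python
def soft_empty_cache_plan(backend: str, force: bool = False) -> list[str]:
    """Return the cache-clearing calls that would be issued for a backend.

    Hypothetical sketch, not the real API: the backend string stands in
    for the runtime probes, and the returned strings name the torch
    calls that soft_empty_cache would run after this commit.
    """
    if backend == "npu":
        return ["torch.npu.empty_cache"]
    if backend == "cuda":
        # ROCm builds of PyTorch report as "cuda" too, so AMD GPUs now
        # take the same path as NVIDIA: empty_cache and ipc_collect run
        # unconditionally (the old `force or is_nvidia()` gate is gone).
        return ["torch.cuda.empty_cache", "torch.cuda.ipc_collect"]
    return []
```

Before this commit, calling the real function with `force=False` on a ROCm build would have skipped both CUDA calls; in this sketch the `force` flag no longer changes the plan at all.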
