Why is CUDA memory not released with torch.cuda.empty_cache()?
Solution 1:
At least on Ubuntu, your script does not release memory when it is run in the interactive shell, but it works as expected when run as a script. I suspect there are some reference issues with the in-place call. The following works both in the interactive shell and as a script.
import torch
a = torch.zeros(300000000, dtype=torch.int8)  # ~300 MB tensor on the CPU
a = a.cuda()                                  # .cuda() is not in-place; keep the returned GPU tensor
del a                                         # drop the last reference to the GPU tensor
torch.cuda.empty_cache()                      # return cached blocks to the driver
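As a quick sanity check (a minimal sketch, not part of the original answer, assuming a CUDA-capable GPU is available), you can confirm the release by reading PyTorch's allocator counters before and after:
import torch
a = torch.zeros(300000000, dtype=torch.int8).cuda()
print(torch.cuda.memory_allocated())   # bytes currently held by live tensors (~300 MB here)
print(torch.cuda.memory_reserved())    # bytes held by PyTorch's caching allocator
del a
torch.cuda.empty_cache()
print(torch.cuda.memory_allocated())   # back to 0
print(torch.cuda.memory_reserved())    # cached blocks returned to the driver
Note that nvidia-smi will still report some usage from the CUDA context itself; empty_cache() only frees the cached, unused blocks held by PyTorch's allocator.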
Solution 2:
I ran into the same issue. Solution:
cuda = torch.device('cuda')
a = a.to(cuda)  # to() is not in-place; assign the result back to a
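For completeness, a minimal sketch (my own, assuming a is the same large tensor as in Solution 1) that combines the to() call with the release steps:
import torch
cuda = torch.device('cuda')
a = torch.zeros(300000000, dtype=torch.int8)
a = a.to(cuda)             # move to the GPU, keeping the returned tensor
del a                      # drop the last reference
torch.cuda.empty_cache()   # release the cached blocks back to the driver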