How to clear CUDA memory in PyTorch
I figured out where I was going wrong. I am posting the solution as an answer for others who might be struggling with the same problem.
Basically, PyTorch builds a computational graph whenever I pass data through my network and keeps the intermediate results in GPU memory, in case I want to calculate gradients during backpropagation. But since I only wanted to perform forward propagation, I simply needed to wrap the forward pass in torch.no_grad() for my model.
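To see the difference this makes, here is a minimal sketch (the Linear layer and input shapes are made-up stand-ins, just for illustration) showing that an output produced under torch.no_grad() carries no graph:

import torch
import torch.nn as nn

# Toy stand-in for the real model, only to illustrate the effect.
model = nn.Linear(10, 10)
x = torch.randn(1, 10)

y = model(x)
print(y.requires_grad)  # True: a graph was recorded for backprop

with torch.no_grad():
    y = model(x)
print(y.requires_grad)  # False: no graph, no extra memory held

Anything computed inside the context is detached from autograd, so nothing accumulates on the GPU across iterations.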
Thus, the for loop in my code could be rewritten as:
for i, left in enumerate(dataloader):
    print(i)
    # Run the forward pass without building a graph.
    with torch.no_grad():
        temp = model(left).view(-1, 1, 300, 300)
    # Move the result to the CPU, then release the GPU copy.
    right.append(temp.to('cpu'))
    del temp
    torch.cuda.empty_cache()
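Note that torch.cuda.empty_cache() only returns unused cached blocks from PyTorch's caching allocator back to the driver; it cannot free tensors that are still referenced, which is why the del matters. A quick way to check what is actually held (assuming a CUDA device is available; older PyTorch versions call memory_reserved() memory_cached()) is:

import torch

# Live tensor memory vs. memory held by the caching allocator.
print(torch.cuda.memory_allocated() / 1024**2, 'MiB in live tensors')
print(torch.cuda.memory_reserved() / 1024**2, 'MiB held by the allocator')
torch.cuda.empty_cache()
print(torch.cuda.memory_reserved() / 1024**2, 'MiB held after empty_cache()')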
Specifying torch.no_grad() for my model tells PyTorch that I don't need to keep the intermediate computations for backpropagation, so it stops holding them in GPU memory, thus freeing my GPU space.
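As a side note, if you are on PyTorch 1.9 or newer, torch.inference_mode() can be used the same way for pure inference and has slightly less overhead than torch.no_grad():

with torch.inference_mode():
    temp = model(left).view(-1, 1, 300, 300)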