Caffe: What can I do if only a small batch fits into memory?
I am trying to train a very large model, so only a very small batch size fits into GPU memory. Working with small batch sizes results in very noisy gradient estimates.
What can I do to avoid this problem?
You can set `iter_size` in the solver parameters. Caffe accumulates gradients over `iter_size` × `batch_size` instances in each stochastic gradient descent step, so increasing `iter_size` gives you a more stable gradient estimate when limited memory prevents you from using a large `batch_size`.
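For concreteness, here is a minimal solver sketch; the file names and hyperparameter values are placeholders, and note that `batch_size` itself is set in the net's data layer, not in the solver:

```
# solver.prototxt (sketch; values are placeholders)
net: "train_val.prototxt"   # batch_size is defined in this net's data layer
base_lr: 0.01
momentum: 0.9
max_iter: 100000
iter_size: 4                # accumulate gradients over 4 forward/backward passes
                            # effective batch = iter_size * batch_size
snapshot: 10000
snapshot_prefix: "snapshots/model"
solver_mode: GPU
```

With `batch_size: 4` in the data layer and `iter_size: 4` here, each weight update is computed from 16 samples, while only 4 samples ever reside in GPU memory at once.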
As stated in this post, a small batch size is not a problem in theory (convergence of stochastic gradient descent has been proven even with a batch size of 1). Just make sure your batching is implemented correctly: the samples should be drawn at random from your data.
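In Caffe you can let the data layer handle that randomization. As one possible sketch using the `ImageData` layer (the list file name is a placeholder):

```
layer {
  name: "data"
  type: "ImageData"
  top: "data"
  top: "label"
  image_data_param {
    source: "train_list.txt"  # placeholder: one "path label" pair per line
    batch_size: 4             # the small batch that fits in GPU memory
    shuffle: true             # randomize sample order each epoch
  }
}
```

Other data layers offer similar options; the point is that each small batch should be a random draw from the dataset, not a contiguous slice of it.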