How to interpret PoolAllocator messages in TensorFlow?

While training a TensorFlow seq2seq model I see the following messages:

W tensorflow/core/common_runtime/gpu/pool_allocator.cc:227] PoolAllocator: After 27282 get requests, put_count=9311 evicted_count=1000 eviction_rate=0.1074 and unsatisfied allocation rate=0.699032
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:239] Raising pool_size_limit_ from 100 to 110
W tensorflow/core/common_runtime/gpu/pool_allocator.cc:227] PoolAllocator: After 13715 get requests, put_count=14458 evicted_count=10000 eviction_rate=0.691659 and unsatisfied allocation rate=0.675684
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:239] Raising pool_size_limit_ from 110 to 121
W tensorflow/core/common_runtime/gpu/pool_allocator.cc:227] PoolAllocator: After 6965 get requests, put_count=6813 evicted_count=5000 eviction_rate=0.733891 and unsatisfied allocation rate=0.741421
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:239] Raising pool_size_limit_ from 133 to 146
W tensorflow/core/common_runtime/gpu/pool_allocator.cc:227] PoolAllocator: After 44 get requests, put_count=9058 evicted_count=9000 eviction_rate=0.993597 and unsatisfied allocation rate=0
W tensorflow/core/common_runtime/gpu/pool_allocator.cc:227] PoolAllocator: After 46 get requests, put_count=9062 evicted_count=9000 eviction_rate=0.993158 and unsatisfied allocation rate=0
W tensorflow/core/common_runtime/gpu/pool_allocator.cc:227] PoolAllocator: After 4 get requests, put_count=1029 evicted_count=1000 eviction_rate=0.971817 and unsatisfied allocation rate=0
W tensorflow/core/common_runtime/gpu/pool_allocator.cc:227] PoolAllocator: After 2 get requests, put_count=1030 evicted_count=1000 eviction_rate=0.970874 and unsatisfied allocation rate=0
W tensorflow/core/common_runtime/gpu/pool_allocator.cc:227] PoolAllocator: After 44 get requests, put_count=6074 evicted_count=6000 eviction_rate=0.987817 and unsatisfied allocation rate=0
W tensorflow/core/common_runtime/gpu/pool_allocator.cc:227] PoolAllocator: After 12 get requests, put_count=6045 evicted_count=6000 eviction_rate=0.992556 and unsatisfied allocation rate=0
W tensorflow/core/common_runtime/gpu/pool_allocator.cc:227] PoolAllocator: After 2 get requests, put_count=1042 evicted_count=1000 eviction_rate=0.959693 and unsatisfied allocation rate=0
W tensorflow/core/common_runtime/gpu/pool_allocator.cc:227] PoolAllocator: After 44 get requests, put_count=6093 evicted_count=6000 eviction_rate=0.984737 and unsatisfied allocation rate=0
W tensorflow/core/common_runtime/gpu/pool_allocator.cc:227] PoolAllocator: After 4 get requests, put_count=1069 evicted_count=1000 eviction_rate=0.935454 and unsatisfied allocation rate=0
W tensorflow/core/common_runtime/gpu/pool_allocator.cc:227] PoolAllocator: After 17722 get requests, put_count=9036 evicted_count=1000 eviction_rate=0.110668 and unsatisfied allocation rate=0.550615
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:239] Raising pool_size_limit_ from 792 to 871
W tensorflow/core/common_runtime/gpu/pool_allocator.cc:227] PoolAllocator: After 6 get requests, put_count=1093 evicted_count=1000 eviction_rate=0.914913 and unsatisfied allocation rate=0
W tensorflow/core/common_runtime/gpu/pool_allocator.cc:227] PoolAllocator: After 6 get requests, put_count=1101 evicted_count=1000 eviction_rate=0.908265 and unsatisfied allocation rate=0
W tensorflow/core/common_runtime/gpu/pool_allocator.cc:227] PoolAllocator: After 3224 get requests, put_count=4684 evicted_count=2000 eviction_rate=0.426985 and unsatisfied allocation rate=0.200062
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:239] Raising pool_size_limit_ from 1158 to 1273
W tensorflow/core/common_runtime/gpu/pool_allocator.cc:227] PoolAllocator: After 17794 get requests, put_count=17842 evicted_count=9000 eviction_rate=0.504428 and unsatisfied allocation rate=0.510228
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:239] Raising pool_size_limit_ from 1400 to 1540
W tensorflow/core/common_runtime/gpu/pool_allocator.cc:227] PoolAllocator: After 31 get requests, put_count=1185 evicted_count=1000 eviction_rate=0.843882 and unsatisfied allocation rate=0
W tensorflow/core/common_runtime/gpu/pool_allocator.cc:227] PoolAllocator: After 40 get requests, put_count=8209 evicted_count=8000 eviction_rate=0.97454 and unsatisfied allocation rate=0
W tensorflow/core/common_runtime/gpu/pool_allocator.cc:227] PoolAllocator: After 0 get requests, put_count=2272 evicted_count=2000 eviction_rate=0.880282 and unsatisfied allocation rate=-nan
W tensorflow/core/common_runtime/gpu/pool_allocator.cc:227] PoolAllocator: After 0 get requests, put_count=2362 evicted_count=2000 eviction_rate=0.84674 and unsatisfied allocation rate=-nan
W tensorflow/core/common_runtime/gpu/pool_allocator.cc:227] PoolAllocator: After 38 get requests, put_count=5436 evicted_count=5000 eviction_rate=0.919794 and unsatisfied allocation rate=0

What do these messages mean? Do they indicate that I am having resource allocation issues? I am running on a Titan X (3500+ CUDA cores, 12 GB of GPU memory).


Solution 1:

TensorFlow has multiple memory allocators for memory that will be used in different ways, and their behavior has some adaptive aspects.

In your particular case, since you're using a GPU, there is a PoolAllocator for CPU memory that is pre-registered with the GPU for fast DMA. A tensor that is expected to be transferred from CPU to GPU, for example, will be allocated from this pool.
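For example, a graph that feeds host data into a GPU op triggers exactly this kind of CPU-to-GPU transfer. Here is a minimal sketch using the 1.x-style graph/session API (the shapes and variable names are arbitrary, chosen only for illustration):

    import numpy as np
    import tensorflow as tf  # 1.x-style graph/session API

    # CPU-side input; the NumPy array we feed lives in ordinary host memory.
    x = tf.placeholder(tf.float32, shape=[1024, 1024])

    with tf.device('/gpu:0'):
        y = tf.matmul(x, x)  # executed on the GPU

    with tf.Session() as sess:
        # To reach the GPU, the fed array is staged through pinned (page-locked)
        # CPU buffers suitable for fast DMA -- the kind of memory the
        # PoolAllocator messages above refer to.
        data = np.random.rand(1024, 1024).astype(np.float32)
        result = sess.run(y, feed_dict={x: data})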

The PoolAllocators attempt to amortize the cost of calling a more expensive underlying allocator by keeping a pool of allocated-then-freed chunks that are eligible for immediate reuse. Their default behavior is to grow slowly until the eviction rate drops below some constant. (The eviction rate is the proportion of free calls in which we return an unused chunk from the pool to the underlying allocator so as not to exceed the size limit.) In the log messages above, the "Raising pool_size_limit_" lines show the pool size growing. Assuming that your program actually has a steady-state behavior with a maximum-size collection of chunks it needs, the pool will grow just enough to accommodate it, and then grow no more. It behaves this way, rather than simply retaining every chunk ever allocated, so that sizes needed only rarely, or only during program startup, are less likely to be retained in the pool.
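As a rough mental model of that mechanism, here is a hedged Python sketch. It is not the real pool_allocator.cc logic (the 10% growth step, the threshold, and names like misses are placeholders), but it shows how a size-limited free-chunk pool produces the get/put/evicted counters seen in the log:

    class SimplePool:
        """Toy free-chunk pool that grows its limit when evictions are frequent."""

        def __init__(self, size_limit=100, eviction_threshold=0.1):
            self.size_limit = size_limit        # max chunks kept for reuse
            self.eviction_threshold = eviction_threshold
            self.free_chunks = []               # previously freed, reusable chunks
            self.get_count = 0                  # "get requests" in the log
            self.put_count = 0                  # "put_count" in the log
            self.evicted_count = 0              # "evicted_count" in the log
            self.misses = 0                     # gets not satisfied from the pool

        def allocate(self):
            self.get_count += 1
            if self.free_chunks:
                return self.free_chunks.pop()   # cheap: reuse a pooled chunk
            self.misses += 1                    # "unsatisfied" allocation
            return self._expensive_underlying_alloc()

        def free(self, chunk):
            self.put_count += 1
            self.free_chunks.append(chunk)
            if len(self.free_chunks) > self.size_limit:
                # Over the limit: hand a chunk back to the underlying allocator.
                self.evicted_count += 1
                self._expensive_underlying_free(self.free_chunks.pop(0))
                # Frequent evictions mean the pool is too small -- grow ~10%,
                # mirroring the "Raising pool_size_limit_" log lines.
                if self.evicted_count / self.put_count > self.eviction_threshold:
                    self.size_limit = int(self.size_limit * 1.1)

        def _expensive_underlying_alloc(self):
            return object()                     # stand-in for a pinned-memory alloc

        def _expensive_underlying_free(self, chunk):
            pass                                # stand-in for a pinned-memory free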

These messages should only be a cause for concern if you run out of memory. In such a case the log messages may help diagnose the problem. Note also that peak execution speed may only be attained after the memory pools have grown to the proper size.
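As an aside, the rates in each warning appear to be simple ratios of the counters printed on the same line, which can help when reading your own logs. (The decomposition of the unsatisfied rate below is an assumption inferred from the numbers, not taken from the TensorFlow source.)

    # Counters from the first PoolAllocator warning in the question.
    get_requests = 27282
    put_count = 9311
    evicted_count = 1000

    # eviction_rate = evicted_count / put_count
    print(evicted_count / put_count)       # ~0.1074, matching the logged value

    # "unsatisfied allocation rate" looks like (gets not served from the pool)
    # divided by get_requests; the raw miss count itself is not printed.
    print(round(0.699032 * get_requests))  # ~19071 pool misses implied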