Which ordering of nested loops for iterating over a 2D array is more efficient

Which of the following orderings of nested loops to iterate over a 2D array is more efficient in terms of time (cache performance)? Why?

int a[100][100];

for (int i = 0; i < 100; i++)
{
    for (int j = 0; j < 100; j++)
    {
        a[i][j] = 10;
    }
}

or

for (int i = 0; i < 100; i++)
{
    for (int j = 0; j < 100; j++)
    {
        a[j][i] = 10;
    }
}

Solution 1:

The first method is slightly better, as the cells being assigned to lie next to each other in memory.

First method:

[ ][ ][ ][ ][ ] ....
^1st assignment
   ^2nd assignment
[ ][ ][ ][ ][ ] ....
^101st assignment

Second method:

[ ][ ][ ][ ][ ] ....
^1st assignment
   ^101st assignment
[ ][ ][ ][ ][ ] ....
^2nd assignment
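
To make the diagrams concrete, here is a minimal sketch (not from the original answer) that prints the byte offset of the first few elements each ordering touches - the first method advances by sizeof(int) per step, the second by a whole row:

#include <stdio.h>

#define N 100

int main(void)
{
    static int a[N][N];

    /* First method touches a[0][0], a[0][1], a[0][2], ... */
    printf("first method:  ");
    for (int k = 0; k < 3; k++)
        printf("%zu ", (size_t)((char *)&a[0][k] - (char *)a));  /* 0 4 8 with 4-byte int   */

    /* Second method touches a[0][0], a[1][0], a[2][0], ... */
    printf("\nsecond method: ");
    for (int k = 0; k < 3; k++)
        printf("%zu ", (size_t)((char *)&a[k][0] - (char *)a));  /* 0 400 800 with 4-byte int */

    printf("\n");
    return 0;
}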

Solution 2:

  1. For array[100][100] - they are roughly the same, if the L1 cache is larger than 100*100*sizeof(int) == 10000*sizeof(int) == [usually] 40000 bytes. Note that on Sandy Bridge, 100*100 integers should be enough elements to see a difference, since the L1 cache is only 32k.

  2. Compilers will probably optimize this code to be the same either way, e.g. by interchanging the loops.

  3. Assuming no compiler optimizations, and a matrix that does not fit in the L1 cache - the first code is [usually] better due to cache performance. Every time an element is not found in the cache, you get a cache miss and need to go to RAM or the L2 cache [which are much slower]. Fetching elements from RAM into the cache [a cache fill] is done in whole blocks [usually 8/16 bytes] - so with the first code you get a miss rate of at most 1/4 [assuming 16-byte cache blocks and 4-byte ints], while with the second code it is unbounded and can even reach 1: in the second snippet, elements that were already in the cache [inserted by the cache fill for adjacent elements] get evicted before they are used, so you pay redundant cache misses.

    • This is closely related to the principle of locality, which is the general assumption behind cache design. The first code follows this principle while the second doesn't - so the cache performance of the first will be better than that of the second.

Conclusion: For all cache implementations I am aware of, the first will be no worse than the second. They might be the same - if there is no cache at all, if the whole array fits in the cache, or if the compiler optimizes both loops into the same code.
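
To see the miss behavior from point 3 concretely, here is a rough sketch that simulates a tiny direct-mapped cache and counts misses for both orderings. The parameters (16-byte lines, 64 cache lines) are made up for illustration:

#include <stdio.h>

#define N          100
#define LINE_BYTES 16    /* cache-line size assumed in point 3 above */
#define NUM_LINES  64    /* a deliberately tiny direct-mapped cache  */

/* Count misses for one full pass; transpose == 1 means the a[j][i] order. */
static long count_misses(int transpose)
{
    long tags[NUM_LINES];
    for (int s = 0; s < NUM_LINES; s++)
        tags[s] = -1;                       /* start with an empty cache */

    long misses = 0;
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++) {
            long elem = transpose ? (long)j * N + i : (long)i * N + j;
            long line = elem * (long)sizeof(int) / LINE_BYTES;
            int  set  = (int)(line % NUM_LINES);
            if (tags[set] != line) {        /* not cached: miss and fill */
                tags[set] = line;
                misses++;
            }
        }
    }
    return misses;
}

int main(void)
{
    printf("a[i][j] order: %ld misses\n", count_misses(0)); /* N*N/4 here */
    printf("a[j][i] order: %ld misses\n", count_misses(1)); /* far more   */
    return 0;
}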

Solution 3:

This sort of micro-optimization is platform-dependent, so you'll need to profile the code to draw a reasonable conclusion.
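
As a starting point, a minimal timing harness might look like the sketch below, using the POSIX clock_gettime timer. The 1000x1000 size is an arbitrary choice large enough to spill out of L1, and the volatile qualifier is there so the compiler cannot rewrite both loops into the same code:

#include <stdio.h>
#include <time.h>

#define N 1000                       /* larger than the question's 100x100,
                                        so the array does not fit in L1     */
static volatile int a[N][N];         /* volatile: keep the stores observable */

static double seconds(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

int main(void)
{
    double t0 = seconds();
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            a[i][j] = 10;            /* row-major: stride-1 stores    */
    double t1 = seconds();
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            a[j][i] = 10;            /* column-major: stride-N stores */
    double t2 = seconds();

    printf("row-major:    %.6f s\n", t1 - t0);
    printf("column-major: %.6f s\n", t2 - t1);
    return 0;
}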

Solution 4:

In your second snippet, the change in j on each iteration produces an access pattern with low spatial locality. Remember that behind the scenes, an array reference a[r][c] computes an element offset of:

( (r) * (row_width) ) + (c)

so stepping to the next row moves row_width elements (here, 100) through memory, while stepping to the next column moves just one.
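
You can verify this layout directly; the small check below (purely illustrative) asserts that a[i][j] sits i * 100 + j elements past the start of the array:

#include <assert.h>
#include <stddef.h>
#include <stdio.h>

int main(void)
{
    static int a[100][100];

    /* a[7][42] lives (7 * 100 + 42) elements past the start of the array. */
    ptrdiff_t offset = (char *)&a[7][42] - (char *)a;
    assert(offset == (7 * 100 + 42) * (ptrdiff_t)sizeof(int));

    printf("a[7][42] byte offset: %td\n", offset);
    return 0;
}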

Consider a simplified L1 cache that has enough space for only 50 rows of our array. For the first 50 iterations of the inner loop, you pay the unavoidable cost of 50 cache misses, but then what happens? For each iteration from 50 to 99, you still miss and have to fetch from L2 (and/or RAM, etc.). Then i changes to 1, j starts over at 0, and you miss again, because the first row of your array has already been evicted from the cache, and so forth.

The first snippet does not have this problem. It accesses the array in row-major order, which achieves better locality: you pay for a cache miss at most once per cache line, and every element a fill brings in is used before its line can be evicted.

That being said, this is a very architecture-dependent question, so you would have to take the specifics (L1 cache size, cache line size, etc.) into consideration to draw a conclusion. You should also measure both ways and track hardware events (for example, with Linux's perf stat -e cache-misses) so you have concrete data to base your conclusion on.

Solution 5:

Considering that C++ stores arrays in row-major order, I believe the first method is going to be a bit faster. In memory, a 2D array is laid out as a single one-dimensional array, and performance depends on whether you access it in row-major or column-major order.
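
For illustration, here is a sketch of what "laid out as a single one-dimensional array" means in practice: the row-major nested loop below touches exactly the same addresses, in the same order, as the flat stride-1 walk that follows it.

#include <stdio.h>

#define N 100

int main(void)
{
    static int a[N][N];

    /* Row-major nested loop (the first method). */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            a[i][j] = 10;

    /* The equivalent flat walk over the underlying storage:
       element a[i][j] is flat[i * N + j].                    */
    int *flat = &a[0][0];
    for (int k = 0; k < N * N; k++)
        flat[k] = 10;

    printf("done\n");
    return 0;
}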