Why is there a significant difference in this C++ for loop's execution time?

C++, Performance, Nested Loops

C++ Problem Overview


I was experimenting with loops and found a significant difference in execution time depending on how a matrix is accessed. What causes such a large difference between the two cases?

First Example:

Execution time: 8 seconds

for (int kk = 0; kk < 1000; kk++)
{
    sum = 0;
    for (int i = 0; i < 1024; i++)
        for (int j = 0; j < 1024; j++)
        {
            sum += matrix[i][j];
        }
}

Second Example:

Execution time: 23 seconds

for (int kk = 0; kk < 1000; kk++)
{
    sum = 0;
    for (int i = 0; i < 1024; i++)
        for (int j = 0; j < 1024; j++)
        {
            sum += matrix[j][i];
        }
}

What causes such a large difference in execution time just from exchanging

matrix[i][j] 

to

matrix[j][i]

?

C++ Solutions


Solution 1 - C++

It's an issue of the memory cache.

matrix[i][j] has a better cache hit rate than matrix[j][i], since matrix[i][j] accesses memory more contiguously.

For example, when we access matrix[i][0], the cache may load a contiguous segment of memory containing matrix[i][0]; thus, accessing matrix[i][1], matrix[i][2], ..., will benefit from caching speed, since matrix[i][1], matrix[i][2], ... are near matrix[i][0].

However, when we access matrix[j][0], it is far from matrix[j - 1][0] and may not be cached, so it cannot benefit from caching speed. In particular, a matrix is normally stored as one big contiguous segment of memory, and the prefetcher may predict the memory-access behaviour and cache memory ahead of time.

That's why matrix[i][j] is faster. This is typical of CPU-cache-based performance optimization.
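To make the locality argument concrete, here is a small model (not from the original answer; the `line_crossings` helper and the 64-byte line size are my own assumptions) that counts how often consecutive accesses land on a different cache line under each loop order:

```cpp
// Counts how often consecutive accesses cross into a different cache line,
// modeling a row-major n x n matrix of elem_size-byte elements and 64-byte
// cache lines. Few crossings means good spatial locality.
long long line_crossings(int n, int elem_size, bool row_major) {
    const int line = 64;
    long long crossings = 0;
    long long prev_line = -1;
    for (int a = 0; a < n; a++)
        for (int b = 0; b < n; b++) {
            long long offset = row_major
                ? (static_cast<long long>(a) * n + b) * elem_size   // matrix[i][j]
                : (static_cast<long long>(b) * n + a) * elem_size;  // matrix[j][i]
            long long cur = offset / line;
            if (cur != prev_line) crossings++;  // this access starts a new line
            prev_line = cur;
        }
    return crossings;
}
```

For a 1024x1024 matrix of 8-byte elements this reports 131,072 line changes for the row order versus 1,048,576 for the column order, i.e. in the column order every single access lands on a new cache line.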

Solution 2 - C++

The difference in performance is caused by the caching strategy of the computer.

The two-dimensional array matrix is represented as one long list of values in memory.

E.g. the array A[3][4] looks like:

1 1 1 1   2 2 2 2   3 3 3 3

In this example every entry of A[0][x] is set to 1, every entry of A[1][x] set to 2, ...

If your first loop is applied to this matrix the order of access is this:

1 2 3 4   5 6 7 8   9 10 11 12

While the second loop's access order looks like this:

1 4 7 10  2 5 8 11  3 6 9 12

When the program accesses an element of the array it also loads subsequent elements.

E.g. if you access A[0][1], A[0][2] and A[0][3] are loaded too.

Thereby the first loop has to do less load operations, as some elements are already in the cache when needed. The second loop loads entries into the cache that are not needed at the time, resulting in more load operations.
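The two access orders above can be reproduced programmatically; this sketch (the `access_order` helper name is mine, not from the answer) emits the flat memory offset touched at each step of either loop order:

```cpp
#include <vector>

// Returns the sequence of flat (row-major) memory offsets touched when
// iterating a rows x cols matrix. row_major=true mirrors matrix[i][j];
// row_major=false mirrors matrix[j][i] with the second index varying slowest.
std::vector<int> access_order(int rows, int cols, bool row_major) {
    std::vector<int> order;
    for (int i = 0; i < (row_major ? rows : cols); i++)
        for (int j = 0; j < (row_major ? cols : rows); j++)
            order.push_back(row_major ? i * cols + j   // walk along a row
                                      : j * cols + i); // jump down a column
    return order;
}
```

For the 3x4 example above, the row-major order visits offsets 0 through 11 in sequence, while the column order visits 0, 4, 8, 1, 5, 9, ... matching the access diagrams in this answer.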

Solution 3 - C++

Other people have done a good job explaining why one form of your code makes more efficient use of the memory cache than the other. I'd like to add some background information you may not be aware of: you probably don't realize just how expensive main memory accesses are nowadays.

The numbers posted in this question look to be in the right ballpark to me, and I'm going to reproduce them here because they're so important:

Core i7 Xeon 5500 Series Data Source Latency (approximate)
L1 CACHE hit, ~4 cycles
L2 CACHE hit, ~10 cycles
L3 CACHE hit, line unshared ~40 cycles
L3 CACHE hit, shared line in another core ~65 cycles
L3 CACHE hit, modified in another core ~75 cycles
remote L3 CACHE ~100-300 cycles
Local DRAM ~60 ns
Remote DRAM ~100 ns

Note the change in units for the last two entries. Depending on exactly which model you have, this processor runs at 2.9 to 3.2 GHz; to make the math simpler, let's just call it 3 GHz. So one cycle is about 0.33 nanoseconds, a local DRAM access (~60 ns) works out to roughly 180 cycles, and a remote one (~100 ns) to roughly 300 cycles - DRAM accesses are also on the order of 100-300 cycles.

The point is that the CPU could have executed hundreds of instructions in the time it takes to read one cache line from main memory. This is called the memory wall. Because of it, efficient use of the memory cache is more important than any other factor in overall performance on modern CPUs.
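As a quick sanity check on the unit change, converting the DRAM figures to cycles at the assumed 3 GHz clock is just a multiplication (a trivial sketch of my own, not from the original answer):

```cpp
// Converts a latency in nanoseconds to CPU cycles at a given clock rate.
// At the assumed 3 GHz, one cycle is 1/3 ns, so cycles = ns * 3.
double ns_to_cycles(double ns, double ghz = 3.0) {
    return ns * ghz;
}
```

So ns_to_cycles(60) gives 180 cycles for local DRAM and ns_to_cycles(100) gives 300 for remote DRAM, versus ~4 cycles for an L1 hit: a factor of roughly 45 to 75.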

Solution 4 - C++

The answer depends a little bit on exactly how the matrix is defined. In a fully dynamically allocated array, you'd have:

T **matrix;
matrix = new T*[n];
for (int i = 0; i < n; i++)
{
   matrix[i] = new T[m];   // each row is a separate allocation
}

So, every matrix[j] requires a fresh memory lookup to fetch the row pointer. If the j loop is the outer loop, the inner loop can re-use that pointer, matrix[j], for the whole inner loop.

If the matrix is a simple 2D array:

T matrix[n][m];

then locating matrix[j] is simply a multiplication by 1024 * sizeof(T), which the optimised code can achieve by adding 1024 * sizeof(T) to a running pointer each iteration, so it should be relatively fast either way.

On top of that, we have cache locality factors. Caches have "lines" of data that are typically 32 to 128 bytes long. So if your code reads address X, the cache loads up the 32 to 128 bytes around X. So if the NEXT thing you need is only sizeof(T) forward from the current location, it's highly likely already in the cache [and modern processors also detect that you are going round in a loop reading every memory location in sequence, and pre-load the data].

In the case of the j inner loop, each iteration reads a new location sizeof(T)*1024 away [or possibly a greater distance if the matrix is dynamically allocated]. This means that the data just loaded will not be useful for the next iteration, because the next access is not within the next 32 to 128 bytes.

And finally, it's entirely possible that the first loop is more optimised, thanks to SSE instructions or similar, which allow the calculation to be run even quicker. But this is probably marginal for such a large matrix, as the performance is highly memory bound at this size.
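A common way to get the cache behaviour of the simple 2D array while keeping run-time dimensions is a single contiguous allocation with manual row-major indexing. A minimal sketch (the `Matrix2D` name and interface are my own assumptions, not from the answer):

```cpp
#include <cstddef>
#include <vector>

// One contiguous allocation with manual 2D indexing: rows lie back-to-back
// in memory, so row-order traversal walks memory sequentially, just like a
// statically sized T matrix[n][m], but with no per-row pointer lookups.
template <typename T>
class Matrix2D {
public:
    Matrix2D(int rows, int cols)
        : cols_(cols), data_(static_cast<std::size_t>(rows) * cols) {}

    // Element (i, j) lives at row-major offset i * cols_ + j.
    T& at(int i, int j) { return data_[static_cast<std::size_t>(i) * cols_ + j]; }

private:
    int cols_;
    std::vector<T> data_;
};
```

Because the storage is one std::vector, the last element of one row is immediately followed in memory by the first element of the next row.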

Solution 5 - C++

Memory hardware is not optimized to deliver individual addresses: instead it tends to operate on larger chunks of contiguous memory called cache lines. Every time you read one entry of your matrix, the entire cache line it lies in also gets loaded into the cache along with it.

The faster loop ordering is set up to read memory in order; each time you load a cache line, you use all of the entries in that cache line. Each pass through the outer loop, you read each matrix entry only a single time.

The slower loop ordering, however, only uses a single entry from each cache line before moving on. Thus, each cache line has to get loaded multiple times, once for each matrix entry in the line. E.g. if a double is 8 bytes and a cache line is 64 bytes long, then each pass through the outer loop has to load each cache line eight times rather than a single time.


All that said, if you had turned optimizations on, you would probably see no difference: optimizers understand this phenomenon, and good ones are capable of recognizing that they can swap which loop is the inner loop and which loop is the outer loop for this particular code snippet.

(also, a good optimizer would have only done one pass through the outermost loop, because it recognizes the first 999 times through are irrelevant to the final value of sum)
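The loop interchange described above can also be applied by hand; this sketch (the helper name is mine) shows the slow matrix[j][i] loop rewritten so that the second index varies fastest, producing the same sum from sequential memory reads:

```cpp
#include <vector>

// The slow loop (sum += matrix[j][i] with j innermost) after manual loop
// interchange: the two loops are swapped so the second index i varies
// fastest and each row is read sequentially. The computed sum is unchanged.
long long sum_interchanged(const std::vector<std::vector<int>>& matrix) {
    long long sum = 0;
    int n = static_cast<int>(matrix.size());
    for (int j = 0; j < n; j++)       // formerly the inner loop
        for (int i = 0; i < n; i++)   // formerly the outer loop
            sum += matrix[j][i];      // now walks row j front to back
    return sum;
}
```

This is exactly the transformation a good optimizer can perform automatically here, since the order of the additions does not change the result.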

Solution 6 - C++

The matrix is stored in memory as a vector. Accessing it the first way accesses the memory sequentially. Accessing it the second way requires jumping around memory locations. See http://en.wikipedia.org/wiki/Row-major_order
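For reference, the row-major mapping from the linked article is a one-line formula (the helper name is my own):

```cpp
// Row-major flattening: element (i, j) of a matrix with n_cols columns
// lives at flat offset i * n_cols + j in the underlying contiguous storage.
int row_major_offset(int i, int j, int n_cols) {
    return i * n_cols + j;
}
```

Incrementing j moves one element forward in memory; incrementing i jumps a whole row (n_cols elements) ahead, which is why the two loop orders behave so differently.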

Solution 7 - C++

With matrix[i][j], the row selected by i stays the same throughout the inner loop, so its data stays in cache and the machine code does not have to fetch a new region of memory every iteration. With matrix[j][i], every step of the inner loop lands in a different row, so the freshly cached data is effectively thrown away each time, and that is what causes the difference.

Solution 8 - C++

Based on the concept of locality of reference, a piece of code is very likely to access memory locations that are adjacent. So more values are loaded into the cache than were actually asked for, which means more cache hits. Your first example satisfies this well, while your code in the second example does not.

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type | Original Author | Original Content
Question | Massab | View Question on Stackoverflow
Solution 1 - C++ | Peixu Zhu | View Answer on Stackoverflow
Solution 2 - C++ | H4kor | View Answer on Stackoverflow
Solution 3 - C++ | zwol | View Answer on Stackoverflow
Solution 4 - C++ | Mats Petersson | View Answer on Stackoverflow
Solution 5 - C++ | user1084944 | View Answer on Stackoverflow
Solution 6 - C++ | James | View Answer on Stackoverflow
Solution 7 - C++ | Etixpp | View Answer on Stackoverflow
Solution 8 - C++ | chandra_cst | View Answer on Stackoverflow