Question

I have two arrays: A with N_A random integers and B with N_B random integers between 0 and (N_A - 1). I use the numbers in B as indices into A in the following loop:

for(i = 0; i < N_B; i++) {
    sum += A[B[i]];
}

Experimenting on an Intel i7-3770 with N_A = 256 million and N_B = 64 million, this loop takes only 0.62 seconds, which corresponds to an average of about 9 nanoseconds per access.

Since this latency seems too low for accesses that should mostly miss the caches, I was wondering whether the hardware prefetcher is playing a role. Can someone offer an explanation?


Solution 2

The CPU charges ahead in the instruction stream and will juggle multiple outstanding loads at once. The stream looks like this:

load b[0]
load a[b[0]]
add
loop code

load b[1]
load a[b[1]]
add
loop code

load b[2]
load a[b[2]]
add
loop code

...

The iterations are serialized only by the loop code, which executes quickly. The loads themselves can all run concurrently; concurrency is limited only by the number of outstanding loads the CPU can track.

I suspect you wanted to benchmark random, unpredictable, serialized memory loads. That is actually quite hard to do on a modern CPU. Try to introduce an unbreakable dependency chain:

int lastLoad = 0;
for(i = 0; i < N_B; i++) {
    int load = A[B[i] + (lastLoad & 1)]; // be sure to make A one element bigger
    sum += load;
    lastLoad = load;
}

This forces each load to complete before the address of the next load can be computed.
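A fuller version of this idea might look like the following sketch. The array sizes, random data, and timing harness are my own assumptions for illustration, not part of the original answer:

```c
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* The serialized loop from above, as a function: each load's address
   depends on the value returned by the previous load, so the CPU
   cannot overlap the loads. */
long long dependent_sum(const int *A, const int *B, int n_b)
{
    long long sum = 0;
    int lastLoad = 0;
    for (int i = 0; i < n_b; i++) {
        int load = A[B[i] + (lastLoad & 1)];
        sum += load;
        lastLoad = load;
    }
    return sum;
}

/* Allocate A with one extra element (the loop may read A[B[i] + 1]),
   fill both arrays with random data, and time the serialized loop.
   Returns the average nanoseconds per load. */
double time_dependent_loads_ns(int n_a, int n_b)
{
    int *A = malloc(((size_t)n_a + 1) * sizeof *A);
    int *B = malloc((size_t)n_b * sizeof *B);
    if (!A || !B) { free(A); free(B); return -1.0; }

    for (int i = 0; i <= n_a; i++) A[i] = rand();
    for (int i = 0; i < n_b; i++)  B[i] = rand() % n_a;

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    long long sum = dependent_sum(A, B, n_b);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("sum=%lld, %.1f ns per load\n", sum, ns / n_b);

    free(A);
    free(B);
    return ns / n_b;
}
```

With arrays much larger than the last-level cache, the reported time per load should jump from a few nanoseconds to something close to full DRAM latency.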

OTHER TIPS

The HW prefetcher can see through your first level of indirection (B[i]) since these elements are sequential. It is capable of issuing multiple prefetches ahead, so you can assume that the average access into B hits the caches (either L1 or L2). However, there is no way the prefetcher can predict the random addresses (the data stored in B) and prefetch the correct elements from A. You still have to perform a memory access on almost every access to A (disregarding occasional lucky cache hits due to reuse of lines).

The reason you see such low latency is that the accesses into A are not serialized: the CPU can access multiple elements of A simultaneously, so the access times do not simply accumulate. In effect you are measuring memory bandwidth here (how long it takes to access 64M elements overall), not memory latency (how long it takes to access a single element).
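As a rough sanity check (my own arithmetic, assuming nearly every access to A misses and pulls in a full 64-byte cache line), the implied throughput is:

```latex
% 64M line fetches in 0.62 s:
\frac{64 \times 10^{6} \ \text{lines} \times 64 \ \text{B/line}}{0.62 \ \text{s}}
  \approx 6.6 \ \text{GB/s}
```

That is a plausible sustained DRAM throughput for a single core, which is consistent with the loop being bandwidth-bound rather than latency-bound.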

A reasonable "snapshot" of the CPU's memory unit would show several outstanding requests: a few accesses into B[i], B[i+64], ... (the intermediate accesses simply get merged, as each request fetches a 64-byte line), most of them prefetches for future values of i, intermixed with random accesses to elements of A according to the previously fetched elements of B.

To measure latency, you need each access to depend on the result of the previous one, e.g., by making the content of each element in A the index of the next access.
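A common way to build such a dependency is a pointer-chasing benchmark. The sketch below is my own illustration of the idea: the table is filled with a random single-cycle permutation (via Sattolo's algorithm), so each loaded value is the index of the next load and the loads are fully serialized:

```c
#include <stdlib.h>

/* Fill next[] with a random single-cycle permutation of 0..n-1 using
   Sattolo's algorithm (like Fisher-Yates, but j is drawn from [0, i)
   instead of [0, i]), so the chase visits every slot exactly once
   before returning to its starting point. */
void make_cycle(int *next, int n)
{
    for (int i = 0; i < n; i++)
        next[i] = i;
    for (int i = n - 1; i > 0; i--) {
        int j = rand() % i;              /* j in [0, i) */
        int tmp = next[i];
        next[i] = next[j];
        next[j] = tmp;
    }
}

/* Chase the chain for `steps` loads. Each load's address is the value
   of the previous load, so the CPU cannot overlap them; timing this
   loop measures per-access latency, not bandwidth. */
int chase(const int *next, int steps)
{
    int idx = 0;
    for (int i = 0; i < steps; i++)
        idx = next[idx];
    return idx;
}
```

Time `chase` over a table much larger than the last-level cache and divide by the step count to get an estimate of DRAM access latency.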

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow