Question

I have an application that streams through 250 MB of data, applying a simple and fast neural-net threshold function to the data chunks (which are just 2 32-bit words each). Based on the result of the (very simple) compute, the chunk is unpredictably pushed into one of 64 bins. So it's one big stream in and 64 shorter (variable length) streams out.

This is repeated many times with different detection functions.

The compute is memory bandwidth limited. I can tell this because there's no speed change even if I use a discriminant function that's much more computationally intensive.

What is the best way to structure the writes of the new streams to optimize my memory bandwidth? I am especially thinking that understanding cache use and cache line size may play a big role in this. Imagine the worst case where I have my 64 output streams and by bad luck, many map to the same cache line. Then when I write the next 64 bits of data to a stream, the CPU has to flush out a stale cache line to main memory, and load in the proper cache line. Each of those uses 64 BYTES of bandwidth... so my bandwidth limited application may be wasting 95% of the memory bandwidth (in this hypothetical worst case, though).

It's hard to even try to measure the effect, so designing ways around it is even more vague. Or am I even chasing a ghost bottleneck that somehow the hardware optimizes better than I could?

I'm using Core II x86 processors if that makes any difference.

Edit: Here's some example code. It streams through an array and copies its elements to various output arrays picked pseudo-randomly. Running the same program with different numbers of destination bins gives different runtimes, even though the same amount of computation and memory reads and writes were done:

2 output streams: 13 secs
8 output streams: 13 secs
32 output streams: 19 secs
128 output streams: 29 seconds
512 output streams: 47 seconds

The difference between using 512 versus 2 output streams is about 4X, (probably??) caused by cache line eviction overhead.

#include <stdio.h>
#include <stdlib.h>
#include <ctime>

int main()
{
  const int size=1<<19;                // # of input ints per pass (2 MB)
  int streambits=3;                    // log2 of the number of output bins
  int streamcount=1UL<<streambits;     // # of output bins
  int *instore=(int *)malloc(size*sizeof(int));
  int **outstore=(int **)malloc(streamcount*sizeof(int *));
  int **out=(int **)malloc(streamcount*sizeof(int *)); // per-bin write cursors
  unsigned int seed=0;

  for (int j=0; j<size; j++) instore[j]=j;

  for (int i=0; i<streamcount; ++i)
    outstore[i]=(int *)malloc(size*sizeof(int));

  time_t startTime=time(NULL);
  for (int k=0; k<10000; k++) {
    for (int i=0; i<streamcount; i++) out[i]=outstore[i]; // reset cursors each pass
    int *in=instore;

    for (int j=0; j<size/2; j++) {
      seed=seed*0x1234567+0x7162521;
      int bin=seed>>(32-streambits); // pseudorandom destination bin
      *(out[bin]++)=*(in++);         // copy the 2-word chunk into its bin
      *(out[bin]++)=*(in++);
    }

  }
  time_t endTime=time(NULL);
  printf("Eval time=%ld\n", (long)(endTime-startTime));
  return 0;
}

Solution

As you're writing to the 64 output bins, you'll be using many different memory locations. If the bins are filled essentially at random, that means you'll sometimes have two bins whose current write positions map to the same cache set. Not a big problem; the Core 2 L1 cache is 8-way associative, so you'd only get a problem when a 9th line maps to the same set. With just 65 live memory references at any time (1 read / 64 writes), 8-way associativity is OK.

The L2 cache is apparently 12-way associative (3/6 MB total, so 12 isn't that weird a number). So even if you do get collisions in L1, chances are pretty good you're still not hitting main memory.

However, if you don't like this, re-arrange the bins in memory. Instead of storing each bin contiguously, interleave them one cache line at a time. With 8-byte chunks and 64-byte cache lines, bin 0 would store chunks 0-7 at offsets 0-63 and chunks 8-15 at offsets 4096-4159; bin 1 would store its chunks 0-7 at offsets 64-127, and so on. This takes just a few bit shifts and masks, but the result is that each bin fills an entire cache line before moving on, so every line you evict carries a full 64 bytes of useful data.
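
As a concrete illustration, here's a hypothetical sketch of the address math, assuming 8-byte chunks, 64-byte cache lines, and 64 bins (the constants and function name are mine, not from the original post):

// Sketch of an interleaved bin layout (illustrative assumptions, see above).
// Each bin fills one full cache line, then skips ahead past the other bins' lines.
#include <cstddef>

const size_t kChunkSize     = 8;                       // 2 x 32-bit words
const size_t kLineSize      = 64;                      // cache line in bytes
const size_t kChunksPerLine = kLineSize / kChunkSize;  // 8
const size_t kBins          = 64;

// Byte offset of the n-th chunk written to 'bin' in the interleaved layout.
size_t interleaved_offset(size_t bin, size_t n)
{
    size_t line   = n / kChunksPerLine;  // which cache line of this bin (n >> 3)
    size_t within = n % kChunksPerLine;  // position inside that line   (n & 7)
    // Lines of all bins are interleaved: bin 0 line 0, bin 1 line 0, ..., bin 63 line 0,
    // then bin 0 line 1, and so on.
    return (line * kBins + bin) * kLineSize + within * kChunkSize;
}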

Another possible way to speed up your code in this case is SSE4, especially in x64 mode. You'd get 16 registers of 128 bits each, and you can optimize the read (MOVNTDQA) to limit cache pollution. I'm not sure that will help much with the read speed, though - I'd expect the Core 2 prefetcher to catch this. Reading sequential integers is the simplest access pattern possible, and any prefetcher should optimize it.
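
If you want to experiment with that, here's a toy sketch of the streaming-load intrinsic (my own example, not from the original answer; the summation is only there so the compiler keeps the loads, and note that on ordinary write-back memory MOVNTDQA behaves much like a normal aligned load):

// Illustrative only: SSE4.1 MOVNTDQA streaming loads, 16 bytes (two chunks) at a time.
#include <smmintrin.h>
#include <cstddef>

void sum_chunks(const int *in, size_t n_ints, long long *sum)  // in: 16-byte aligned
{
    __m128i acc = _mm_setzero_si128();
    for (size_t i = 0; i + 4 <= n_ints; i += 4)
        acc = _mm_add_epi32(acc, _mm_stream_load_si128((__m128i *)(in + i)));

    int tmp[4];
    _mm_storeu_si128((__m128i *)tmp, acc);
    *sum = (long long)tmp[0] + tmp[1] + tmp[2] + tmp[3];
}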

OTHER TIPS

Do you have the option of writing your output streams as a single stream with inline metadata to identify each 'chunk'? If you were to read a 'chunk', run your threshold function on it, and then, instead of writing it to a particular output stream, just write which stream it belonged to (1 byte) followed by the original data, you'd seriously reduce your thrashing.

I would not suggest this except for the fact that you have said you have to process this data many times. On each successive run, you read your input stream to get the bin number (1 byte), then do whatever you need to do for that bin on the next 8 bytes.
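
For illustration, a minimal sketch of that record format (the struct layout, function names, and callbacks are my own assumptions, not from the original answer):

// Illustrative tagged-record format: 1 byte of bin id followed by the 8-byte chunk.
// Written as a single sequential output stream; later passes read it back linearly.
#include <cstdint>
#include <cstdio>

#pragma pack(push, 1)
struct TaggedChunk {
    uint8_t  bin;       // which of the 64 bins this chunk belongs to
    uint32_t word[2];   // the original 2 x 32-bit chunk
};
#pragma pack(pop)

// First pass: classify and append to one sequential stream.
void write_pass(FILE *out, const uint32_t *in, size_t nchunks,
                uint8_t (*classify)(const uint32_t *chunk))
{
    for (size_t i = 0; i < nchunks; i++) {
        TaggedChunk rec;
        rec.bin     = classify(in + 2 * i);
        rec.word[0] = in[2 * i];
        rec.word[1] = in[2 * i + 1];
        fwrite(&rec, sizeof rec, 1, out);
    }
}

// Later passes: stream through sequentially, dispatching on the tag byte.
void read_pass(FILE *in, void (*process)(uint8_t bin, const uint32_t *chunk))
{
    TaggedChunk rec;
    while (fread(&rec, sizeof rec, 1, in) == 1)
        process(rec.bin, rec.word);
}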

As far as the caching behavior of this mechanism goes, since you are only sliding through two streams of data and, in all but the first case, writing as much data as you are reading, the hardware will give you all the help you could possibly hope for as far as prefetching, cache line optimization, etc.

If you have to add that extra byte every time you process your data, your worst-case cache behavior becomes the average case. If you can afford the storage hit, it seems like a win to me.

Here are some ideas if you really get desperate...

You might consider upgrading hardware. For streaming applications somewhat similar to yours, I've found I got a big speed boost by changing to an i7 processor. Also, AMD processors are supposedly better than Core 2 for memory-bound work (though I haven't used them recently myself).

Another solution you might consider is doing the processing on a graphics card using a language like CUDA. Graphics cards are tuned to have very high memory bandwidth and to do fast floating point math. Expect to spend 5x to 20x the development time for CUDA code relative to a straight-forward non-optimized C implementation.

You might want to explore memory-mapping the files. That way the kernel can take care of the memory management for you; the kernel usually knows best how to handle page caches. This is especially true if your application needs to run on more than one platform, as different OSes handle memory management in different ways.

There are frameworks like ACE (http://www.cs.wustl.edu/~schmidt/ACE.html) or Boost (http://www.boost.org) that allow you to write code that does memory mapping in a platform-independent way.
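
For illustration, here's a bare-bones POSIX mmap sketch (the file name is hypothetical; ACE and Boost essentially wrap this mechanism and its Windows equivalent behind a portable interface):

// Illustrative POSIX mmap of the input file; the kernel handles paging.
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>

int main()
{
    const char *path = "input.dat";            // hypothetical input file
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    fstat(fd, &st);

    void *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }
    const uint32_t *data = (const uint32_t *)p;

    // Hint that we'll read straight through, so the kernel can prefetch aggressively.
    madvise(p, st.st_size, MADV_SEQUENTIAL);

    // ... stream through 'data' chunk by chunk here ...

    munmap(p, st.st_size);
    close(fd);
    return 0;
}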

The real answer for situations like this is to code up several approaches and time them, which you have obviously done. All folks like me can do is suggest other approaches to try.

For example: even in the absence of cache thrashing (your output streams mapping to the same cache lines), if you are writing size ints, with size = 1<<19 and sizeof(int) = 4 - i.e. if you are writing 2 MB of data per pass - you are actually reading 2 MB and then writing 2 MB. That's because if your data is in ordinary WB (WriteBack) memory on an x86 processor, to write to a line you first have to read the old copy of the line - even though you are going to throw the data you read away.

You can eliminate this unnecessary RFO read traffic by (a) using WC memory (probably a pain to set up) or (b) using SSE streaming stores, aka NT (Non-Temporal) stores: MOVNT* - MOVNTQ, MOVNTPS, etc. (There's also a MOVNTDQA streaming load, although it's more painful to use.)
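
For example, a small sketch using the SSE2 intrinsics for the streaming stores (my own code, not from the original answer; the destination is assumed to be 16-byte aligned):

// Illustrative SSE2 non-temporal (streaming) stores: write 16 bytes at a time
// straight toward memory, bypassing the RFO read of the destination lines.
#include <emmintrin.h>
#include <cstddef>

void copy_stream_nt(const int *src, int *dst, size_t n_ints)  // dst 16-byte aligned
{
    size_t i = 0;
    for (; i + 4 <= n_ints; i += 4) {
        __m128i v = _mm_loadu_si128((const __m128i *)(src + i)); // normal load
        _mm_stream_si128((__m128i *)(dst + i), v);               // MOVNTDQ store
    }
    for (; i < n_ints; i++)         // scalar tail
        dst[i] = src[i];
    _mm_sfence();                   // make the streamed data globally visible
}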

I rather like this article I just found by googling: http://blogs.fau.de/hager/2008/09/04/a-case-for-the-non-temporal-store/

Now: the MOVNT* instructions apply to WB memory but work like WC memory, using a small number of write combining buffers. The actual number varies by processor model: there were only 4 on the first Intel chip to have them, the P6 (aka Pentium Pro). Ooof... Bulldozer's 4K WCC (Write Combining Cache) basically provides 64 write combining buffers, per http://semiaccurate.com/forums/showthread.php?t=6145&page=40, although there are only 4 classic WC buffers. But http://www.intel.com/content/dam/doc/manual/64-ia-32-architectures-optimization-manual.pdf says that some processors have 6 WC buffers, and some 8. Anyway... there are a few, but not that many. Usually not 64.

But here is something that you could try: implement write combining yourself.

(a) Write to a single set of 64 buffers (one per stream), each 64 B in size (one cache line), or maybe 128 or 256 B. Let these buffers live in ordinary WB memory. You can access them with ordinary stores, although if you can use MOVNT*, great.

(b) When one of these buffers gets full, copy it as a burst to the place in memory where the stream really is supposed to go, using MOVNT* streaming stores.
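
A minimal sketch of that scheme, under my own assumptions about sizes and names (one cache-line staging buffer per stream, flushed with streaming stores when full):

// Illustrative software write combining: one-cache-line staging buffers in
// ordinary WB memory; a full buffer is burst out to its stream with MOVNT stores.
#include <emmintrin.h>
#include <cstddef>
#include <cstdint>
#include <cstring>

const size_t kLineBytes  = 64;   // cache line size
const size_t kChunkBytes = 8;    // 2 x 32-bit words

struct StreamWC {
    alignas(64) uint8_t buf[kLineBytes];  // staging buffer, stays hot in L1 (C++11 alignas)
    size_t   fill;                        // bytes currently staged
    uint8_t *dst;                         // next write position in the real stream
};

static void flush_line(StreamWC &s)
{
    // Burst the staged line to the stream's real location with streaming stores.
    const __m128i *src = (const __m128i *)s.buf;
    __m128i       *dst = (__m128i *)s.dst;            // assumed 16-byte aligned
    for (size_t i = 0; i < kLineBytes / 16; i++)
        _mm_stream_si128(dst + i, _mm_load_si128(src + i));
    s.dst += kLineBytes;
    s.fill = 0;
}

// Append one 8-byte chunk to stream 'bin'.
void put_chunk(StreamWC *streams, int bin, const uint32_t chunk[2])
{
    StreamWC &s = streams[bin];
    memcpy(s.buf + s.fill, chunk, kChunkBytes);       // ordinary cached store
    s.fill += kChunkBytes;
    if (s.fill == kLineBytes)
        flush_line(s);                                // full line: write-combine it out
}

// At the end of the pass, flush any stream with fill > 0 (a partial-flush variant),
// then _mm_sfence() before reading the output.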

This will end up doing:

* N bytes stored to the temporary buffers, hitting the L1 cache
* 64*64 bytes read to bring the temporary buffers into cache (roughly once, since they stay resident in L1)
* N bytes read from the temporary buffers, hitting the L1 cache
* N bytes written via streaming stores - basically going straight to memory

I.e. N bytes of cache-hit reads + N bytes of cache-hit writes + N bytes of cache-miss writes (the streaming stores),

versus N bytes of cache-miss (RFO) reads + N bytes of cache-miss writes.

Reducing the N bytes of cache-miss reads may more than make up for the extra overhead.

Licensed under: CC-BY-SA with attribution