Question

I have a large number of instances of a C structure like this:

struct mystruct
{
    /* ... */
    unsigned flag: 1;
    /* ... */
};
  • flag is initially 0 but must be 1 on exit from a certain function.

The simplest implementation is:

void set_flag(struct mystruct *sp)
{
    sp->flag = 1U;
}

But what is the likely effect on performance of doing this instead:

void set_flag(struct mystruct *sp)
{
    if (sp->flag == 0U)
    {
        sp->flag = 1U;
    }
}

I am hoping to avoid a write to main memory. The first version always does the write, the second version only performs the write if the flag was not already set, but in the vast majority of cases, the flag will already be set.

What other factors (e.g. branch prediction) are likely to affect performance?

I have seen a small speed increase so far, which I hope will become more significant as the data set becomes larger.

Is there a risk of this change making the program slower for large data sets, and if so in what circumstances might this happen?


Solution

The test before the set does make a difference, but how much depends on your use case.

The data will end up in a cache line in either case, whether you just write or test-and-set.

However, there is a difference in whether the cache line ends up tagged as dirty (i.e. modified) or clean. Dirty cache lines have to be written back to main memory, while clean cache lines can just be forgotten and filled with new data.

Now consider that your code processes huge amounts of data, and you access each chunk of data only once or twice. If so, you can assume that most of the memory accesses are cache misses. What happens when a cache miss occurs and the majority of your cache lines are dirty?

They have to be written back to main memory before new data can be loaded into the line. This is slower than just forgetting the content of a cache line. It also doubles the memory bandwidth used between the cache and main memory.

That may not make a difference for one CPU core, since memory is fast these days, but another core will (hopefully) be doing some other work as well. You can be sure that the other core will execute everything a tad faster if the bus is not busy moving cache lines in and out.

In short: keeping your cache lines clean will halve the bandwidth requirement and make cache misses a tad cheaper.

Regarding the branch: sure, it's costly, but a cache miss is much worse! Also, if you're lucky, the CPU will use its out-of-order execution features to hide the cost of the branch behind the cache misses.

If you really want to get the best possible performance out of this code, and most of your accesses are cache misses, you have two options:

  • Bypass the cache: the x86 architecture has non-temporal loads and stores for this purpose. They're hidden somewhere in the SSE instruction sets and can be used from C via intrinsics.

  • (Only for experts): Replace the test-and-set function with a few lines of inline assembler using the CMOV (conditional move) instruction. This will not only keep your cache lines clean but also avoid the branch. However, CMOV is a slow instruction and will only outperform a branch if the branch cannot be predicted, so you'd better benchmark your code.
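
As a sketch of the first option: on x86 with SSE2, the `_mm_stream_si32` intrinsic performs a non-temporal store that writes around the cache. The flag is widened to a full `int` here, since the intrinsic works on whole words rather than bit-fields; the feature-test guard and plain-store fallback are assumptions for portability:

```c
#include <stdint.h>

#if defined(__SSE2__)
#include <emmintrin.h>
#endif

/* Widened flag: non-temporal stores operate on whole words, not bit-fields. */
struct mystruct {
    int flag;
};

void set_flag_nt(struct mystruct *sp)
{
#if defined(__SSE2__)
    /* Non-temporal store: bypasses the cache, so the line stays clean.
       Most useful when the structure will not be touched again soon. */
    _mm_stream_si32(&sp->flag, 1);
#else
    sp->flag = 1;   /* portable fallback: an ordinary store */
#endif
}
```

Note that non-temporal stores are weakly ordered; if another thread must observe the flag, an `_mm_sfence()` is needed after the streaming stores.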

OTHER TIPS

This is an interesting question, and Nils' answer about cache lines is definitely great advice.

I'd like to emphasize the importance of profiling code to measure real performance -- can you measure how frequently that flag will already be set in the data that you encounter? Performance could change a lot depending on the answer.

Just for fun, I used your code to run a little comparison of set versus test-then-set on a 50-million element array filled with various proportions of 1's. Here's a graph:

[Graph: comparison of set vs. test-then-set (source: natekohl.net)]

This is just a toy example, of course. But note the non-linear performance -- which I wasn't expecting -- and that test-then-set becomes faster than plain set when the array is almost entirely filled with 1's.
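
A minimal harness along those lines might look like the sketch below (the array size, fill proportions, and timing method are assumptions; the original benchmark's exact setup isn't shown):

```c
#include <stdlib.h>
#include <time.h>

struct mystruct {
    unsigned flag : 1;
};

static void set_plain(struct mystruct *sp)  { sp->flag = 1U; }
static void set_tested(struct mystruct *sp) { if (sp->flag == 0U) sp->flag = 1U; }

/* Time one strategy over the whole array; `fraction` is the share of
   elements whose flag is pre-set to 1 before the timed loop runs. */
static double run(struct mystruct *a, size_t n, double fraction,
                  void (*setter)(struct mystruct *))
{
    for (size_t i = 0; i < n; ++i)
        a[i].flag = ((double)rand() / RAND_MAX < fraction) ? 1U : 0U;

    clock_t t0 = clock();
    for (size_t i = 0; i < n; ++i)
        setter(&a[i]);
    return (double)(clock() - t0) / CLOCKS_PER_SEC;
}
```

Calling `run` for several fill fractions with each setter and printing the times reproduces the shape of the comparison above.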

These are my interpretations of your requirements:

  • you have the flag initialized separately
  • it is set only once (to 1) and not reset after that
  • But, this set attempt will be made many times on the same flag
  • And, you have a lot of these flag instances (each needing the same kind of processing)

Assuming that,

  • space optimization is weighted much lower than time optimization,

I suggest the following things.

  • Firstly, on 32-bit systems it helps to use 32-bit integers if you are worried about access times
  • If you skip a check on the flag 'word', the write will be quite fast. But, given that you have a very large number of flags which you will keep checking and setting if not already set, it would be better to keep the conditional check in.
  • But, having said that, if your platform performs operations in parallel (for example, a disk write can usually be issued in parallel with your code execution), it may be worthwhile to skip the check.

This optimization will likely not cause a speed decrease when moving to a larger dataset.

Cache thrashing when reading values will be the same, branch prediction penalties will also be the same, and these are the key factors to optimize for here.

Branch prediction stores history per branch instruction, so it doesn't care how many instances you have, as long as you branch on them with instructions at different addresses (an inlined function, for example). If you have a single function body (not inlined), you will have one branch instruction for all instances; this will hamper branch prediction, making it miss more often and increasing the penalties.
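
To make inlining possible, the definition can live in a header as `static inline` (a sketch; the compiler still decides whether to actually inline each call site):

```c
struct mystruct {
    unsigned flag : 1;
};

/* An inlinable definition: each inlined call site gets its own branch
   instruction, so the branch predictor keeps separate history per site. */
static inline void set_flag(struct mystruct *sp)
{
    if (sp->flag == 0U)
        sp->flag = 1U;
}
```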

You could always profile, but I am pretty sure the first version is both faster and less obscure.

Either approach will require that the data is loaded into the cache, so your only saving will be a difference between a read/write and a write.

I don't see how this change could make your code slower with larger data sets, so you're likely safe enough on that front.

It smells a little like premature optimisation to me (unless your profiling has identified this as a bottleneck).

As with all things performance related the best way to be sure of the effect of a code change is to measure it. You should be able to create a large amount of test data relatively easily.

If you're really worried about the time performance, change the flag to a full int instead of a bit-field. Then setting it is just a write and not a read-modify-write as with bit-fields.
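
To illustrate the difference: with a bit-field the compiler must typically load the containing word, OR in the bit, and store the word back, whereas a plain `int` flag compiles to a single store (the struct and function names here are illustrative):

```c
struct with_bitfield {
    unsigned flag : 1;   /* setting this is usually load, OR, store */
};

struct with_int {
    int flag;            /* setting this is a single store */
};

void set_bitfield_flag(struct with_bitfield *sp) { sp->flag = 1U; }
void set_int_flag(struct with_int *sp)           { sp->flag = 1; }
```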

But as already pointed out, this smells of micro-optimization.

The test before the set doesn't make any sense: the code without the test is cleaner and also a bit faster.

As a side note, it makes sense to inline functions like this, because the overhead of the function call is bigger than the function body itself, although an optimising compiler should do it without a second thought.

Since nobody else said it, I will.

Why are you using a bit-field at all? The layout will differ from compiler to compiler, so they are useless for interfaces. They may, or may not, be more space efficient; the compiler might just decide to shove them into a 32-bit field so as to pad things efficiently. There are no guarantees they are faster, and in fact they are likely to be slower.

I've banned their usage at work. Unless someone can give me a convincing reason that they provide any extra capability, it's not worth playing with them.

Licensed under: CC-BY-SA with attribution