Question

I was looking through a Findbugs report on my code base and one of the patterns that was triggered was for an empty synchronized block (i.e. synchronized (var) {}). The documentation says:

Empty synchronized blocks are far more subtle and hard to use correctly than most people recognize, and empty synchronized blocks are almost never a better solution than less contrived solutions.

In my case it occurred because the contents of the block had been commented out, but the synchronized statement was still there. In what situations could an empty synchronized block achieve correct threading semantics?


Solution

An empty synchronized block will wait until nobody else is holding that lock. That may be what you want, but because you haven't protected the subsequent code inside the synchronized block, nothing stops somebody else from modifying whatever it was you were waiting for while you run that subsequent code. That's almost never what you want.
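As a minimal sketch of that pitfall (the sharedList field and method names here are assumptions for illustration), the empty block waits for the lock to become free but releases it again immediately, so the check-then-act that follows is unprotected:

import java.util.ArrayList;
import java.util.List;

class EmptySyncPitfall
{
    private final List<String> sharedList = new ArrayList<>();

    void broken()
    {
        synchronized( sharedList ) {} // Waits until no other thread holds the lock...
        // ...but the lock is already released here, so another thread may modify
        // sharedList before (or while) the next lines execute.
        if( !sharedList.isEmpty() )
        {
            System.out.println( sharedList.get(0) ); // May race with a concurrent remove
        }
    }

    void correct()
    {
        synchronized( sharedList ) // Hold the lock around the whole check-then-act
        {
            if( !sharedList.isEmpty() )
            {
                System.out.println( sharedList.get(0) );
            }
        }
    }
}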

OTHER TIPS

The earlier answers fail to underline the most useful thing about empty synchronized blocks: they can ensure the visibility of variable changes and other actions across threads. As jtahlborn indicates, synchronization imposes a "memory barrier" that forces the compiler to flush and refresh its caches. But I couldn't find where SnakE discusses this, so I wrote an answer myself.

int variable;

void test() // This code is INCORRECT
{
    new Thread( () ->  // A
    {
        variable = 9;
        for( ;; )
        {
            // Do other stuff
        }
    }).start();

    new Thread( () ->  // B
    {
        for( ;; )
        {
            if( variable == 9 ) System.exit( 0 );
        }
    }).start();
}

The above program is incorrect. The value of the variable might be cached locally in thread A or B or both. So B might never read the value of 9 that A writes, and might therefore loop forever.

Make a variable change visible across threads by using empty synchronized blocks

One possible correction is to add a volatile (effectively "no cache") modifier to the variable. Sometimes this is inefficient, however, because it totally forbids caching of the variable. Empty synchronized blocks, on the other hand, do not forbid caching. All they do is force the caches to synchronize with main memory at certain critical points. For example: *

int variable;

void test() // Corrected version
{
    new Thread( () ->  // A
    {
        variable = 9;
        synchronized( o ) {} // Flush to main memory
        for( ;; )
        {
            // Do other stuff
        }
    }).start();

    new Thread( () ->  // B
    {
        for( ;; )
        {
            synchronized( o ) {} // Refresh from main memory
            if( variable == 9 ) System.exit( 0 );
        }
    }).start();
}

final Object o = new Object();
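
For comparison, here is a sketch of the volatile alternative mentioned above; it makes the same program correct without the empty blocks, at the cost of forbidding any caching of the variable:

volatile int variable;

void test() // Corrected with volatile instead of empty synchronized blocks
{
    new Thread( () ->  // A
    {
        variable = 9; // Volatile write: visible to all subsequent volatile reads
        for( ;; )
        {
            // Do other stuff
        }
    }).start();

    new Thread( () ->  // B
    {
        for( ;; )
        {
            if( variable == 9 ) System.exit( 0 ); // Volatile read
        }
    }).start();
}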

How the memory model guarantees visibility

Both threads must synchronize on the same object in order to guarantee visibility. This guarantee rests on the Java memory model, in particular on the rule that an "unlock action on monitor m synchronizes-with all subsequent lock actions on m" and thereby happens-before those actions. So A's unlock of o's monitor at the tail of its synchronized block happens-before B's subsequent lock at the head of its block. (Note, it's this strange tail-head order of the relation that explains why the bodies can be empty.) Given also that A's write precedes its unlock and B's lock precedes its read, the relation must extend to cover both write and read: write happens-before read. It's this crucial, extended relation that makes the revised program correct in terms of the memory model.
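
Laid out as an event trace (assuming an iteration of B's loop in which B's lock follows A's unlock; in earlier iterations no guarantee applies and B simply retries):

// A: variable = 9        write
// A: lock( o )           enter the empty synchronized block
// A: unlock( o )         leave it
//       ... unlock synchronizes-with the subsequent lock on the same monitor o ...
// B: lock( o )           enter the empty synchronized block
// B: unlock( o )         leave it
// B: read variable       guaranteed to see 9
//
// Program order gives: write happens-before A's unlock, and B's lock happens-before the read.
// The monitor rule gives: A's unlock happens-before B's lock.
// By transitivity: write happens-before read.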

I think this is the most important use for empty synchronized blocks.


   * I speak as though it were a matter of processor caching because I think that’s a helpful way of viewing it. In truth, as Aleksandr Dubinsky has commented, ‘all modern processors are cache-coherent. The happens-before relationship is more about what the compiler is allowed to do rather than the CPU.’

It used to be the case that the specification implied certain memory barrier operations occurred. However, the spec has since changed, and the original spec was never implemented correctly anyway. An empty synchronized block may be used to wait for another thread to release the lock, but coordinating that the other thread has already acquired the lock would be tricky (see the sketch below).
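
A minimal sketch of that waiting use follows; the CountDownLatch here is one assumed way (not part of the original answer) to coordinate that the worker has already acquired the lock before the main thread blocks on it:

import java.util.concurrent.CountDownLatch;

class WaitForRelease
{
    static final Object lock = new Object();
    static final CountDownLatch acquired = new CountDownLatch( 1 );

    public static void main( String[] args ) throws InterruptedException
    {
        new Thread( () ->
        {
            synchronized( lock )
            {
                acquired.countDown(); // Signal: the lock is now held
                try { Thread.sleep( 1000 ); } // Simulate work while holding the lock
                catch( InterruptedException e ) { Thread.currentThread().interrupt(); }
            }
        }).start();

        acquired.await();        // Ensure the worker holds the lock first
        synchronized( lock ) {}  // Block until the worker releases it
        System.out.println( "Worker has released the lock" );
    }
}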

Synchronizing does a little more than just waiting; while it is inelegant coding, this could achieve the effect required.

From http://www.javaperformancetuning.com/news/qotm030.shtml, describing what happens when a thread calls the article's synchronized getter geti3() (sketched after the list):

  1. The thread acquires the lock on the monitor for object this (assuming the monitor is unlocked, otherwise the thread waits until the monitor is unlocked).
  2. The thread memory flushes all its variables, i.e. it has all of its variables effectively read from "main" memory (JVMs can use dirty sets to optimize this so that only "dirty" variables are flushed, but conceptually this is the same. See section 17.9 of the Java language specification).
  3. The code block is executed (in this case setting the return value to the current value of i3, which may have just been reset from "main" memory).
  4. (Any changes to variables would normally now be written out to "main" memory, but for geti3() we have no changes.)
  5. The thread releases the lock on the monitor for object this.
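
For reference, the kind of synchronized getter those steps describe looks roughly like this (the field and method names are taken from the linked article):

class Example
{
    private int i3;

    // Entering the method acquires this object's monitor (step 1), the thread's
    // view of variables is refreshed from main memory (step 2), the body runs
    // (step 3), and the monitor is released on return (step 5).
    public synchronized int geti3()
    {
        return i3;
    }
}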

For an in-depth look into Java's memory model, have a look at this video from Google's 'Advanced topics in programming languages' series: http://www.youtube.com/watch?v=1FX4zco0ziY

It gives a really nice overview of what the compiler can (often in theory, but sometimes in practice) do to your code. Essential stuff for any serious Java programmer!

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow