Question

Inspired by this question, I wrote the test:

public class Main {

    private static final long TEST_NUMBERS = 5L;

    private static final long ITERATION_NUMBER = 100000L;

    private static long value;

    public static void main(final String [] args) throws Throwable {
        for(int i=0; i<TEST_NUMBERS; i++) {
            value = 0;
            final Thread incrementor = new Thread(new Incrementor());
            final Thread checker = new Thread(new Checker());
            incrementor.start();
            checker.start();
            checker.join();
            incrementor.join();
        }
    }

    static class Incrementor implements Runnable {
        public void run() {
            for(int i=0; i<ITERATION_NUMBER; i++){
                ++value;
            }
        }
    }

    static class Checker implements Runnable {
        public void run() {
            long nonEqualsCount = 0;
            for(int i=0; i<ITERATION_NUMBER; i++){
                if(value != value) {
                    ++nonEqualsCount;
                }
            }
            System.out.println("nonEqualsCount = " + nonEqualsCount);
        }
    }
}

In the common case, this program prints:

nonEqualsCount = 12 // or some other non-zero value
nonEqualsCount = 0;
nonEqualsCount = 0;
nonEqualsCount = 0;
nonEqualsCount = 0;

First: I explain this behaviour by the presence of the JIT compiler: after "warm-up", the JIT compiler caches the value of the non-volatile field for each thread. Is that right?

Second: Whether the first explanation is right or not, how can I verify this?

P.S. I know about the PrintAssembly option.

Update: environment: Windows 7 64-bit, JDK 1.7.0_40-b43 (HotSpot).


Solution 2

What you see is probably an artifact of the JIT. Before it kicks in, the Java bytecode is interpreted, which means there are a lot of chances for the checker thread to get interrupted during the comparison.

Also, since more code is executed, there is a higher chance that the CPU caches will need flushing.

Once the code is optimized by the JIT, it will probably use 64-bit operations, and since only a small amount of code is being executed, the caches won't be flushed to main memory anymore, which means that the threads have no chance to see the changes made by the other one.

OTHER TIPS

Incrementing a long variable is not atomic (a long is 64 bits wide). In the condition (value != value) it can happen that between the two reads of value, the incrementing thread changes value. volatile is about visibility: the values of non-volatile variables can be stale. So your first conclusion seems to be correct.
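To see the difference, one simple variation (only a sketch, not from the question) is to declare the shared field volatile, which guarantees atomic 64-bit reads/writes and cross-thread visibility, or to use java.util.concurrent.atomic.AtomicLong for an atomic increment:

// Only the field declaration of the question's Main class changes:
private static volatile long value;   // volatile long: reads and writes are atomic and visible

// ++value is still a non-atomic read-modify-write. For an atomic increment,
// java.util.concurrent.atomic.AtomicLong could be used instead:
private static final java.util.concurrent.atomic.AtomicLong atomicValue =
        new java.util.concurrent.atomic.AtomicLong();
// atomicValue.incrementAndGet();   // atomic increment
// atomicValue.get();               // atomic, up-to-date read

In practice HotSpot also does not fold two volatile reads of value into a single read, so with a volatile field the comparison keeps being performed on every pass.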

On the first pass of your program these statements may be correct:

This code may be able to demonstrate that the increment operation (++value) on a variable of type long (and also int) is not atomic. It may furthermore demonstrate that the != comparison is not thread-safe if it is not used within a synchronized block. But that has nothing to do with the data type used.
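A minimal sketch of that first point (hypothetical class and variable names, not taken from the question): two threads increment the same plain long field, and lost updates show up as a final value below the expected total.

public class LostUpdateDemo {

    private static final int ITERATIONS = 100000;

    private static long counter;   // plain long, no synchronization

    public static void main(final String[] args) throws InterruptedException {
        final Runnable incrementTask = new Runnable() {
            public void run() {
                for (int i = 0; i < ITERATIONS; i++) {
                    ++counter;   // read-modify-write, not atomic
                }
            }
        };
        final Thread t1 = new Thread(incrementTask);
        final Thread t2 = new Thread(incrementTask);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Typically prints a value below 200000, because increments get lost
        // when both threads read the same old value before writing back.
        System.out.println("counter = " + counter + " (expected " + (2 * ITERATIONS) + ")");
    }
}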

Your observation that something changed after the first pass is also correct, but, for instance, if you are using the Oracle/Sun JVM, then the implementation of this very JIT compiler (the "HotSpot engine") is dependent on the technical architecture it is running on.

So it is hard to say and to verify that the JIT compiler is responsible for it. Trying to deduce implementation details of the JIT compiler/HotSpot engine using this approach is quite an empirical research method. Your observation may, for instance, vary when switching from Solaris to Windows.

Here is a link to implementation details of the Hotspot engine: http://www.oracle.com/technetwork/java/javase/tech/index-jsp-136373.html

To produce further empirical results you may, for instance, try to revert the JVM to run in classic mode or to reduce the amount of optimization in the JVM (client mode?). If the behaviour changes, that may be another indicator for the correctness of your theory.

Anyways: I am curious what your findings are nevertheless :-)

While you are right that this is caused by the JIT, it has nothing to do with volatile.

Some JITs do internal optimizations on the fly and remove unnecessary code to speed things up, and this is exactly what is happening here. The JIT determines that the comparison value != value is always false and removes the whole block of code. It can furthermore determine that the for loop is now running empty and removes the entire loop as well. As a result, this will be the final optimized checker class:

public void run() {
  System.out.println("nonEqualsCount = 0");
}

You can verify this by measuring the time this thread takes to execute on each pass. On the first pass it might take some time to finish; on later passes little more than the println itself remains.
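A sketch of that measurement, wrapping the existing checker thread in the question's main() loop with System.nanoTime() (the variable names are just for illustration):

// In the question's main() loop, around the existing checker thread
// (the incrementor thread is started as before):
final long start = System.nanoTime();
final Thread checker = new Thread(new Checker());
checker.start();
checker.join();
final long elapsed = System.nanoTime() - start;
System.out.println("pass took " + (elapsed / 1000L) + " us");
// If the JIT has eliminated the comparison loop, later passes should be
// dramatically faster than the first one; most of the remaining time is
// thread start-up and the println itself.

Running with -XX:+PrintCompilation additionally logs when Checker.run() gets compiled, which makes the moment of the optimization visible.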

Note: As a general rule you cannot expect the JIT to do anything. Based on the actual implementation, hardware and other factors it might or might not optimize your code. And if it does optimize, the result is equally impossible to determine, as for example code might be optimized much earlier on slow hardware than on fast hardware.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow