Question

I was checking out the Java 8 standard library source code just out of curiosity, and found this in java/lang/Object.java. There are three methods named wait:

  • public final native void wait(long timeout): This is the core of all the wait methods and has a native implementation.
  • public final void wait(): Just calls wait(0) (see the sketch after this list).
  • And then there is public final void wait(long timeout, int nanos).
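
For reference, here is a simplified reconstruction of how the first two declarations relate in java/lang/Object.java; it is a paraphrase, not the verbatim JDK source:

// Simplified reconstruction, not the verbatim JDK 8 source.
public final native void wait(long timeout) throws InterruptedException;

public final void wait() throws InterruptedException {
    wait(0);  // a timeout of 0 means "wait until notified", with no time limit
}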

The JavaDoc for that particular method tells me:

This method is similar to the wait method of one argument, but it allows finer control over the amount of time to wait for a notification before giving up. The amount of real time, measured in nanoseconds, is given by:

1000000*timeout+nanos
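
Taken literally, that formula promises true nanosecond granularity. As a quick illustration (the timeout and nanos values here are mine, not from the JavaDoc):

long timeout = 2;                               // milliseconds
int nanos = 250_000;                            // additional nanoseconds
long requested = 1_000_000L * timeout + nanos;  // 2_250_000 ns, i.e. 2.25 ms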

But this is how the method achieves "finer control over the amount of time to wait":

if (nanos >= 500000 || (nanos != 0 && timeout == 0)) {
    timeout++;
}

wait(timeout);

So this method basically does a crude rounding of nanoseconds to milliseconds: anything below 0.5 ms is simply dropped, unless timeout is 0, in which case even a single nanosecond is rounded up to a full millisecond.
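
To make the effect concrete, here is a small standalone sketch (my own code, not from the JDK) that applies the same rounding rule as the quoted body and prints the millisecond value that would actually be passed to wait(long):

// Standalone demo of the rounding rule in wait(long, int); not JDK code.
public class WaitRoundingDemo {
    static long effectiveMillis(long timeout, int nanos) {
        if (nanos >= 500000 || (nanos != 0 && timeout == 0)) {
            timeout++;
        }
        return timeout;  // what ultimately gets passed to wait(long)
    }

    public static void main(String[] args) {
        System.out.println(effectiveMillis(5, 499_999)); // 5 -> the 499999 ns are dropped
        System.out.println(effectiveMillis(5, 500_000)); // 6 -> rounded up a full millisecond
        System.out.println(effectiveMillis(0, 1));       // 1 -> even 1 ns becomes 1 ms
    }
}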

Is this piece of code bad/unnecessary, or am I missing some unseen virtue of declaring this method (and its no-argument cousin) the way they are?

Solution

I think what happened is that the designers expected sub-millisecond accuracy to be just around the corner and were thinking ahead.

Since a new method was needed anyway, instead of going for microseconds the designers decided to go straight to nanoseconds: not because there was a need for that level of accuracy at the time, but to be ready for the future.

The implementation that rounds the time to milliseconds is there for compatibility: you can take advantage of the new method on faster computers and JVMs, while the same code will still run on older JVMs that map the call to millisecond accuracy.
