Question

The standard implementation of Math.abs(double) (as implemented by Oracle) is given by

public static double abs(double a) {
  return (a <= 0.0D) ? 0.0D - a : a;
}

Isn't it faster simply to set the one bit encoding the sign of the number to zero (or one)? I suppose that there is only one bit encoding the sign, and that it is always the same bit, but I may be wrong about this.

Or are our computers generally unfit to perform operations on single bits with atomic instructions?

If a faster implementation is possible, can you give it?
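For what it's worth, the sign-bit trick described above can be written in portable Java, since Double.doubleToRawLongBits and Double.longBitsToDouble expose the IEEE 754 bit pattern. A minimal sketch (class and method names are mine, not from any library):

```java
public class BitAbs {
    // Hypothetical bit-twiddling abs: clear bit 63, the sign bit of an
    // IEEE 754 double, by masking the raw representation.
    static double bitAbs(double a) {
        return Double.longBitsToDouble(
                Double.doubleToRawLongBits(a) & 0x7FFFFFFFFFFFFFFFL);
    }

    public static void main(String[] args) {
        System.out.println(bitAbs(-2.5)); // prints 2.5
        System.out.println(bitAbs(-0.0)); // prints 0.0
    }
}
```

Whether this is actually faster than the branch-based version after JIT compilation is exactly what the question is probing; the sketch only shows that the bit manipulation itself is expressible in Java.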

Edit:

It has been pointed out to me that Java code is platform independent, and as such it cannot depend on the atomic instructions of any single machine. To optimize code, however, the JVM HotSpot optimizer does consider the specifics of the machine, and may well apply the very optimization under consideration.

Through a simple test, however, I have found that, at least on my machine, the Math.abs function does not seem to get optimized down to a single atomic instruction. My code was as follows:

    long before = System.currentTimeMillis();
    int o = 0;
    for (double i = 0; i<1000000000; i++)
        if ((i-500)*(i-500)>((i-100)*2)*((i-100)*2)) // 4680 ms
            o++;
    System.out.println(o);
    System.out.println("using multiplication: "+(System.currentTimeMillis()-before));
    before = System.currentTimeMillis();
    o = 0;
    for (double i = 0; i<1000000000; i++)
        if (Math.abs(i-500)>(Math.abs(i-100)*2)) // 4778 ms
            o++;
    System.out.println(o);
    System.out.println("using Math.abs: "+(System.currentTimeMillis()-before));

Which gives me the following output:

234
using multiplication: 4985
234
using Math.abs: 5587

Supposing that multiplication is performed by an atomic instruction, it seems that, at least on my machine, the JVM HotSpot optimizer does not optimize the Math.abs function down to a single-instruction operation.


Solution

My first thought was that it is because of NaN (Not-a-Number) values, i.e. if the input is NaN it should be returned unchanged. But this seems not to be a requirement, as harold's test has shown that the JVM's internal optimization does not preserve the sign of NaNs (unless you use StrictMath).

The documentation of Math.abs says:

In other words, the result is the same as the value of the expression:

    Double.longBitsToDouble((Double.doubleToLongBits(a)<<1)>>>1)

So the option of bit manipulations was known to the developers of this class but they decided against it.
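That documented expression can be checked directly against Math.abs. A small sketch (the class and method names are made up for illustration):

```java
public class AbsBits {
    // Shift left then unsigned shift right clears bit 63 (the sign bit),
    // matching the expression quoted from the Math.abs documentation.
    static double docAbs(double a) {
        return Double.longBitsToDouble((Double.doubleToLongBits(a) << 1) >>> 1);
    }

    public static void main(String[] args) {
        System.out.println(docAbs(-1.25) == Math.abs(-1.25)); // prints true
        System.out.println(docAbs(-0.0));                     // prints 0.0
    }
}
```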

Most probably because optimizing this Java code makes no sense. In most environments, the HotSpot optimizer will replace the invocation with the appropriate FPU instruction once it encounters it in a hot spot. This happens with many of the java.lang.Math methods, as well as Integer.rotateLeft and similar methods: they may have a pure-Java implementation, but if the CPU has an instruction for the operation, the JVM will use it.
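For example, Integer.rotateLeft can be expressed as a plain shift-and-or in Java, which HotSpot recognizes and compiles down to a single rotate instruction on CPUs that have one. A sketch (the class and method names are mine):

```java
public class RotateDemo {
    // Pure-Java left rotation, equivalent in behavior to Integer.rotateLeft;
    // the negative shift count works because Java masks int shifts to 5 bits,
    // so (i >>> -d) is (i >>> (32 - d)) for d in 1..31.
    static int rotl(int i, int distance) {
        return (i << distance) | (i >>> -distance);
    }

    public static void main(String[] args) {
        System.out.println(Integer.toHexString(rotl(0x12345678, 8))); // prints 34567812
        System.out.println(rotl(0x12345678, 8) == Integer.rotateLeft(0x12345678, 8)); // prints true
    }
}
```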

Other tips

I'm not a Java expert, but I think the problem is that such a definition has to be expressible in the language. Bit operations on floats are machine-format specific and therefore not portable, and Java only exposes them indirectly, through methods such as Double.doubleToLongBits. I'm not sure whether any of the JIT compilers perform this optimization.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow