The documentation for `BigDecimal` is silent about how `floatValue()` rounds. I presume it uses round-to-nearest, ties-to-even.
`left` and `right` are set to .99 and .97, respectively. When these are converted to `double` in round-to-nearest mode, the results are 0.9899999999999999911182158029987476766109466552734375 (in hexadecimal floating-point, `0x1.fae147ae147aep-1`) and 0.9699999999999999733546474089962430298328399658203125 (`0x1.f0a3d70a3d70ap-1`). When those are subtracted, the result is 0.020000000000000017763568394002504646778106689453125, which clearly exceeds .02.
When .99 and .97 are converted to `float`, the results are 0.9900000095367431640625 (`0x1.fae148p-1`) and 0.9700000286102294921875 (`0x1.f0a3d8p-1`). When those are subtracted, the result is 0.019999980926513671875, which is clearly less than .02.
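If you want to reproduce those numbers yourself, here is a minimal sketch. `Double.toHexString` and `Float.toHexString` print the hexadecimal forms quoted above, and `new BigDecimal(double)` shows the exact decimal value of a binary floating-point number:

```java
import java.math.BigDecimal;

public class NarrowingDemo {
    public static void main(String[] args) {
        double dLeft = 0.99, dRight = 0.97;   // both rounded downward in conversion
        float  fLeft = 0.99f, fRight = 0.97f; // both rounded upward in conversion

        System.out.println(Double.toHexString(dLeft));      // 0x1.fae147ae147aep-1
        System.out.println(Double.toHexString(dRight));     // 0x1.f0a3d70a3d70ap-1
        System.out.println(new BigDecimal(dLeft - dRight)); // 0.02000000000000001776...

        System.out.println(Float.toHexString(fLeft));       // 0x1.fae148p-1
        System.out.println(Float.toHexString(fRight));      // 0x1.f0a3d8p-1
        System.out.println(new BigDecimal(fLeft - fRight)); // 0.019999980926513671875
    }
}
```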
Simply put, when a decimal numeral is converted to floating-point, the rounding may be up or down. It depends on where the number happens to lie relative to the nearest representable floating-point values. If it is not controlled or analyzed, it is practically random. Thus, sometimes you end up with a greater value than you might have expected, and sometimes you end up with a lesser value.
Using `double` instead of `float` would not guarantee that results similar to the above do not occur. It is merely happenstance that the `double` value in this case exceeded the exact mathematical value and the `float` value did not. With other numbers, it could be the other way around. For example, with `double`, `.09 - .07` is less than .02, but, with `float`, `.09f - .07f` is greater than .02.
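A quick sketch of that flip; the comparisons are the reliable part, while the printed values land just under and just over .02, respectively:

```java
public class DirectionFlip {
    public static void main(String[] args) {
        // With double, the conversion errors happen to make the difference
        // come out slightly below .02:
        System.out.println(0.09 - 0.07);           // just under 0.02
        System.out.println(0.09 - 0.07 < 0.02);    // true

        // With float, they happen to make it come out slightly above .02:
        System.out.println(0.09f - 0.07f);         // just over 0.02
        System.out.println(0.09f - 0.07f > 0.02f); // true
    }
}
```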
There is a lot of information about how to deal with floating-point arithmetic, such as the *Handbook of Floating-Point Arithmetic*. It is too large a subject to cover in Stack Overflow answers; there are university courses on it.
Often on today’s typical processors, there is little extra expense for using `double` rather than `float`; simple scalar floating-point operations are performed at nearly the same speeds for `double` and `float`. Performance differences arise when you have so much data that the time to transfer them (from disk to memory or memory to processor) becomes important, or the space they occupy on disk becomes large, or your software uses SIMD features of processors. (SIMD allows processors to perform the same operation on multiple pieces of data, in parallel. Current processors typically provide about twice the bandwidth for `float` SIMD operations as for `double` SIMD operations, or do not provide `double` SIMD operations at all.)