While working with DecimalFormat, I was puzzled by its behavior when rounding numbers with fixed-point patterns. To make things more specific:
double n = 2082809.080589735D;
// Output the exact value of n
System.out.println("value: " + new BigDecimal(n).toPlainString());
System.out.println("double: " + n);
DecimalFormat format = new DecimalFormat("0.00000000");
System.out.println("format: " + format.format(n));
System.out.println("format (BD): " + format.format(new BigDecimal(n)));
The output of this snippet is:
value: 2082809.080589734949171543121337890625
double: 2082809.080589735
format: 2082809.08058974
format (BD): 2082809.08058973
From the first output line we can see that the actual value lies below the half-way point between 2082809.08058973 and 2082809.08058974 (the digits continue with 4949..., short of the 5 needed to round up). Despite this, DecimalFormat rounds the value upwards when given a double argument.
Other values are rounded downwards:
value: 261285.2738465850125066936016082763671875
double: 261285.273846585
format: 261285.27384658
format (BD): 261285.27384659
This does not happen in all cases:
value: 0.080589734949171543121337890625
double: 0.08058973494917154
format: 0.08058973
format (BD): 0.08058973
value: 0.2738465850125066936016082763671875
double: 0.2738465850125067
format: 0.27384659
format (BD): 0.27384659
It seems to me that the formatted string for a double is produced by applying half-even rounding to an imprecise decimal value obtained with something along the lines of Double.toString(), rather than to the actual mathematical value represented by the double in question. When the requested precision comes close to (or exceeds) the precision the double type can provide, the results start to look somewhat arbitrary.
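One way to probe this hypothesis: BigDecimal.valueOf(double) is documented to go through Double.toString(double), so it carries the shortest decimal representation rather than the exact binary value. If the hypothesis is right, formatting the double directly should agree with formatting BigDecimal.valueOf(n), and disagree with formatting new BigDecimal(n). A quick sketch (class name is mine):

```java
import java.math.BigDecimal;
import java.text.DecimalFormat;

public class HypothesisCheck {
    public static void main(String[] args) {
        double n = 2082809.080589735D;
        DecimalFormat format = new DecimalFormat("0.00000000");

        // Formats the double directly (the behavior in question).
        String viaDouble = format.format(n);

        // BigDecimal.valueOf(double) uses Double.toString(double) internally,
        // so this rounds the shortest decimal representation 2082809.080589735.
        String viaToString = format.format(BigDecimal.valueOf(n));

        // new BigDecimal(double) preserves the exact binary value
        // 2082809.080589734949..., so this rounds the true mathematical value.
        String viaExact = format.format(new BigDecimal(n));

        System.out.println("double:           " + viaDouble);
        System.out.println("valueOf(double):  " + viaToString);
        System.out.println("new BigDecimal(): " + viaExact);
    }
}
```

On my machine the first two lines agree with each other (ending in ...74, a half-even tie-break on the trailing 5 of the toString form) while the third ends in ...73, matching the outputs above.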
In all the cases presented above, formatting the corresponding BigDecimal performs the rounding as expected.
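That observation suggests a simple workaround: convert through new BigDecimal(double) before formatting, so that the exact binary value is what gets rounded. A minimal sketch (the helper name is mine, not a library method):

```java
import java.math.BigDecimal;
import java.text.DecimalFormat;

public class ExactFormat {
    // Formats the exact mathematical value of the double rather than
    // its shortest decimal representation.
    static String formatExact(DecimalFormat format, double value) {
        // new BigDecimal(double) captures the exact binary value of the argument.
        return format.format(new BigDecimal(value));
    }

    public static void main(String[] args) {
        DecimalFormat format = new DecimalFormat("0.00000000");
        // Ends in ...73, unlike format.format(2082809.080589735D).
        System.out.println(formatExact(format, 2082809.080589735D));
    }
}
```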
I was unable to find any specification describing the proper behavior of DecimalFormat in this case.
Is this behavior documented somewhere?
From a correctness point of view, wouldn't rounding the actual mathematical value be preferable?
I understand that people writing 1.0...35 would (naively?) expect it to be rounded to 1.0...4, but 1.0...35 may not even be representable in any primitive data type available in Java...
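As an illustration of that point (1.005 is my own example, not one of the values above), the double literal 1.005 does not actually store the mathematical value 1.005; the nearest representable double is slightly smaller, so the question of "rounding the trailing 5 up" never really arises:

```java
import java.math.BigDecimal;

public class Representability {
    public static void main(String[] args) {
        // new BigDecimal(double) exposes the exact stored value:
        // it begins 1.00499..., i.e. just below 1.005.
        System.out.println(new BigDecimal(1.005).toPlainString());
    }
}
```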
EDIT:
I have submitted a report to Oracle; hopefully they will be able to either fix or clarify this issue.