Question

Why is the result different? If I use float I get ,675 and if I use double I get ,674... isn't that weird?

float f = 12345.6745;
double d = 12345.6745;

Locale l = Locale.forLanguageTag("es-ES");
DecimalFormat df = (DecimalFormat) NumberFormat.getInstance(l);
print(df.format(f));
>> 12.345,675

l = Locale.forLanguageTag("es-ES");
df = (DecimalFormat) NumberFormat.getInstance(l);
print(df.format(d));
>> 12.345,674

Thanks


Solution

If I use float I get ,675 and if I use double I get ,674... isn't that weird?

Not particularly. You're formatting different values. In particular, assuming you actually change your code so that it will compile (with an f suffix on the float literal): even though you're specifying 9 significant digits, a float can only reliably represent about 7.

Neither of the numbers is exactly 12345.6745. In fact, the exact values are:

f = 12345.6748046875
d = 12345.674499999999170540831983089447021484375

Look at those and it's obvious why the third decimal place is 5 for f and 4 for d.
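You can verify those exact values yourself: the BigDecimal(double) constructor converts the binary value with no decimal rounding, so it reveals exactly what is stored. A minimal sketch (class name is just for illustration):

```java
import java.math.BigDecimal;

public class ExactValues {
    public static void main(String[] args) {
        float f = 12345.6745f;
        double d = 12345.6745;

        // BigDecimal(double) performs an exact conversion of the binary value,
        // so it prints what the float/double really holds.
        System.out.println("f = " + new BigDecimal(f));
        // f = 12345.6748046875
        System.out.println("d = " + new BigDecimal(d));
        // d = 12345.674499999999170540831983089447021484375
    }
}
```

Note that new BigDecimal("12345.6745") (the String constructor) would give the exact decimal value instead, which is why it is the usual choice when decimal digits matter.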

If you want to preserve decimal digits, you should consider using BigDecimal.
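For example, constructing a BigDecimal from a String keeps the decimal digits exactly, so rounding behaves predictably. A minimal sketch:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class KeepDecimals {
    public static void main(String[] args) {
        // The String constructor stores the decimal value 12345.6745 exactly,
        // unlike a float or double literal.
        BigDecimal bd = new BigDecimal("12345.6745");

        // Rounding to 3 decimal places with an explicit rounding mode.
        System.out.println(bd.setScale(3, RoundingMode.HALF_UP));
        // prints 12345.675
    }
}
```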

Other tips

The problem is that you have a representation error. This becomes more obvious when you have an overflow.

long l = 1234567890123456789L;
double d = l;
float f = l;
int i = (int) l;
short s = (short) l;
char ch = (char) l;
byte b = (byte) l;
System.out.println("l= " + l + " in hex " + Long.toHexString(l));
System.out.println("d= " + d);
System.out.println("f= " + f);
System.out.println("i= " + i + " in hex " + Integer.toHexString(i));
System.out.println("s= " + s + " in hex " + Integer.toHexString(s & 0xFFFF));
System.out.println("(int) ch= " + (int) ch +  " in hex " + Integer.toHexString(ch));
System.out.println("b= " + b +  " in hex " + Integer.toHexString(b));

prints

l= 1234567890123456789 in hex 112210f47de98115
d= 1.23456789012345677E18
f= 1.23456794E18
i= 2112454933 in hex 7de98115
s= -32491 in hex 8115
(int) ch= 33045 in hex 8115
b= 21 in hex 15

Only long can represent this value without error (plus BigInteger and BigDecimal). All the other data types introduce different errors: float and double represent the top bits accurately, whereas int, short, char and byte represent the lowest bits accurately.
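That split can be checked directly: converting through double preserves the magnitude (the top bits) but loses the low digits, while a narrowing cast to int keeps exactly the lowest 32 bits. A small sketch (class name is illustrative):

```java
public class BitsKept {
    public static void main(String[] args) {
        long l = 1234567890123456789L;

        // Round-tripping through double lands on the nearest representable
        // double: close in relative terms, but the low digits are gone.
        long viaDouble = (long) (double) l;
        System.out.println(l - viaDouble); // tiny compared to l itself

        // A narrowing cast to int keeps exactly the lowest 32 bits.
        int i = (int) l;
        System.out.println(i == (int) (l & 0xFFFFFFFFL)); // true
    }
}
```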

Licensed under: CC-BY-SA with attribution
Not affiliated with Stack Overflow