Actually, when a number has a decimal point like 23.54 the default interpretation is that it's a double, and it's encoded as a 64-bit floating-point number. If you put an f
at the end 23.54f
, then it's encoded as a 32-bit floating-point number. Putting an L
at the end declares that the number is a long double, whose encoding is platform-dependent (on Intel Macs, for example, it's an 80-bit extended-precision format stored in 16 bytes).
In most cases, you don't need to add a suffix to a number because the compiler will determine the correct size based on context. For example, in the line
float x = 23.54;
the compiler will interpret 23.54 as a 64-bit double, but in the process of assigning that number to x
, the compiler will automatically demote the number to a 32-bit float.
Here's some code to play around with:
NSLog( @"%lu %lu %lu", sizeof(typeof(25.43f)), sizeof(typeof(25.43)), sizeof(typeof(25.43L)) );
int x = 100;
float y = x / 200;
NSLog( @"%f", y );
y = x / 200.0;
NSLog( @"%f", y );
The first NSLog displays the number of bytes for the various types of numeric constants. The second NSLog should print 0.000000 since the number 200 is interpreted as an integer, and integer division truncates to an integer. The last NSLog should print 0.500000 since 200.0 is interpreted as a double.