1.2 is a double; i.e. a 64-bit double-precision floating-point number.
1.2f is a float; i.e. a 32-bit single-precision floating-point number.
In terms of performance, it doesn't matter which literal you write: the compiler converts literals from float to double and from double to float as necessary at compile time. When assigning a double value (such as a function's return value) to a float variable, however, you will most likely need an explicit cast to avoid a compiler warning about the narrowing conversion.