Question

When using an iPhone Objective-C method that accepts CGFloat arguments, e.g. [UIColor colorWithRed:green:blue:alpha:], is it important to append an f suffix to constant arguments to specify them explicitly as floats, i.e. should I always type 0.1f rather than 0.1 in such cases? Or does the compiler automatically convert 0.1 (which is a double by default) to 0.1f (which is a float) at compile time? I don't want these conversions to happen at run time, because they would unnecessarily hurt performance.

Thanks in advance

MrMage


Solution

It's not important; it won't break anything to use a double-precision constant where a single-precision value is expected. The constant is narrowed to float by the compiler, so the conversion happens at compile time, not at run time, and costs you nothing.
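
For illustration, here is a minimal sketch (using the actual UIColor factory method, which also takes an alpha: argument); both calls produce the same color:

    // Both calls behave identically. In the second, the compiler narrows
    // the double literals to float (CGFloat on 32-bit iOS) at compile time.
    UIColor *withSuffix    = [UIColor colorWithRed:0.1f green:0.2f blue:0.3f alpha:1.0f];
    UIColor *withoutSuffix = [UIColor colorWithRed:0.1 green:0.2 blue:0.3 alpha:1.0];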

However, if you have turned on the compiler warning about implicit 64-bit-to-32-bit conversions and are building for 32-bit architectures (which includes the iPhone), then you'll want to use single-precision constants simply to avoid that warning.
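
The reason the warning can fire at all is that CGFloat's width depends on the target architecture; this is roughly how it is declared in CoreGraphics (paraphrased from CGBase.h):

    #if defined(__LP64__) && __LP64__
    typedef double CGFloat;   // 64-bit: a double literal fits with no narrowing
    #else
    typedef float CGFloat;    // 32-bit (iPhone): 0.1 is narrowed from double to float
    #endif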

(Alternatively, you could explicitly set that warning off, with an architecture condition that turns it on for 64-bit architectures, as sketched below. But that currently only matters if you're also using some of your code in a Mac application.)
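
As a sketch of that alternative, assuming the standard Xcode build-setting name GCC_WARN_64_TO_32_BIT_CONVERSION and xcconfig's per-architecture conditional syntax, an .xcconfig file could look like this:

    // Hypothetical .xcconfig: warning off by default, on for 64-bit builds.
    GCC_WARN_64_TO_32_BIT_CONVERSION = NO
    GCC_WARN_64_TO_32_BIT_CONVERSION[arch=x86_64] = YES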
