Question

What are the implicit types for numbers in C? If, for example, I have a decimal number in a calculation, is the decimal always treated as a double? If I have a non-decimal number, is it always treated as an int? What if my non-decimal number is larger than an int value?

I'm curious because this affects type conversion and promotion. For instance, if I have the following calculation:

float a = 1.0 / 25;

Is 1.0 treated as a double and 25 treated as an int? Is 25 then promoted to a double, the calculation performed at double precision and then the result converted to a float?

What about:

double b = 1 + 2147483649;   // note that the number is larger than an int value

Solution

If the number has neither a decimal point nor an exponent, it is an integer of some sort; by default, an int.

If the number has a decimal point or an exponent, it is a floating point number of some sort; by default, a double.

That's about it. You can append suffixes to constants (such as ULL for unsigned long long) to specify the type more precisely. Otherwise (simplifying a little), an integer constant gets the smallest integer type, of rank int or higher, that can hold its value.

In your examples, the code is:

float a = 1.0 / 25;
double b = 1 + 2147483649;

The value of a is calculated by noting that 1.0 is a double and 25 is an integer. When processing the division, the int is converted to a double, the calculation is performed (producing a double), and the result is then coerced into a float for assignment to a. All of this can be done by the compiler, so the result will be pre-computed.

Similarly, on a system with 32-bit int, the value 2147483649 is too big to be an int, so it will be treated as a signed type bigger than int (either long or long long); the 1 is added (yielding the same type), and then that value is converted to a double. Again, it is all done at compile time.

These computations are governed by the same rules as other computations in C.


The type rules for integer constants are detailed in §6.4.4.1 Integer constants of ISO/IEC 9899:1999. There's a table which details the types depending on the suffix (if any) and the kind of constant (decimal vs octal or hexadecimal). An unsuffixed decimal constant always gets a signed integer type; for octal or hexadecimal constants, the type can be signed or unsigned, taking the first type in the list in which the value fits. Thanks to Daniel Fischer for pointing out my mistake.

OTHER TIPS

http://en.wikipedia.org/wiki/Type_conversion

The standard has a general guideline for what you can expect, but compilers have a superset of rules that encompass the standard as well as rules for optimizing. The above link discusses some of the generalities you can expect. If you are concerned about implicit coercion, it is typically good practice to use explicit casts.

Keep in mind that the size of the primitive types is not guaranteed.

1.0 / 25

Evaluates to a double because one of the operands is a double. If you changed it to 1 / 25, the division would be performed on two integers and evaluate to 0.

double b = 1 + 2147483649;

The right side is evaluated as an integer and then coerced to a double during assignment.

Actually, in your example you may get a compiler warning about the narrowing conversion. You'd either write 1.0f to make the arithmetic float from the start, or explicitly cast the result before assigning it.

Licensed under: CC-BY-SA with attribution