Question

Given the following ANSI C code, I wonder about the results:

int main(void) {
  int a = 19.4 / 9.7;                // result = 2
  float b = (int) 19.4 / (int) 9.7;  // result = 2.000000, why?
  float c = (int) 9.7;               // result = 9
  return 0;
}

I understood that C cuts off all decimal places when converting to int, but the second line shows a flaw in my logic: if only the decimal places were cut off, the result should be 2.11111.

How is the floating point conversion done in ANSI C?


Solution

In standard C, literals such as 19.4 and 9.7 are assumed to be double unless you specify otherwise (e.g. 19.4F, 9.7F).
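A quick way to see this, assuming the common 8-byte double and 4-byte float (the standard guarantees minimum ranges, not these exact sizes):

#include <stdio.h>

int main(void) {
  /* Unsuffixed floating constants are double; an F suffix makes them float. */
  printf("%lu\n", (unsigned long) sizeof 19.4);   /* commonly prints 8 */
  printf("%lu\n", (unsigned long) sizeof 19.4F);  /* commonly prints 4 */
  return 0;
}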

The compiler will use integer division to compute x / y if both x and y are of int (or a compatible integer) type, and floating-point division to compute x / y if at least one of x and y is a floating-point type.
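A minimal sketch of that rule; the values in the comments assume typical IEEE-754 arithmetic:

#include <stdio.h>

int main(void) {
  printf("%d\n", 19 / 9);      /* both operands int: integer division gives 2 */
  printf("%f\n", 19.4 / 9.7);  /* both operands double: prints 2.000000 */
  printf("%f\n", 19 / 9.7);    /* mixed: 19 is converted to double, prints 1.958763 */
  return 0;
}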

float b = (int) 19.4 / (int) 9.7;  // result = 2.000000, why?

You are asking for 19.4 to be cast to int, and 9.7 to be cast to int, effectively asking the compiler to compute the integer division of 19/9 = 2, which is then promoted to float for storage in b. 2 becomes 2.0.
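Spelled out step by step with illustrative intermediate variables (a sketch, not the only way to write it):

#include <stdio.h>

int main(void) {
  int x = (int) 19.4;   /* truncated toward zero: 19 */
  int y = (int) 9.7;    /* truncated toward zero: 9  */
  int q = x / y;        /* integer division: 2 */
  float b = q;          /* converted to float on assignment: 2.0f */
  printf("%d %d %d %f\n", x, y, q, b);  /* prints: 19 9 2 2.000000 */
  return 0;
}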


Other tips

In the second line, you are converting the input values to integers and then doing an integer divide (because both operands of the divide are ints, the division is done in integer space, which means any fractional result is truncated).

So, 19 / 9 = 2

Then, to store that integer in the float variable, the compiler implicitly converts 2 to 2.000000.

No, it must be 2. When you divide an int by an int, you get an int. Thus 2. You can do the hokey-pokey afterwards converting it to whatever you like, but the decimals are already gone.

Divide two integers and you will get an integer. Divide a float and an integer and you will get a float.

(int) 19.4 / (int) 9.7 is equivalent to 19 / 9, which first gets resolved into 2 (an int). Only after the 2 is calculated is it turned into a float by the variable type.
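If the 2.111111 the original poster expected is what you actually want, at least one operand has to stay in a floating-point type so the division itself is done in floating point; a minimal sketch:

#include <stdio.h>

int main(void) {
  /* Casting only the truncated 19 back to float makes the division itself
     happen in floating point, so the fraction survives. */
  float b = (float) ((int) 19.4) / (int) 9.7;  /* 19.0f / 9 -> prints 2.111111 */
  printf("%f\n", b);
  return 0;
}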

Licensed under: CC-BY-SA with attribution