Why does (int)(1.0 / x) where x = 0 result in Int32.MinValue rather than Int32.MaxValue?

StackOverflow https://stackoverflow.com/questions/21417489

  •  04-10-2022

Question

In Java,

int x = 0;
(int)(-1.0 / x) -> Integer.MinValue
(int)(1.0 / x) -> Integer.MaxValue
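A minimal, runnable sketch of the Java behavior described above (class and variable names are illustrative):

```java
public class DivByZeroCast {
    public static void main(String[] args) {
        int x = 0;
        // 1.0 / x is double division, so it yields an infinity, not an ArithmeticException.
        double posInf = 1.0 / x;   // Double.POSITIVE_INFINITY
        double negInf = -1.0 / x;  // Double.NEGATIVE_INFINITY

        // JLS 5.1.3: casting an infinity to int clamps to the nearest representable value.
        System.out.println((int) posInf == Integer.MAX_VALUE); // true
        System.out.println((int) negInf == Integer.MIN_VALUE); // true
    }
}
```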

But in C#,

int x = 0;
(int)(-1.0 / x) -> Int32.MinValue
(int)(1.0 / x) -> Int32.MinValue!!

The behavior is the same whether or not the "unchecked" statement/operator is used; if "checked" is used instead, an overflow exception is thrown.

But surely, in an unchecked context, one would expect (int)(1.0 / x) (where x = 0) to result in Int32.MaxValue, not Int32.MinValue.

Am I missing something?


The solution

One shouldn't expect anything, really. From the C# specification, section 6.2.1 (emphasis mine):

For a conversion from float or double to an integral type [...]:

  • In an unchecked context, the conversion always succeeds, and proceeds as follows.
  • If the value of the operand is NaN or infinite, the result of the conversion is an unspecified value of the destination type.

Compare that with the Java specification, section 5.1.3:

A narrowing conversion of a floating-point number to an integral type T takes two steps:

In the first step, the floating-point number is converted either to a long, if T is long, or to an int, if T is byte, short, char, or int, as follows:

  • If the floating-point number is NaN (§4.2.3) [...], the result of the first step of the conversion is an int or long 0.
  • Otherwise, if the floating-point number is not an infinity [...]
  • Otherwise, one of the following two cases must be true:
    • The value must be too small (a negative value of large magnitude or negative infinity), and the result of the first step is the smallest representable value of type int or long.
    • The value must be too large (a positive value of large magnitude or positive infinity), and the result of the first step is the largest representable value of type int or long.
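Each case of this two-step narrowing can be checked directly. A small sketch (the clamping values are the ones guaranteed by JLS §5.1.3; class name is illustrative):

```java
public class NarrowingCases {
    public static void main(String[] args) {
        // Case 1: NaN narrows to 0.
        System.out.println((int) Double.NaN);                  // 0

        // Case 2: finite values are truncated toward zero.
        System.out.println((int) 3.9);                         // 3

        // Case 3: too small -> smallest representable value.
        System.out.println((int) Double.NEGATIVE_INFINITY == Integer.MIN_VALUE);
        System.out.println((int) -1e300 == Integer.MIN_VALUE);

        // Case 3: too large -> largest representable value (for long as well as int).
        System.out.println((int) Double.POSITIVE_INFINITY == Integer.MAX_VALUE);
        System.out.println((long) Double.POSITIVE_INFINITY == Long.MAX_VALUE);
    }
}
```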

So basically, the two languages make different guarantees, and the implementations appear to both satisfy those guarantees.

I imagine that because of the looser specification, the .NET JIT is able to use a more efficient conversion, which happens to give int.MinValue as the result.

Other tips

The behaviour in C# is unspecified. Quoting from the C# Language Specification (and this answer):

For a conversion from float or double to an integral type, the processing depends on the overflow checking context (§7.6.12) in which the conversion takes place. In an unchecked context: If the value of the operand is NaN or infinite, the result of the conversion is an unspecified value of the destination type.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow