Question

I'm seeing some strange behavior in a .NET program:

Console.WriteLine(Int64.MaxValue.ToString());
// displays 9223372036854775807, which is 2^63-1, as expected

Int64 a = 256*256*256*127; // ok

Int64 a = 256*256*256*128; // compile-time error:
// "The operation overflows at compile time in checked mode"
// If I do this at runtime, I get negative values, so the overflow indeed happens.

Why do my Int64s behave as if they were Int32s, although Int64.MaxValue seems to confirm they're using 64 bits?

If it's relevant, I'm using a 32-bit OS, and the target platform is set to "Any CPU".


Solution

Your right-hand side uses only Int32 values, so the whole expression is evaluated using Int32 arithmetic; only then is the Int32 result promoted to an Int64.

Change it to this:

Int64 a = 256*256*256*128L;

and all will be well.
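
To see both behaviors side by side, here is a minimal sketch (assuming a plain console app; the unchecked keyword suppresses the compile-time check so the Int32 wraparound is observable):

using System;

class OverflowDemo
{
    static void Main()
    {
        // 256*256*256*128 = 2^31, one more than Int32.MaxValue.
        // unchecked() suppresses the compile-time check, so the Int32
        // product wraps to Int32.MinValue before being widened to Int64.
        Int64 wrapped = unchecked(256 * 256 * 256 * 128);
        Console.WriteLine(wrapped);  // -2147483648

        // With an Int64 operand, the whole chain uses 64-bit arithmetic.
        Int64 correct = 256 * 256 * 256 * 128L;
        Console.WriteLine(correct);  // 2147483648
    }
}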

OTHER TIPS

Use:

Int64 a = 256L*256L*256L*128L;

The L suffix denotes an Int64 literal; with no suffix, an integer literal is an Int32.
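
A quick way to confirm the literal types (a minimal check, runnable in any C# console app):

var i = 128;   // no suffix: Int32
var l = 128L;  // L suffix: Int64
Console.WriteLine(i.GetType()); // System.Int32
Console.WriteLine(l.GetType()); // System.Int64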

What you wrote:

Int64 a = 256*256*256*128;

means:

Int64 a = (Int32)256*(Int32)256*(Int32)256*(Int32)128;
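
Evaluation proceeds left to right, entirely in Int32, and the widening to Int64 happens only at the very end. A sketch with hypothetical intermediate variables to make each step explicit:

Int32 step1 = 256 * 256;    // 65536    - fits in Int32
Int32 step2 = step1 * 256;  // 16777216 - fits in Int32 (2^24)
Int32 step3 = step2 * 128;  // 2^31     - does not fit: a compile-time error for
                            //            constants, a silent wraparound to
                            //            -2147483648 at runtime (unchecked)
Int64 a = step3;            // the widening to Int64 comes too late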