Question

I am trying to construct a big Int64 from nibble information stored in bytes.

The following lines of code work as expected:

Console.WriteLine("{0:X12}", (Int64)(0x0d * 0x100000000));
Console.WriteLine("{0:X12}", (Int64)(0x0d * 0x1000000));
Console.WriteLine("{0:X12}", (Int64)(0x0d * 0x100000));

Why does the following line lead to a compile error CS0220 "The operation overflows at compile time in checked mode" and the others do not?

Console.WriteLine("{0:X12}", (Int64)(0x0d * 0x10000000));

The result would be:

FFFFFFFFD0000000

instead of:

0000D0000000

Can anyone explain this? I will convert with shift operators instead, but I am still curious why this approach does not work!

Update: The error also occurs when using (Int64)(0x0d << 28).


Solution

You need to mark the constant values explicitly as longs (Int64), or possibly ulongs (UInt64); otherwise they will by default be interpreted as ints (i.e. Int32s), which is what causes the overflow. Casting after the multiplication won't do you any good here, because the overflow has already occurred in the Int32 multiplication.

Haven't tested, but this code should work:

Console.WriteLine("{0:X12}", 0x0dL * 0x100000000L);
Console.WriteLine("{0:X12}", 0x0dL * 0x1000000L);
Console.WriteLine("{0:X12}", 0x0dL * 0x100000L);
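The same typing rule applies to the shift case mentioned in the update: widen the operand before shifting, not the result afterwards. A minimal sketch (untested here, but it follows directly from the literal-typing rule):

```csharp
// In (Int64)(0x0d << 28), the shift is evaluated entirely in Int32 and
// the cast happens too late. Widening the operand first makes the shift
// happen in 64-bit arithmetic, so the top nibble lands where intended:
Console.WriteLine("{0:X12}", (Int64)0x0d << 28);  // 0000D0000000
Console.WriteLine("{0:X12}", 0x0dL << 28);        // 0000D0000000
```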

OTHER TIPS

Integer literals have type Int32 (even if in hex). Use the "L" suffix to make them longs (Int64).

Console.WriteLine("{0:X12}", 0x0dL * 0x100000000L);

At first glance, I'd guess that it's doing the multiplication as an Int32 and overflowing. You need to cast the individual operands to Int64 and then multiply those; right now you're only casting the result.

As for why only that one line is flagged: 0x100000000 is too large to fit in an Int32, so the compiler already types that literal as Int64 and performs the first multiplication in 64-bit arithmetic. By contrast, 0x0d * 0x10000000 is a constant expression evaluated entirely in Int32, and its product overflows at compile time, hence CS0220.
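To make that concrete, casting just one operand is enough, because the other operand is then promoted to Int64 before the multiplication. A hedged sketch of the failing line and its fix:

```csharp
// CS0220: both operands are Int32, so the constant product overflows
// at compile time in checked mode:
// Console.WriteLine("{0:X12}", (Int64)(0x0d * 0x10000000));

// OK: the left operand is Int64, so 0x10000000 is promoted and the
// multiplication is performed in 64-bit arithmetic:
Console.WriteLine("{0:X12}", (Int64)0x0d * 0x10000000);  // 0000D0000000
```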


Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow