Question

This is a follow-up question. Java stores integers in two's complement, so you can do the following:

int ALPHA_MASK = 0xff000000;

In C#, this requires an unsigned integer, uint, because the compiler interprets the literal 0xff000000 as 4278190080 (which does not fit in an int) rather than as -16777216.

My question: how do I declare negative values in hexadecimal notation in C#, and how exactly are integers represented internally? What are the differences from Java here?


Solution

C# (rather, .NET) also uses two's complement, but it supports both signed and unsigned integer types (which Java doesn't). A bit mask is more naturally an unsigned thing - why should one bit be different from all the other bits?
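To see the two's-complement point concretely, here is a small Java sketch (Java, like .NET, uses two's complement, and its int literals up to 0xFFFFFFFF are accepted directly): the same 32 bits read as -16777216 when interpreted as signed and 4278190080 when interpreted as unsigned, and masking operates on the raw bits either way.

```java
public class TwosComplementDemo {
    public static void main(String[] args) {
        // In Java this literal is a plain int; its signed value is negative.
        int alphaMask = 0xFF000000;
        System.out.println(alphaMask);                         // -16777216
        System.out.println(Integer.toUnsignedLong(alphaMask)); // 4278190080

        // Masking works on the bit pattern, so the mask's sign is irrelevant:
        int argb = 0x80FF0000; // hypothetical pixel: 50% alpha, pure red
        System.out.println(Integer.toHexString(argb & alphaMask)); // 80000000
    }
}
```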

In this specific case, it is safe to use an unchecked cast:

int ALPHA_MASK = unchecked((int)0xFF000000);

To "directly" represent this number as a signed value, you write

int ALPHA_MASK = -0x1000000; // == -16777216

Hexadecimal is not (or should not be) any different from decimal: to represent a negative number, you write a minus sign followed by the digits of the absolute value.
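A quick check of that equivalence, sketched in Java (the same identities hold in C#, since both use two's complement): the negated hex magnitude, the negated decimal, and the raw bit pattern all name the same value.

```java
public class NegativeHexDemo {
    public static void main(String[] args) {
        // -0x1000000 is a minus sign applied to the magnitude 0x1000000,
        // exactly as -16777216 is in decimal.
        System.out.println(-0x1000000 == -16777216);         // true
        // Its 32-bit two's-complement pattern is 0xFF000000:
        System.out.println(Integer.toHexString(-0x1000000)); // ff000000
    }
}
```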

OTHER TIPS

Well, you can use an unchecked block and a cast:

unchecked
{
    int ALPHA_MASK = (int)0xff000000;
}

or

int ALPHA_MASK = unchecked((int)0xff000000);

Not terribly convenient, though... perhaps just use the decimal literal -16777216?

And just to add insult to injury, this will work too, since the complement of the low 24 bits is the same bit pattern and needs no cast:

int ALPHA_MASK = ~0x00FFFFFF;
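All of these spellings pick out the same 32-bit pattern. A quick Java check (Java uses the same two's-complement layout, so the identities carry over unchanged to C#):

```java
public class MaskSpellings {
    public static void main(String[] args) {
        int mask = 0xFF000000;                    // legal as an int literal in Java
        System.out.println(mask == -0x1000000);   // true: negated magnitude
        System.out.println(mask == ~0x00FFFFFF);  // true: complement of the low 24 bits
        System.out.println(mask == (0xFF << 24)); // true: byte shifted into the top position
    }
}
```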
Licensed under: CC-BY-SA with attribution