Question

What's the rationale for using signed numbers as indexes in .NET?

In Python, you can index from the end of a sequence by passing a negative number, but this is not the case in .NET. It would not be easy for .NET to add such a feature later, as it could break existing code that perhaps applies special rules (a bad idea, but I guess it happens) to indexing.
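(For what it's worth, C# 8.0 eventually did add an index-from-end operator, `^`, built on a new System.Index type rather than on negative integers, which sidesteps that compatibility problem. A minimal sketch:)

```csharp
using System;

class IndexFromEndDemo
{
    static void Main()
    {
        int[] arr = { 1, 2, 3, 4, 5 };

        // C# 8.0's "index from end" operator: ^1 is the last element.
        Console.WriteLine(arr[^1]); // 5
        Console.WriteLine(arr[^2]); // 4

        // Unlike Python's arr[-1], this is a distinct System.Index value,
        // so existing int-indexed code is unaffected.
        Index last = ^1;
        Console.WriteLine(arr[last]); // 5
    }
}
```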

Not that I have ever needed to index arrays with more than 2,147,483,647 elements, but I really cannot understand why they chose signed numbers.

Can it be because it's more common to use signed numbers in code?

Edit: I just found these links:

The perils of unsigned iteration in C/C++

Signed word lengths and indexes

Edit2: OK, a couple of other good reasons from the thread Matthew Flaschen posted:

  • Historical reasons, as it's a C-like language
  • Interop with C

Solution

For simplicity, of course. Do you enjoy the trouble of doing size arithmetic with unsigned ints?
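A minimal C# sketch of the kind of trouble meant here (variable names are mine): subtracting a larger unsigned value from a smaller one wraps around instead of going negative.

```csharp
using System;

class UnsignedArithmeticDemo
{
    static void Main()
    {
        int signedLength = 2, signedOffset = 5;
        uint unsignedLength = 2, unsignedOffset = 5;

        // Signed difference behaves as expected.
        Console.WriteLine(signedLength - signedOffset);      // -3

        // Unsigned difference wraps around to a huge value in the
        // default unchecked context, which silently corrupts size math.
        Console.WriteLine(unsignedLength - unsignedOffset);  // 4294967293
    }
}
```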

OTHER TIPS

It may be due to the long tradition of using a value below 0 as an invalid index. Methods like String.IndexOf return -1 if the element is not found, so the return value must be signed. If index consumers required unsigned values, you would have to a) check and b) cast the value before using it. With signed indices, you only need the check.
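A short sketch of that convention in C#:

```csharp
using System;

class IndexOfDemo
{
    static void Main()
    {
        string s = "hello";

        // Found: a valid, non-negative index.
        int hit = s.IndexOf('e');   // 1

        // Not found: the conventional -1 sentinel, which only works
        // because the return type is signed.
        int miss = s.IndexOf('z');  // -1

        // With a signed index, a single comparison handles the sentinel;
        // an unsigned index type would also force a cast here.
        if (miss >= 0)
            Console.WriteLine(s[miss]);
        else
            Console.WriteLine("not found");

        Console.WriteLine(s[hit]);  // e
    }
}
```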

Unsigned types aren't CLS-compliant.
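As an illustration (the Container class is hypothetical; CS3002 is the warning the C# compiler emits for a non-compliant return type in a CLS-compliant assembly):

```csharp
using System;

[assembly: CLSCompliant(true)]

public class Container
{
    // Warning CS3002: unsigned return types are not CLS-compliant, so a
    // public API that used uint indices could not be consumed from every
    // .NET language.
    public uint Count() => 0;

    // CLS-compliant alternative: the signed type the BCL actually uses
    // for lengths and indices.
    public int Length() => 0;

    public static void Main() =>
        Console.WriteLine(new Container().Count());
}
```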

The primary usefulness of unsigned numbers arises when composing larger numbers from smaller ones and vice versa. For example, if one receives four unsigned bytes from a connection and wishes to regard their value, taken as a whole, as a 32-bit integer, using unsigned types means one can simply say:

  value = byte0 | (byte1*256) | (byte2*65536) | (byte3*16777216);

By contrast, if the bytes were signed, an expression like the above would be more complicated.
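A runnable C# version of the idea, using shifts as the idiomatic form of the multiplications above (the byte values are my own):

```csharp
using System;

class ByteComposeDemo
{
    static void Main()
    {
        // Four bytes received from a connection, least-significant first.
        byte b0 = 0x78, b1 = 0x56, b2 = 0x34, b3 = 0x12;

        // Because byte is unsigned, each operand contributes exactly its
        // 8-bit value and the terms combine cleanly.
        uint value = (uint)(b0 | (b1 << 8) | (b2 << 16) | (b3 << 24));
        Console.WriteLine(value.ToString("X8")); // 12345678

        // If the components were signed (sbyte), sign extension would
        // require masking each term before combining.
        sbyte s3 = unchecked((sbyte)0x92);
        int masked = (s3 & 0xFF) << 24; // without & 0xFF, 0x92 sign-extends
        Console.WriteLine(masked.ToString("X8")); // 92000000
    }
}
```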

I'm not sure I see any reason for a language designed nowadays not to include unsigned versions of all types shorter than the longest signed integer type, with the semantics that all integer operations (integer meaning discrete-quantity numerics, rather than any particular type) which fit entirely within the largest signed type are by default performed as though they operated on that type. Including an unsigned version of the largest signed type would complicate the language specification (one would have to specify which operations must fit within the range of the signed type and which within the range of the unsigned type), but otherwise there should be no problem designing a language so that if (unsigned1 - unsigned2 > unsigned3) yields a "numerically correct" result even when unsigned2 is greater than unsigned1 [if one wants unsigned wraparound, one would explicitly write if ((UInt32)(unsigned1 - unsigned2) > unsigned3)]. A language which specified such behavior would certainly be a big improvement over the mess that exists in C (justifiable, given its history), C#, or VB.NET.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow