Question

Possible Duplicate:
Why does .Net Framework not use unsigned data types?

In The C# Programming Language (Covering C# 4.0) (4th Edition), 1.3 Types and Variables, Page 9.

Jon Skeet says:

Hooray for byte being an unsigned type! The fact that in Java a byte is signed (and with no unsigned equivalent) makes a lot of bit-twiddling pointlessly error-prone. It’s quite possible that we should all be using uint a lot more than we do, mind you: I’m sure many developers reach for int by default when they want an integer type. The framework designers also fall into this category, of course: Why should String.Length be signed?
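To illustrate the point in the quote, here is a small sketch (my own, not from the book) of the sign-extension trap that Java's signed byte creates. C#'s `sbyte` behaves like Java's `byte`, so it can stand in for the comparison:

```csharp
using System;

class ByteDemo
{
    static void Main()
    {
        // In C#, byte is unsigned (0..255), so widening to int keeps the value.
        byte b = 0xFF;
        int widened = b;
        Console.WriteLine(widened);   // 255, no sign extension

        // Java's byte is signed; C#'s sbyte reproduces that behaviour.
        // 0xFF widens to -1, and bit-twiddling code must remember the
        // defensive (x & 0xFF) mask to recover 255.
        sbyte sb = unchecked((sbyte)0xFF);
        Console.WriteLine(sb);        // -1
        Console.WriteLine(sb & 0xFF); // 255, after the mask
    }
}
```

Forgetting that mask is exactly the "pointlessly error-prone" bit-twiddling the quote complains about.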

When I decompile String.Length, I get:

/// <summary>
/// Gets the number of characters in the current <see cref="T:System.String"/> object.
/// </summary>
/// <returns>
/// The number of characters in the current string.
/// </returns>
/// <filterpriority>1</filterpriority>
[__DynamicallyInvokable]
public int Length { [SecuritySafeCritical, __DynamicallyInvokable, MethodImpl(MethodImplOptions.InternalCall)] get; }

MSDN also notes:

The Length property returns the number of Char objects in this instance, not the number of Unicode characters. The reason is that a Unicode character might be represented by more than one Char.

So Length returns the number of Char objects in the current string, a count that can never be negative. Why does String.Length return Int32 when a type called UInt32 already exists? What was the reasoning behind making byte unsigned but leaving String.Length signed?


Solution 2

Int32 is widely considered to be the "general-purpose" integer in the .NET framework. If you need a general number that is not fractional, Int32 is what you reach for first. You would only use one of the unsigned types if you had a specific reason to do so.

Using int for all Count properties creates a consistent API, and allows for the possibility of using -1 as a flag value (which some APIs in the framework do).
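The -1 flag convention is easy to see in the framework itself: `String.IndexOf` reports "not found" as -1, which only works because its return type is signed.

```csharp
using System;

class FlagValueDemo
{
    static void Main()
    {
        string s = "hello";

        // IndexOf returns the zero-based position, or -1 if absent.
        Console.WriteLine(s.IndexOf('e')); // 1
        Console.WriteLine(s.IndexOf('z')); // -1, the "not found" flag

        // With an unsigned return type there would be no spare value
        // below zero, so the API would need an out parameter, a
        // nullable result, or an exception instead.
    }
}
```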

From http://blogs.msdn.com/b/brada/archive/2003/09/02/50285.aspx:

The general feeling among many of us is that the vast majority of programming is done with signed types. Whenever you switch to unsigned types you force a mental model switch (and an ugly cast). In the worst case you build up a whole parallel world of APIs that take unsigned types.
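A quick sketch (mine, not from the linked post) of both problems the quote describes, imagining a hypothetical unsigned-length world:

```csharp
using System;

class CastDemo
{
    static void Main()
    {
        string s = "hello";

        // The "ugly cast": the moment one value is unsigned, every
        // interaction with the signed-int APIs needs a conversion.
        uint len = (uint)s.Length;
        Console.WriteLine(len); // 5

        // The mental-model switch: unsigned arithmetic silently wraps
        // around instead of going negative, a classic loop-bound trap.
        uint empty = 0;
        uint wrapped = unchecked(empty - 1);
        Console.WriteLine(wrapped); // 4294967295, not -1
    }
}
```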

OTHER TIPS

Because the core .NET types are CLS-compliant, and the CLS (Common Language Specification) does not include unsigned data types.

Check

Why are unsigned int's not CLS compliant?

for the details.
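You can see the CLS rule enforced by the compiler. In this sketch (names are my own), marking the assembly with `[assembly: CLSCompliant(true)]` makes any publicly exposed unsigned type produce a compliance warning:

```csharp
using System;

[assembly: CLSCompliant(true)]

public class Demo
{
    // public uint Count;  // would trigger warning CS3003: not CLS-compliant
    public int Count;      // fine: int is usable from every .NET language

    static void Main()
    {
        // The attribute itself records the compliance claim.
        var attr = new CLSCompliantAttribute(true);
        Console.WriteLine(attr.IsCompliant); // True
    }
}
```

If String.Length were a uint, every CLS-compliant public API built on top of it would face the same warning, which is why the framework keeps int as the general-purpose choice.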

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow