This is an interesting question. I did some research, and the MS documentation for the Decimal
MinValue/MaxValue fields explicitly says (the wording is identical for both fields; the sign difference is shown in brackets for clarity):
The value of this constant is [negative] 79,228,162,514,264,337,593,543,950,335.
The following code compiles with no problem:
public const decimal MaxValue = 79228162514264337593543950335M;
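For anyone who wants to verify this locally, here is a quick sketch (plain C#, nothing assumed beyond System.Decimal) showing that the documented value matches the actual field:

```csharp
using System;

class Program
{
    static void Main()
    {
        // The documented magnitude: 79,228,162,514,264,337,593,543,950,335
        const decimal max = 79228162514264337593543950335M;

        Console.WriteLine(decimal.MaxValue == max);   // True
        Console.WriteLine(decimal.MinValue == -max);  // True
        Console.WriteLine(decimal.MaxValue);          // 79228162514264337593543950335
    }
}
```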
- Edit -
Note: below is the assignment of these fields from the .NET 4.5 source (PDB source downloaded from MS), together with the decimal constructor that this code calls. Note that the fields are declared as const values. It appears that, at least for 4.5, the documentation is wrong. (This would not be the first time MS documentation has been incorrect.) It also appears that this source won't compile as-is, as @Daniel pointed out in a comment, since a C# const must be initialized with a compile-time constant, not a constructor call.
public const Decimal MinValue = new Decimal(-1, -1, -1, true, (byte) 0);
public const Decimal MaxValue = new Decimal(-1, -1, -1, false, (byte) 0);

public Decimal(int lo, int mid, int hi, bool isNegative, byte scale)
{
    // The scale (digits after the decimal point) must be 0-28
    if ((int) scale > 28)
        throw new ArgumentOutOfRangeException("scale", Environment.GetResourceString("ArgumentOutOfRange_DecimalScale"));
    this.lo = lo;
    this.mid = mid;
    this.hi = hi;
    // The scale occupies bits 16-23 of flags; bit 31 is the sign
    this.flags = (int) scale << 16;
    if (!isNegative)
        return;
    this.flags |= int.MinValue;
}
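The lo/mid/hi/flags layout that the constructor writes can be observed from the outside with decimal.GetBits, which returns those same four fields. A small sketch (assuming nothing beyond the standard library):

```csharp
using System;

class BitsDemo
{
    static void Main()
    {
        // decimal.GetBits returns { lo, mid, hi, flags } -- the same four
        // fields the constructor above assigns.
        int[] max = decimal.GetBits(decimal.MaxValue);
        int[] min = decimal.GetBits(decimal.MinValue);

        // MaxValue: all 96 mantissa bits set (lo = mid = hi = -1), scale 0
        Console.WriteLine(string.Join(", ", max));  // -1, -1, -1, 0

        // MinValue differs only in the sign bit (bit 31 of flags)
        Console.WriteLine(min[3] == int.MinValue);  // True
    }
}
```

This matches the constructor calls above: (-1, -1, -1, isNegative, 0) is the maximum 96-bit magnitude with scale 0, and only the sign flag distinguishes MinValue from MaxValue.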
Also note: in the 2.0 framework, the constant is declared outright:
public const Decimal MaxValue = 79228162514264337593543950335m;
So, inconsistency and incorrect documentation are the conclusions I have reached. I will leave it to others to check the other framework versions for a pattern.