I had the same question. I prefer code to be checked by default in my company's codebase, because overflow side effects can cost a lot and be hard to diagnose. Discovering the real cause of those side effects can be very valuable.
The question is, what do we lose in terms of performance?
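Before the numbers, a minimal sketch (my own illustration, not part of the benchmark below) of what the two contexts actually change: checked arithmetic throws an OverflowException at the point of failure, while unchecked arithmetic silently wraps around.

```csharp
using System;

class OverflowDemo
{
    static void Main()
    {
        int max = int.MaxValue;

        // unchecked: the result silently wraps around to int.MinValue
        int wrapped = unchecked(max + 1);
        Console.WriteLine(wrapped); // -2147483648

        // checked: the same operation throws instead of corrupting data
        try
        {
            int overflowed = checked(max + 1);
            Console.WriteLine(overflowed);
        }
        catch (OverflowException)
        {
            Console.WriteLine("OverflowException caught");
        }
    }
}
```

The wrapped value is exactly the kind of silent corruption that is hard to trace back to its origin; the checked version fails loudly where the bug actually is.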
Here is a very simple benchmark:
static void Main(string[] args)
{
    long c = 0;
    var sw = new Stopwatch();

    // time the loop with overflow checking disabled
    sw.Start();
    unchecked
    {
        for (long i = 0; i < 500000000; i++) c += 1;
    }
    sw.Stop();
    Console.WriteLine("Unchecked: " + sw.ElapsedMilliseconds);

    // time the same loop with overflow checking enabled
    c = 0;
    sw.Restart();
    checked
    {
        for (long i = 0; i < 500000000; i++) c += 1;
    }
    sw.Stop();
    Console.WriteLine("Checked: " + sw.ElapsedMilliseconds);
}
In the generated IL, I see that the checked and unchecked keywords determine whether the add or the add.ovf instruction is emitted (in both Debug and Release configurations):
// unchecked block
IL_001c: ldloc.2
IL_001d: ldc.i4.1
IL_001e: conv.i8
IL_001f: add

// checked block
IL_0066: ldloc.2
IL_0067: ldc.i4.1
IL_0068: conv.i8
IL_0069: add.ovf
Results (x64 host)
Debug
- Unchecked: 2371
- Checked: 2437
Release
- Unchecked: 2088
- Checked: 2266
Other results, with the longs replaced by ints (x64 host)
Debug
- Unchecked: 1665
- Checked: 1568
Release
- Unchecked: 189
- Checked: 566
The performance hit is there, but it looks like choosing the right variable type matters more than choosing between checked and unchecked. Anyway, it doesn't change my opinion: I'll turn on "Check for arithmetic overflow/underflow" in all our projects (under Advanced Build Settings).
When in need of performance, I'll simply use an unchecked block.
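The block form isn't the only option: unchecked (and checked) also work as operators on a single expression, which is handy when just one hot computation needs to opt out of project-wide checking. A minimal sketch, assuming overflow checking is enabled project-wide (the Combine hash helper is my own illustrative example):

```csharp
using System;

class HashDemo
{
    // A hash combiner is a typical place where wrap-around is intentional,
    // so this one expression opts out of the project-wide overflow check.
    static int Combine(int h1, int h2)
    {
        return unchecked(h1 * 31 + h2);
    }

    static void Main()
    {
        // Wraps silently instead of throwing, even with checking enabled
        Console.WriteLine(Combine(int.MaxValue, 17));
    }
}
```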