Yes, the compiler is optimising it.
It knows that a struct can never be null, so the result of casting it to an object can never be null either; it therefore just sets b
to false in the first sample. In fact, if you use ReSharper, it will warn you that the expression is always false.
For the second of course, a nullable can be null so it has to do the check.
(You can also use Reflector
to inspect the compiler-generated IL code to verify this.)
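The question's exact samples aren't reproduced here, but they presumably boil down to comparisons along these lines (a minimal sketch; `MyStruct` and the variable names are illustrative):

```csharp
using System;

public struct MyStruct {}

internal static class Demo
{
    private static void Main()
    {
        MyStruct s = new MyStruct();
        // Boxing a plain struct always produces a non-null reference,
        // so the compiler can fold this comparison to a constant false.
        bool b1 = (object)s == null;

        MyStruct? n = null;
        // Boxing a Nullable<T> that has no value produces a null reference,
        // so this comparison requires a genuine runtime check.
        bool b2 = (object)n == null;

        Console.WriteLine(b1); // False
        Console.WriteLine(b2); // True
    }
}
```

This is the asymmetry at the heart of the answer: a boxed struct is never null, but a boxed null nullable is.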
The original test code is flawed: the compiler can see that the nullable struct is always null, so it can optimize away that check too. Worse, in a release build the compiler realises that b
is never used afterwards and can optimize away the loops entirely.
To prevent that, and to show what would happen in more realistic code, test it like so:
using System;
using System.Diagnostics;

namespace ConsoleApplication1
{
    internal class Program
    {
        private static void Main(string[] args)
        {
            bool b = true;
            MyStruct? s1 = getNullableStruct();
            Stopwatch sw = Stopwatch.StartNew();

            for (int i = 0; i < 10000000; i++)
            {
                b &= (object)s1 == null; // Note: Redundant cast to object.
            }

            Console.WriteLine(sw.Elapsed);

            MyStruct s2 = getStruct();
            sw.Restart();

            for (int i = 0; i < 10000000; i++)
            {
                b &= (object)s2 == null;
            }

            Console.WriteLine(sw.Elapsed);
            Console.WriteLine(b); // Use b so the loops can't be optimized away.
        }

        private static MyStruct? getNullableStruct()
        {
            return null;
        }

        private static MyStruct getStruct()
        {
            return new MyStruct();
        }
    }

    public struct MyStruct {}
}