Well, the specification says (I have changed to x and y for less confusion here):
• Otherwise, if y has a type Y and an implicit conversion exists from x to Y, the result type is Y. At run-time, x is first evaluated. If x is not null, x is unwrapped to type X0 (if X exists and is nullable) and converted to type Y, and this becomes the result. Otherwise, y is evaluated and becomes the result.
This is what happens. First, the left-hand side x, which here is just a, is checked for null. It is not null in itself, so the left-hand side is used: the implicit conversion runs, and its result of type B is ... null.
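A minimal sketch of the setup assumed here (the class names A and B and a conversion that returns null even for a non-null source mirror the scenario discussed; the exact bodies are illustrative):

```csharp
using System;

class B { }

class A
{
    // A user-defined implicit conversion that returns null
    // even though the source instance is not null.
    public static implicit operator B(A a) => null;
}

class Program
{
    static void Main()
    {
        A a = new A();

        // a itself is not null, so the spec picks the left operand,
        // runs the implicit conversion A -> B, and its result (null)
        // becomes the result of the whole ?? expression.
        B b = a ?? new B();

        Console.WriteLine(b == null); // True
    }
}
```

So the null check happens before the conversion, not after it, which is exactly why the right-hand side is never evaluated.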
Note that this is different from:
A a = new A();
B b = (B)a ?? new B();
In this case the left operand is an expression (x) which is null in itself, and the result becomes the right-hand side (y).
Maybe implicit conversions between reference types should return null (if and) only if the original is null, as a good practice?
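A sketch of what that good practice could look like (the Celsius/Fahrenheit types are purely illustrative, not from the original question):

```csharp
using System;

class Celsius
{
    public double Degrees { get; }
    public Celsius(double degrees) => Degrees = degrees;
}

class Fahrenheit
{
    public double Degrees { get; }
    public Fahrenheit(double degrees) => Degrees = degrees;

    // Suggested practice: the conversion yields null
    // if and only if the source reference is null.
    public static implicit operator Fahrenheit(Celsius c) =>
        c == null ? null : new Fahrenheit(c.Degrees * 9.0 / 5.0 + 32.0);
}

class Program
{
    static void Main()
    {
        Celsius c = new Celsius(100.0);

        // c is not null, so the conversion produces a non-null result
        // and the right-hand side is never evaluated.
        Fahrenheit f = c ?? new Fahrenheit(32.0);

        Console.WriteLine(f.Degrees); // 212
    }
}
```

With a conversion written this way, "left operand is not null" and "converted left operand is not null" always agree, so the surprise above cannot occur.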
I guess the guys who wrote the spec could have done it like this (but did not):
• Otherwise, if y has a type Y and an implicit conversion exists from x to Y, the result type is Y. At run-time, x is first evaluated and converted to type Y. If the output of that conversion is not null, that output becomes the result. Otherwise, y is evaluated and becomes the result.
Maybe that would have been more intuitive? It would have forced the runtime to call your implicit conversion regardless of whether the input to the conversion was null or not. That should not be too expensive if typical implementations quickly determined that null → null.