The only way to find out is to measure.
The "type1" variant isn't reliable or recommended in any way, since not all types can be constructed. Even worse, it allocates memory that will need to be garbage collector and invokes the object constructors.
For the remaining two options, on my machine "type3" is about twice as fast as "type1" in both debug and release builds. Remember that this holds only for my test - the results may differ on other processors, machines, compilers, or .NET versions.
// "type1": call typeof inline on every iteration.
var sw = System.Diagnostics.Stopwatch.StartNew();
for (int i = 0; i < 10000000; i++)
{
    var y = typeof(Program).ToString();
}
sw.Stop();
Console.WriteLine(sw.ElapsedMilliseconds);

// "type3": reuse a Type reference captured once up front.
var typeReference = typeof(Program);
sw.Restart();
for (int i = 0; i < 10000000; i++)
{
    var y = typeReference.ToString();
}
sw.Stop();
Console.WriteLine(sw.ElapsedMilliseconds);
That said, it's a bit concerning that this question is being asked without a clear requirement. If you had noticed a performance problem, you would likely have already profiled it and would know which option was better. That tells me this is likely premature optimization - you know the saying, "premature optimization is the root of all evil".
Code is not measured by performance alone. It is also measured by correctness, developer productivity, and maintainability. Increasing the complexity of your code without a strong reason just shifts that cost elsewhere. What might have been a non-issue turns into a real loss of productivity, both now and for future maintainers of the application.
My recommendation would be to always use the "type1" variant. The measurement code I listed isn't a real-world scenario. Caching the result of typeof in a reference variable likely has a ton of side effects, particularly around the way .NET loads assemblies. Rather than having them load only when needed, it might end up loading them all on every use of the application - turning a theoretical performance optimization into a very real performance problem.
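For what it's worth, here is a rough sketch of what the cached-reference pattern tends to look like once it spreads through a codebase; the class and member names are mine, not from the question. The static fields are initialized when the holder type is first touched, so any assemblies containing the referenced types may be pulled in at that point instead of at first genuine use - which is the eager-loading concern described above.

using System;

// Hypothetical cache of Type references (illustrative only).
static class CachedTypes
{
    // Resolved when CachedTypes is first used, not when ProgramType is actually needed.
    public static readonly Type ProgramType = typeof(Program);
}

class Program
{
    static void Main()
    {
        // Reads the cached reference instead of calling typeof at the call site.
        Console.WriteLine(CachedTypes.ProgramType.ToString());
    }
}

With a single type in a single assembly the difference is invisible; the concern only becomes real when the cached types live in assemblies that would otherwise load lazily.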