Question

OK, this is merely curiosity; it serves no real-world purpose.

I know that with expression trees you can generate MSIL on the fly, just like the regular C# compiler does. Since the compiler can decide which optimizations to apply, I'm tempted to ask what happens with the IL generated during Expression.Compile(). Basically two questions:

  1. Since at compile time the compiler can produce (maybe slightly) different IL in debug mode and release mode, is there ever a difference in the IL generated by compiling an expression when the application is built in debug mode versus release mode?

  2. Also, the JIT, which converts IL to native code at run time, should behave very differently in debug mode and release mode. Is this also the case with compiled expressions? Or is the IL from expression trees not JITted at all?

My understanding could be flawed; correct me if that's the case.

Note: I'm considering the case where the debugger is detached. I'm asking about the default configuration settings that come with "Debug" and "Release" in Visual Studio.
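
To make the scenario concrete, here is a minimal example of the kind of run-time compilation I mean (just an illustrative squaring lambda):

using System;
using System.Linq.Expressions;

class CompileExample
{
    static void Main()
    {
        // The C# compiler turns this into an expression tree rather than IL.
        Expression<Func<int, int>> square = x => x * x;

        // Compile() emits IL at run time (into a DynamicMethod on the standard path)
        // and returns a delegate bound to it.
        Func<int, int> f = square.Compile();
        Console.WriteLine(f(6)); // 36
    }
}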


Solution

Since at compile time the compiler can produce (maybe slightly) different IL in debug mode and release mode, is there ever a difference in the IL generated by compiling an expression when the application is built in debug mode versus release mode?

This one actually has a very simple answer: no. Given two identical LINQ/DLR expression trees, there will be no difference in the generated IL if one is compiled by an application running in Release mode, and the other in Debug mode. I'm not sure how that would be implemented anyway; I don't know of any reliable way for code within System.Core to know that your project is running a debug build or release build.

This answer may actually be misleading, however. The IL emitted by the expression compiler may not differ between debug and release builds, but in cases where expression trees are emitted by the C# compiler, it is possible that the structure of the expression trees themselves may differ between debug and release modes. I am fairly well acquainted with the LINQ/DLR internals, but not so much with the C# compiler, so I can only say that there may be a difference in those cases (and there may not).

Also, the JIT, which converts IL to native code at run time, should behave very differently in debug mode and release mode. Is this also the case with compiled expressions? Or is the IL from expression trees not JITted at all?

The machine code that the JIT compiler spits out will not necessarily be vastly different for pre-optimized IL versus unoptimized IL. The results may well be identical, particularly if the only differences are a few extra temporary values. I suspect the two will diverge more in larger and more complex methods, as there is usually an upper limit to the time/effort the JIT will spend optimizing a given method. But it sounds like you are more interested in how the quality of compiled LINQ/DLR expression trees compares to, say, C# code compiled in debug or release mode.

I can tell you that the LINQ/DLR LambdaCompiler performs very few optimizations--fewer than the C# compiler in Release mode for sure; Debug mode may be closer, but I would put my money on the C# compiler being slightly more aggressive. The LambdaCompiler generally does not attempt to reduce the use of temporary locals, and operations like conditionals, comparisons, and type conversions will typically use more intermediate locals than you might expect. I can actually only think of three optimizations that it does perform:

  1. Nested lambdas will be inlined when possible (and "when possible" tends to be "most of the time"). This can help a lot, actually. Note, this only works when you Invoke a LambdaExpression; it does not apply if you invoke a compiled delegate within your expression (see the sketch after this list).

  2. Unnecessary/redundant type conversions are omitted, at least in some cases.

  3. If the value of a TypeBinaryExpression (i.e., [value] is [Type]) is known at compile time, that value may be inlined as a constant.
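
A minimal sketch of point 1 (the names here are mine, purely for illustration): the first lambda invokes a LambdaExpression directly, so the compiler is free to inline its body; the second invokes an already-compiled delegate held in a ConstantExpression, which the compiler treats as opaque.

using System;
using System.Linq.Expressions;

class InliningSketch
{
    static void Main()
    {
        ParameterExpression x = Expression.Parameter(typeof(int), "x");

        // Inner lambda as an expression tree: y => y * y
        ParameterExpression y = Expression.Parameter(typeof(int), "y");
        Expression<Func<int, int>> square =
            Expression.Lambda<Func<int, int>>(Expression.Multiply(y, y), y);

        // Case 1: Expression.Invoke on the LambdaExpression itself.
        // The expression compiler can inline the body of 'square' here.
        var inlinable = Expression.Lambda<Func<int, int>>(
            Expression.Invoke(square, x), x);

        // Case 2: invoking a pre-compiled delegate held in a constant.
        // The compiler only sees an opaque Func<int, int>, so no inlining.
        Func<int, int> compiledSquare = square.Compile();
        var opaque = Expression.Lambda<Func<int, int>>(
            Expression.Invoke(Expression.Constant(compiledSquare), x), x);

        Console.WriteLine(inlinable.Compile()(5)); // 25
        Console.WriteLine(opaque.Compile()(5));    // 25, but via a delegate call
    }
}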

Apart from #3, the expression compiler does no "expression-based" optimizations; that is, it will not analyze the expression tree looking for optimization opportunities. The other optimizations in the list occur with little or no context about other expressions in the tree.

Generally, you should assume that the IL resulting from a compiled LINQ/DLR expression is considerably less optimized than the IL produced by the C# compiler. However, the resulting IL code is eligible for JIT optimization, so it is difficult to assess the real world performance impact unless you actually try to measure it with equivalent code.
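
If you do want to measure it, a rough sketch along these lines (Stopwatch-based, so treat the numbers as indicative only; all names are illustrative) compares a compiled expression against the equivalent lambda compiled by the C# compiler:

using System;
using System.Diagnostics;
using System.Linq.Expressions;

class PerfSketch
{
    static void Main()
    {
        Expression<Func<int, int>> expr = x => x * x;
        Func<int, int> fromExpressionTree = expr.Compile(); // IL emitted by the expression compiler
        Func<int, int> fromCSharpCompiler = x => x * x;     // IL emitted by the C# compiler

        Measure("expression tree", fromExpressionTree);
        Measure("C# lambda      ", fromCSharpCompiler);
    }

    static void Measure(string label, Func<int, int> f)
    {
        f(1); // warm-up call so the delegate is JITted before timing
        var sw = Stopwatch.StartNew();
        long sum = 0;
        for (int i = 0; i < 100000000; i++)
            sum += f(i);
        sw.Stop();
        Console.WriteLine(label + ": " + sw.ElapsedMilliseconds + " ms (checksum " + sum + ")");
    }
}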

One of the things to keep in mind when composing code with expression trees is that, in effect, you are the compiler [1]. LINQ/DLR trees are designed to be emitted by some other compiler infrastructure, like the various DLR language implementations. It's therefore up to you to handle optimizations at the expression level. If you are a sloppy compiler and emit a bunch of unnecessary or redundant code, the generated IL will be larger and less likely to be aggressively optimized by the JIT compiler. So be mindful of the expressions you construct, but don't fret too much. If you need highly optimized IL, you should probably just emit it yourself. But in most cases, LINQ/DLR trees perform just fine.


[1] If you have ever wondered why LINQ/DLR expressions are so pedantic about requiring exact type matching, it's because they are intended to serve as a compiler target for multiple languages, each of which may have different rules regarding method binding, implicit and explicit type conversions, etc. Therefore, when constructing LINQ/DLR trees manually, you must do the work that a compiler would normally do behind the scenes, like automatically inserting code for implicit conversions.
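
For example (a minimal sketch, with Math.Abs(long) chosen purely for illustration), the conversion that C# would insert implicitly has to be written out as an explicit Convert node:

using System;
using System.Linq.Expressions;

class ConversionSketch
{
    static void Main()
    {
        ParameterExpression i = Expression.Parameter(typeof(int), "i");

        // Passing the int parameter straight to Math.Abs(long) throws at tree-construction
        // time, because the expression API demands an exact type match:
        // Expression.Call(typeof(Math).GetMethod("Abs", new[] { typeof(long) }), i);

        // The implicit int -> long conversion must be inserted by hand, as a compiler would:
        var call = Expression.Call(
            typeof(Math).GetMethod("Abs", new[] { typeof(long) }),
            Expression.Convert(i, typeof(long)));

        var abs = Expression.Lambda<Func<int, long>>(call, i).Compile();
        Console.WriteLine(abs(-42)); // 42
    }
}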

Other tips

Squaring an int.

I am not sure if this shows very much, but I came up with the following example:

// Needs: using System; using System.Linq.Expressions;
// (This targets .NET Framework; RunAndSave and CompileToMethod are not available on .NET Core.)

// make a delegate the normal way and find the length of its IL:
Func<int, int> f = x => x * x;
Console.WriteLine(f.Method.GetMethodBody().GetILAsByteArray().Length);

// make an expression tree for the same lambda
Expression<Func<int, int>> e = x => x * x;

// one approach to finding the IL length: Compile() backs the delegate with a
// DynamicMethod, which can be dug out via the private m_owner field
var methInf = e.Compile().Method;
var owner = (System.Reflection.Emit.DynamicMethod)methInf.GetType()
    .GetField("m_owner", System.Reflection.BindingFlags.NonPublic | System.Reflection.BindingFlags.Instance)
    .GetValue(methInf);
Console.WriteLine(owner.GetILGenerator().ILOffset);

// another approach to finding the IL length: compile the tree into a MethodBuilder
var an = new System.Reflection.AssemblyName("myTest");
var assem = AppDomain.CurrentDomain.DefineDynamicAssembly(an, System.Reflection.Emit.AssemblyBuilderAccess.RunAndSave);
var module = assem.DefineDynamicModule("myTest");
var type = module.DefineType("myClass");
var methBuilder = type.DefineMethod("myMeth", System.Reflection.MethodAttributes.Static);
e.CompileToMethod(methBuilder);
Console.WriteLine(methBuilder.GetILGenerator().ILOffset);

Results:

In the Debug configuration, the IL length of the compile-time method is 8 bytes, while the IL length of the method emitted from the expression tree is 4 bytes.

In the Release configuration, the IL length of the compile-time method is 4 bytes, and the IL length of the emitted method is also 4 bytes.

The compile-time method as seen by IL DASM in Debug mode:

.method private hidebysig static int32  '<Main>b__0'(int32 x) cil managed
{
  .custom instance void [mscorlib]System.Runtime.CompilerServices.CompilerGeneratedAttribute::.ctor() = ( 01 00 00 00 ) 
  // Code size       8 (0x8)
  .maxstack  2
  .locals init ([0] int32 CS$1$0000)
  IL_0000:  ldarg.0
  IL_0001:  ldarg.0
  IL_0002:  mul
  IL_0003:  stloc.0
  IL_0004:  br.s       IL_0006
  IL_0006:  ldloc.0
  IL_0007:  ret
}

and Release:

.method private hidebysig static int32  '<Main>b__0'(int32 x) cil managed
{
  .custom instance void [mscorlib]System.Runtime.CompilerServices.CompilerGeneratedAttribute::.ctor() = ( 01 00 00 00 ) 
  // Code size       4 (0x4)
  .maxstack  8
  IL_0000:  ldarg.0
  IL_0001:  ldarg.0
  IL_0002:  mul
  IL_0003:  ret
}

Disclaimer: I am not sure whether one can conclude much from this (it is really just a long "comment"), but maybe Compile() always runs with "optimizations" on?

Regarding IL

As other answers have pointed out, detecting debug/release at run time is not really a 'thing', because it's a compile-time decision controlled by project configuration, not something reliably detectable in the built assembly. The runtime could reflect over the AssemblyConfigurationAttribute on the assembly and check its Configuration property, but that would be an inexact solution for something so fundamental to .NET, because that string can literally be anything.

Moreover, that attribute isn't guaranteed to exist in an assembly, and since we can mix and match release and debug assemblies in the same process, it's practically impossible to say 'this is a debug process' or 'this is a release process'.
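
For what it's worth, a minimal sketch of that inexact check might look like this (the attribute may hold "Debug" or "Release", an empty string, any other text, or be missing entirely):

using System;
using System.Reflection;

class ConfigurationSketch
{
    static void Main()
    {
        // Reads the free-form configuration string the build may (or may not) have stamped on the assembly.
        var attr = Assembly.GetExecutingAssembly()
                           .GetCustomAttribute<AssemblyConfigurationAttribute>();

        Console.WriteLine(attr == null
            ? "(no AssemblyConfiguration attribute)"
            : "Configuration: '" + attr.Configuration + "'");
    }
}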

Finally, as others have mentioned, DEBUG != UNOPTIMISED. The concept of a 'debuggable' assembly is more about conventions than anything else (reflected in the default compilation settings for a .NET project): conventions which control the level of detail in a PDB (not the existence of one, by the way) and whether code is optimised or not. As such, it's possible to have an optimised debug assembly, an unoptimised release assembly, and even an optimised release assembly with full PDB info that can be debugged just the same as a standard 'debug' assembly.
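
As a small illustration of those conventions, the compiler records them in a DebuggableAttribute on the assembly, and those flags (together with whether a debugger is attached) are what the JIT consults when deciding how aggressively to optimise, not the configuration name. A sketch of inspecting them:

using System;
using System.Diagnostics;
using System.Reflection;

class DebuggableSketch
{
    static void Main()
    {
        var attr = Assembly.GetExecutingAssembly()
                           .GetCustomAttribute<DebuggableAttribute>();

        if (attr == null)
        {
            Console.WriteLine("No DebuggableAttribute: the JIT is free to optimise.");
        }
        else
        {
            // Default Debug builds set both flags; default Release builds leave the optimiser enabled.
            Console.WriteLine("IsJITOptimizerDisabled: " + attr.IsJITOptimizerDisabled);
            Console.WriteLine("IsJITTrackingEnabled:   " + attr.IsJITTrackingEnabled);
        }
    }
}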

Also, the expression tree compiler translates the expressions within a lambda to IL pretty much directly (except for some nuances, like eliding redundant conversions from a derived reference type to a base reference type), so the IL that is generated is only as optimised as the expression tree you've written. It's therefore unlikely that the IL differs between a Debug and a Release build, because there is, effectively, no such thing as a Debug or Release process, only assemblies, and, as mentioned above, there's no reliable way to detect how an assembly was built.
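
A small sketch of the kind of redundant conversion meant above (object.ReferenceEquals is used only for illustration): the explicit upcasts to object add nothing, since a string reference is already usable wherever an object is expected, and the expression compiler may simply drop them.

using System;
using System.Linq.Expressions;

class RedundantConvertSketch
{
    static void Main()
    {
        ParameterExpression s = Expression.Parameter(typeof(string), "s");

        // object.ReferenceEquals(s, s), with unnecessary Convert nodes from string to object.
        var body = Expression.Call(
            typeof(object).GetMethod("ReferenceEquals"),
            Expression.Convert(s, typeof(object)),
            Expression.Convert(s, typeof(object)));

        var f = Expression.Lambda<Func<string, bool>>(body, s).Compile();
        Console.WriteLine(f("abc")); // True
    }
}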

But what about the JIT?

When it comes to the JIT translating IL into native code, however, it's worth noting that the JIT (although I'm not sure about .NET Core) does behave differently if a process is started with a debugger attached versus started without one. Try starting a release build with F5 from Visual Studio and compare the debugging behaviour with attaching to the process after it's already running.

Now, those differences might not primarily be due to optimisations (a large part of the difference is probably ensuring that the PDB info is maintained in the generated machine code), but you'll see far more 'method is optimised' messages in the stack trace when attaching to a release process than you will, if at all, when running it with the debugger attached from the start.

The thrust of my point is that if the presence of a debugger can affect JIT behaviour for statically built IL, it probably also affects JIT behaviour for dynamically built IL, such as bound delegates or, in this case, compiled expression trees. Just how different the result is, though, I'm not sure we can say.

Licensed under: CC-BY-SA with attribution
Not affiliated with Stack Overflow