Question

I was messing around with the IL code inside my DLL file (just for learning purposes). I wanted to see what would happen if I injected my own IL code, for example a box call: IL_9999: box !T. I'm wondering whether the offset value actually matters, though...

Does an IL offset of 9999 cost more in terms of performance than an IL offset of, say, 1000? My guess is it doesn't, since the compiler itself, while going in order, skips offset values:

IL_000d:  ldc.i4.3
IL_000e:  box        [mscorlib]System.Int32
IL_0013:  call       instance int32 [mscorlib]System.Enum::CompareTo(object)
IL_0018:  call       void [mscorlib]System.Console::WriteLine(int32)
IL_001d:  nop

Also, does it harm my application if my instruction with offset 9999 is thrown into the middle of other instructions whose offsets are far lower in value?


Solution

All of those IL_XXXX: prefixes are not actually offsets; they are labels. Decompilers simply attach a label to every instruction and name it after that instruction's byte offset. They do this because label names are not preserved in compiled code (and compiled C# probably wouldn't have meaningful label names even if they were), yet the decompiler needs some labels to use as targets for branch instructions. That also explains the gaps in your listing: box at IL_000e occupies five bytes (a one-byte opcode plus a four-byte metadata token), so the next instruction lands at IL_0013.
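
To see that a label is nothing more than a name, here is a minimal hand-written IL sketch (a hypothetical Demo.il; the method name and string are made up) where the branch target is called SKIP rather than anything offset-like; IL_9999 would work just as well:

.assembly extern mscorlib {}
.assembly Demo {}

.method static void Main() cil managed
{
    .entrypoint
    ldc.i4.0               // push 0 (false)
    brfalse.s  SKIP        // ilasm resolves the target by name, not by number
    ldstr      "never printed"
    call       void [mscorlib]System.Console::WriteLine(string)
SKIP:                      // any identifier works here; only its position matters
    ret
}

Rename SKIP consistently to anything else and ilasm emits byte-for-byte identical code; the name itself never makes it into the output assembly.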

So, if you modify the "offset" and then reassemble the code using ilasm, you're not actually changing the code in any way, and because of that it can't have any effect on performance. (One caveat: if a branch instruction targets the label you rename, rename it at the branch site too, since ilasm matches labels purely by name.)
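
For instance, a hypothetical edit of your own fragment with the labels renamed arbitrarily assembles to exactly the same bytes, because none of these labels is referenced by a branch and ilasm simply discards them:

FOO:       ldc.i4.3
IL_9999:   box        [mscorlib]System.Int32
BANANA:    call       instance int32 [mscorlib]System.Enum::CompareTo(object)
IL_0001:   call       void [mscorlib]System.Console::WriteLine(int32)
LAST:      nop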

One way you could verify this is to decompile your modified assembly again: the labels come back regenerated from the real byte offsets, with your 9999 nowhere in sight.
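
Assuming the Windows SDK ildasm and ilasm tools (the file names here are just placeholders for your edited IL), the round trip looks something like this:

ilasm Modified.il /dll /output=Modified.dll
ildasm Modified.dll /out=RoundTrip.il

In RoundTrip.il, the instruction you tagged IL_9999 reappears under a plain IL_XXXX label derived from its true byte offset, confirming that the edit changed nothing.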
