There's a little more to this than just saying: "You didn't turn on the optimizer."
At least at a guess, you didn't (initially) turn on the optimizer in either case. Despite that, the unoptimized C# version ran almost as fast as the optimized D version. Why would that be?
The answer stems from the difference in compilation models. D does static compilation, so the source is translated to an executable containing machine code, which then executes. The only optimization that happens is whatever is done during that static compilation.
C#, by contrast, compiles source code to MSIL, an intermediate language (i.e., basically a bytecode). That is then translated to machine code by the JIT compiler built into the CLR (the common language runtime, Microsoft's virtual machine for MSIL). The optimization switch on the C# compiler only controls that initial compilation from source to bytecode. When you run the code, the JIT compiler does its thing, and it does its own optimization whether or not you asked for optimization in the initial translation from source to bytecode. That's why you got much faster results with C# than with D when you didn't specify optimization for either one.
I feel obliged to add, however, that both results you got (7 and 8 seconds for D and C# respectively) are really pretty lousy. A decent optimizer should recognize that the final output doesn't depend on the loop at all, and eliminate the loop completely. Just for comparison, I did (about) the most straightforward C++ translation I could:
#include <iostream>
#include <ctime>

int main() {
    // summer is never used afterward, so an optimizer is free to drop the loop entirely.
    long long summer = 0;
    auto start = std::clock();
    for (long long i = 0; i < 10000000000LL; i++)
        summer++;
    std::cout << double(std::clock() - start) / CLOCKS_PER_SEC << '\n';
}
Compiled with VC++ using cl /O2b2 /GL, this consistently shows a time of 0.
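In case it's useful, here's a minimal sketch of a variant that actually prints summer, plus a version that defeats the optimizer. This is an illustration under my own assumptions rather than a tested claim about any particular compiler, but in my experience mainstream optimizers (GCC and Clang at -O2, for instance) will still replace a counting loop like this with its closed-form final value even when the result is printed, so the reported time stays at essentially 0. If you want to time the loop actually executing, you have to make the work opaque to the optimizer, e.g. with a volatile counter:

#include <iostream>
#include <ctime>

int main() {
    // Printing summer isn't enough: the loop has an obvious closed form,
    // so a good optimizer can still replace it with summer = 10000000000.
    long long summer = 0;
    auto start = std::clock();
    for (long long i = 0; i < 10000000000LL; i++)
        summer++;
    std::cout << summer << " in "
              << double(std::clock() - start) / CLOCKS_PER_SEC << " s\n";

    // A volatile counter forces every increment to actually happen, which
    // defeats the closed-form rewrite, so this version times real work.
    volatile long long forced = 0;
    auto start2 = std::clock();
    for (long long i = 0; i < 10000000000LL; i++)
        forced = forced + 1;
    std::cout << forced << " in "
              << double(std::clock() - start2) / CLOCKS_PER_SEC << " s\n";
}

Something along those lines is about the only way to get a meaningful nonzero time out of a loop this trivial; otherwise you're mostly measuring how good each compiler's dead-code elimination is.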