Question

I have the two following pieces of code in C# and D, the goal was to compare speed in a simple loop.

D:

import std.stdio;
import std.datetime;

void main() {
    StopWatch timer;
    long summer = 0;
    timer.start();
    for (long i = 0; i < 10000000000; i++){
        summer++;
    }
    timer.stop();
    long interval_t = timer.peek().msecs;
    writeln(interval_t);
}

Output: about 30 seconds

C#:

using System;
using System.Diagnostics;

class Program{
    static void Main(){
        Stopwatch timer = new Stopwatch();
        timer.Start();
        long summer = 0;
        for(long i = 0; i < 10000000000; i++){
            summer++;
        }
        timer.Stop();
        Console.WriteLine(timer.ElapsedMilliseconds);
    }
}

Output: about 8 seconds

Why is the C# code so much faster?

Solution

There's a little more to this than just saying: "You didn't turn on the optimizer."

At least at a guess, you didn't (initially) turn on the optimizer in either case. Even so, the C# version ran almost as fast without optimization as the D version did with optimization turned on. Why would that be?

The answer stems from the difference in compilation models. D does static compilation, so the source is translated to an executable containing machine code, which then executes. The only optimization that happens is whatever is done during that static compilation.

C#, by contrast, compiles the source to MSIL, an intermediate language (essentially a bytecode). That is then translated to machine code at run time by the JIT compiler built into the CLR (the Common Language Runtime, Microsoft's virtual machine for MSIL). The optimization switch you pass to the C# compiler only controls that initial translation from source to bytecode. When you run the program, the JIT compiler does its thing, and it applies its own optimizations whether or not you asked for optimization in the source-to-bytecode step. That's why, with optimization specified for neither compiler, you get much faster results from C# than from D.
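
One way to see how much of the C# figure comes from the JIT rather than from the C# compiler is to pin the JIT's behaviour down per method. The sketch below is only an illustration (the method names and the smaller iteration count are mine): MethodImplOptions.NoOptimization asks the runtime to skip optimization for that one method, so comparing the two timings gives a rough idea of what the JIT's optimizer contributes regardless of any switch passed to csc.

using System;
using System.Diagnostics;
using System.Runtime.CompilerServices;

class JitOptimizationDemo
{
    const long N = 1000000000; // 1e9 here, smaller than the original 1e10 so it finishes quickly

    // Compiled by the JIT with its usual optimizations.
    [MethodImpl(MethodImplOptions.NoInlining)]
    static long OptimizedLoop()
    {
        long summer = 0;
        for (long i = 0; i < N; i++)
            summer++;
        return summer; // returning the result keeps the loop observable
    }

    // The JIT is asked to skip optimization for this method only.
    [MethodImpl(MethodImplOptions.NoOptimization | MethodImplOptions.NoInlining)]
    static long UnoptimizedLoop()
    {
        long summer = 0;
        for (long i = 0; i < N; i++)
            summer++;
        return summer;
    }

    static void Main()
    {
        Stopwatch timer = Stopwatch.StartNew();
        long a = OptimizedLoop();
        timer.Stop();
        Console.WriteLine("JIT optimized:     " + timer.ElapsedMilliseconds + " ms (" + a + ")");

        timer.Restart();
        long b = UnoptimizedLoop();
        timer.Stop();
        Console.WriteLine("JIT not optimized: " + timer.ElapsedMilliseconds + " ms (" + b + ")");
    }
}

NoInlining is there just to make sure each loop stays in its own method, so the attribute really applies to the code being timed; returning summer also keeps the loop observable, which matters for the point about dead code below.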

I feel obliged to add, however, that both results you got (7 and 8 seconds, presumably for the optimized D build and for C# respectively) are really pretty lousy. A decent optimizer should recognize that the final output didn't depend on the loop at all, and on that basis eliminate the loop completely. Just for comparison, I did (about) the most straightforward C++ translation I could:

#include <iostream>
#include <time.h>

int main() {
    // long long so the counter can actually reach 10 billion
    // (long is only 32 bits with VC++).
    long long summer = 0;
    auto start = clock();
    for (long long i = 0; i < 10000000000; i++)
        summer++;
    std::cout << double(clock() - start) / CLOCKS_PER_SEC << '\n';
}

Compiled with VC++ using cl /O2b2 /GL, this consistently shows a time of 0.
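
The flip side is that if you actually want to time the loop rather than the optimizer, you have to make the result observable. Here is a minimal C# sketch (an illustration of the technique, not the original benchmark): printing summer along with the elapsed time keeps the computation from being discarded as dead code, though a sufficiently clever compiler could still replace such a trivial loop with its closed-form result.

using System;
using System.Diagnostics;

class LoopTiming
{
    static void Main()
    {
        Stopwatch timer = Stopwatch.StartNew();
        long summer = 0;
        for (long i = 0; i < 10000000000; i++)
            summer++;
        timer.Stop();

        // Printing summer makes the result observable, so the loop
        // cannot simply be removed as dead code.
        Console.WriteLine("summer = " + summer + ", elapsed = " + timer.ElapsedMilliseconds + " ms");
    }
}

The same applies to the D and C++ versions: use summer in the output and the optimizer has to keep the work, not just the timer calls.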

Other tips

I believe your question should be titled:

Why are for loops compiled by <insert your D compiler here> so much slower than for loops compiled by <insert your C# compiler/runtime here>?

Performance can vary dramatically across implementations, and is not a trait of the language itself. You are probably using DMD, the reference D compiler, which is not known for using a highly-optimizing backend. For best performance, try the GDC or LDC compilers.

You should also post the compilation options you used (optimizations may have been enabled with only one compiler).
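
For reference, and purely as an assumption about fairly typical command lines (the file names are placeholders; check each compiler's documentation for the exact flags), optimized builds would look something like this:

dmd  -O -release -inline program.d
ldc2 -O3 -release program.d
gdc  -O3 -frelease program.d
csc  /optimize+ Program.cs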

See this question for more information: How fast is D compared to C++?

Several answers have suggested that an optimizer would optimize the entire loop away.

For the most part they deliberately don't, on the assumption that the programmer wrote the loop that way on purpose as a timing loop.

This technique is often used in hardware drivers to wait for periods shorter than the time it takes to set up a timer and handle the timer interrupt.

It is also the reason for the "BogoMIPS" calculation at Linux boot time: to calibrate how many iterations of a tight loop per second this particular CPU/compiler combination can do.
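
To illustrate the idea with a toy example (this is only a sketch of the calibration technique, not driver code, and all the names are mine): time a fixed number of iterations once, derive an iterations-per-millisecond figure, and then reuse it to busy-wait for intervals too short to be worth setting a timer.

using System;
using System.Diagnostics;

class SpinCalibration
{
    // Volatile sink so the busy loops cannot be optimized away.
    static volatile int sink;

    static long iterationsPerMs;

    static void Calibrate()
    {
        const long sampleIterations = 50000000; // arbitrary sample size
        Stopwatch sw = Stopwatch.StartNew();
        for (long i = 0; i < sampleIterations; i++)
            sink = (int)i;
        sw.Stop();
        iterationsPerMs = sampleIterations / Math.Max(1, sw.ElapsedMilliseconds);
    }

    // Busy-wait for roughly the requested number of microseconds.
    static void SpinWaitMicroseconds(long microseconds)
    {
        long target = iterationsPerMs * microseconds / 1000;
        for (long i = 0; i < target; i++)
            sink = (int)i;
    }

    static void Main()
    {
        Calibrate();
        Console.WriteLine("~" + iterationsPerMs + " iterations per millisecond");

        Stopwatch sw = Stopwatch.StartNew();
        SpinWaitMicroseconds(500); // aim for roughly half a millisecond
        sw.Stop();
        Console.WriteLine("spun for about " + sw.Elapsed.TotalMilliseconds + " ms");
    }
}

The per-iteration cost changes with the CPU and with how well the compiler or JIT optimizes the loop, which is exactly why the calibration has to be redone at startup, as BogoMIPS does.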
