Question

I've got the following code inside a <script> tag on a webpage with nothing else on it. I'm afraid I do not presently have it online. As you can see, it adds up all primes under two million, in two different ways, and calculates how long it took on average. The variable howOften is used to do this a number of times so you can average it out. What puzzles me is, for howOften == 1, method 2 is faster, but for howOften == 10, method 1 is. The difference is significant and holds even if you hit F5 a couple of times.

My question is simply: how come?

(This post has been edited to incorporate alf's suggestion. But that made no difference! I'm very much puzzled now.)

(Edited again: with howOften at or over 1000, the times seem stable. Alf's answer seems correct.)

function methodOne(maxN) {
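    // Sieve of Eratosthenes: any i not yet marked in primes_inv is prime; add it and mark all of its multiples as composite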
    var sum, primes_inv, i, j;

    sum = 0;
    primes_inv = [];
    for ( i = 2; i < maxN; ++i ) {
        if ( primes_inv[i] == undefined ) {
            sum += i;
            for ( j = i; j < maxN; j += i ) {
                primes_inv[j] = true;
            }
        }
    }
    return sum;
}

function methodTwo(maxN) {
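    // Sieve of Sundaram: 2k + 1 is composite exactly when k = i + j + 2*i*j for some 1 <= i <= j,
    // so subtracting those odd composites from the sum of all odd numbers below maxN
    // (and adding 2 back at the end) leaves the sum of the primes below maxN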
    var i, j, p, sum, ps, n;

    n = ((maxN - 2) / 2);
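    // n * (n + 2) = 3 + 5 + ... + (2n + 1), the sum of every odd number below maxN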
    sum = n * (n + 2);
    ps = [];
    for(i = 1; i <= n; i++) {
        for(j = i; j <= n; j++) {
            p =  i + j + 2 * i * j;
            if(p <= n) {
                if(ps[p] == undefined) {
                    sum -= p * 2 + 1;
                    ps[p] = true;
                }
            }
            else {
                break;
            }
        }
    }
    return sum + 2;
}



// ---------- parameters
var howOften = 10;
var maxN = 10000;

console.log('iterations: ', howOften);
console.log('maxN: ', maxN);


// ---------- dry runs for warm-up
for( i = 0; i < 1000; i++ ) {
    sum = methodOne(maxN);
    sum = methodTwo(maxN);
}

// ---------- method one
var start = (new Date).getTime();

for( i = 0; i < howOften; i++ )
    sum = methodOne(maxN);

var stop = (new Date).getTime();
console.log('methodOne: ', (stop - start) / howOften);

// ---------- method two
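// note: this measurement reuses stop (captured above, before the console.log) as its start time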

for( i = 0; i < howOften; i++ )
    sum = methodTwo(maxN);

var stop2 = (new Date).getTime();
console.log('methodTwo: ', (stop2 - stop) / howOften);

Solution

Well, the JS runtime is an optimizing JIT compiler, which means that for a while your code is interpreted (t_int), after that it gets compiled (t_jit), and finally you run the compiled code (t_run).

Now what you measure is most probably (t_int + t_jit + t_run) / N. Given that the only part that depends almost linearly on N is t_run, this comparison does not make much sense, unfortunately.
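To see why the ratio moves with N, take some purely made-up numbers: if the one-off interpretation-plus-compilation cost were 5 ms and each compiled run took 1 ms, you would measure (5 + 1)/1 = 6 ms per run for N = 1, but (5 + 10)/10 = 1.5 ms per run for N = 10; the fixed cost only washes out for large N.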

So the answer is: I don't know. To get a proper answer:

  1. Extract the code you are trying to profile into functions
  2. Run warm-up cycles on these functions, and do not use timing from the warm-up cycles
  3. Run much more than 1..10 times, both for warm-up and measurement
  4. Try swapping the order in which you measure time for algorithms
  5. Dig into the JS interpreter's internals if you can and make sure you understand what happens: do you really measure what you think you do? Is the JIT run during the warm-up cycles and not while you measure? Etc., etc.

Update: note also that for a single cycle the run time is below the resolution of the system timer, which means the measurement error is probably larger than the actual values you are comparing.
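A minimal sketch of such a harness, assuming methodOne and methodTwo from the question are already in scope (the benchmark helper name, the iteration counts, and the use of Date.now() are arbitrary choices; performance.now() gives finer resolution where available):

function benchmark(label, fn, arg, warmupRuns, measuredRuns) {
    var k, start, elapsed;

    // warm-up: give the JIT a chance to compile fn; these timings are thrown away
    for (k = 0; k < warmupRuns; k++) {
        fn(arg);
    }

    // measured runs: time many iterations so the total is well above the timer resolution
    start = Date.now();
    for (k = 0; k < measuredRuns; k++) {
        fn(arg);
    }
    elapsed = Date.now() - start;

    console.log(label + ': ' + (elapsed / measuredRuns) + ' ms per call');
}

var maxN = 10000;

// run each method in both orders to check that ordering does not bias the result
benchmark('methodOne', methodOne, maxN, 1000, 1000);
benchmark('methodTwo', methodTwo, maxN, 1000, 1000);
benchmark('methodTwo again', methodTwo, maxN, 1000, 1000);
benchmark('methodOne again', methodOne, maxN, 1000, 1000);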

OTHER TIPS

methodTwo simply requires the processor to execute fewer calculations. In methodOne the initial for loop executes maxN times, while in methodTwo it executes (maxN - 2)/2 times, so the second method does less than half the number of calculations of the first. This is compounded by the fact that each method contains a nested for loop, so the big-O of methodOne is maxN^2, whereas the big-O of methodTwo is ((maxN - 2)/2)^2.
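One way to check this concretely is to instrument both sieves with a simple counter; a minimal sketch (the counted* names and the ops counter are just for illustration):

function countedMethodOne(maxN) {
    var sum = 0, primes_inv = [], ops = 0, i, j;
    for (i = 2; i < maxN; ++i) {
        ops++;                              // one outer-loop pass per candidate
        if (primes_inv[i] == undefined) {
            sum += i;
            for (j = i; j < maxN; j += i) {
                ops++;                      // one mark per multiple of the prime i
                primes_inv[j] = true;
            }
        }
    }
    return { sum: sum, ops: ops };
}

function countedMethodTwo(maxN) {
    var n = (maxN - 2) / 2, sum = n * (n + 2), ps = [], ops = 0, i, j, p;
    for (i = 1; i <= n; i++) {
        for (j = i; j <= n; j++) {
            ops++;                          // one candidate i + j + 2*i*j per inner pass
            p = i + j + 2 * i * j;
            if (p > n) {
                break;
            }
            if (ps[p] == undefined) {
                sum -= p * 2 + 1;
                ps[p] = true;
            }
        }
    }
    return { sum: sum + 2, ops: ops };
}

console.log(countedMethodOne(10000));
console.log(countedMethodTwo(10000));

Comparing the two ops values for the same maxN makes the "fewer calculations" argument concrete without relying on timings at all.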

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow