Question

Recently I was asked to refactor some code that leverages JavaScript's array.reduce() method because other developers felt the code was hard to read. While doing this, I decided to play around with some JavaScript array iteration performance benchmarks to help me validate various approaches. I wanted to know the fastest way to reduce an array, but I don't know if I can trust the initial results:
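
For reference, the kind of reduce() call being refactored looked roughly like this (a sketch only, since the original production code isn't shown here):

// Summing an array with Array.prototype.reduce()
var sum = arr.reduce(function (acc, n) {
  return acc + n;
}, 0);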

http://jsbench.github.io/#8803598b7401b38d8d09eb7f13f0709a

I added the test cases for "while loop array.pop() assignments" to the benchmarks linked above (mostly for the fun of it), but I think there must be something wrong with the tests. The variation in ops/sec seems too large to be accurate. I fear that something is wrong with my test case, as I don't understand why this method would be so much faster.

I have researched this quite a bit and have found a lot of conflicting information from over the years. I want to better understand what specifically is causing the high variance measured in the benchmark linked above. Which leads to this post: given the benchmark example (linked above and shown below), why would the While Loop test case measure over 5000x faster than its For Loop counterpart?

// Benchmark setup
var arr = [];
var sum = 0; // reset to 0 before each test
for (var i = 0; i < 1000; i++) {
  arr[i] = Math.random();
}

// Test Case #1
// While loop, implicit comparison, inline pop code
var i = arr.length;
while (i--) {
  sum += arr.pop();
}

// Test Case #2
// Reverse for loop, implicit comparison, inline code
for (var i = arr.length; i--;) {
  sum += arr[i];
}

*Edited

In response to the downvotes: I want this post to be useful, so I added images to provide context for the links, removed unnecessary details, refined the content to focus on the questions I am seeking answers to, and removed a previous example that was confusing.


Solution

I noticed in your current test framework that you are now experimenting with copying the array before doing the summing. I tried this myself, and the change destroys the pop() performance advantage; it's actually slower than the array-indexing version. The array-indexing solution seems unaffected by the change.
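
For instance, a copy-first variant along these lines (a sketch of what I believe you tried, assuming a shallow copy via slice()) loses the advantage, presumably because the original array still has to be materialized in order to copy it:

// Copy-first variant: pop from a fresh copy so the
// original arr survives the test
var copy = arr.slice();
var i = copy.length;
while (i--) {
  sum += copy.pop();
}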

I think this gives us a clue to what's going on here. I believe the setup function is being optimized together with the test function. I noticed that the setup time is part of the performance measurement: for example, putting console.log calls into the setup will dramatically decrease the measured operations per second.

When you build an array and then pop every element, you are left with an empty array. What's likely happening is that the optimizer is able to determine that the array is irrelevant and eliminates it, or at least eliminates filling it with values. The function then becomes simply a loop that adds numbers, and it may be optimizing that even further. So while interesting, this experiment probably has little bearing on your real-world problem.
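
If that's right, the engine may effectively be running something closer to this (purely a sketch of the hypothesis, not observed output from any engine):

// Hypothetical post-optimization shape: the array never
// materializes, leaving only a loop that adds numbers
var sum = 0;
for (var i = 0; i < 1000; i++) {
  sum += Math.random();
}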

Licensed under: CC-BY-SA with attribution