Question

Here's a snippet of code that, at first glance, looks like something scalac could easily optimize away:

val t0 = System.nanoTime()
for (i <- 0 to 1000000000) {}
val t1 = System.nanoTime()
var i = 0
while (i < 1000000000) i += 1
val t2 = System.nanoTime()

println((t1 - t0).toDouble / (t2 - t1).toDouble)

The above code prints 76.30068413477652, and the ratio seems to get worse as the number of iterations is increased.

Is there a particular reason scalac chooses not to optimize for (i <- L to/until H) into whatever bytecode form javac generates for for (int i = L; i < H; i += 1)? Might it be that Scala chooses to keep things simple and expects the developer to resort to more performant forms, such as a while loop, when raw looping speed is required? If so, why is that a good trade-off, given how frequently such simple for loops appear?
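For context, the measured gap comes from how the two loops compile. The for-comprehension is desugared by scalac into a foreach call on a Range, with the loop body wrapped in a closure, while the while loop compiles to a plain conditional jump. The sketch below (with a smaller, illustrative bound) shows the desugared form side by side with the while form:

```scala
object DesugarSketch {
  def main(args: Array[String]): Unit = {
    val n = 1000000 // smaller bound so the sketch runs quickly

    var a = 0
    // for (i <- 0 until n) { a += 1 } is desugared by scalac into
    // a foreach call on a Range, invoking a closure per iteration:
    (0 until n).foreach { i => a += 1 }

    var b = 0
    var i = 0
    // The while loop, by contrast, compiles to a conditional jump,
    // with no Range object and no per-iteration closure call:
    while (i < n) { b += 1; i += 1 }

    println(a == b) // both loops do the same work
  }
}
```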


Solution

The performance of for-comprehensions in Scala is a long-running debate.

TL;DR: the Scala team decided to concentrate on more general optimizations rather than ones that would favour particular classes and edge cases (in this case, Range).
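In the meantime, the usual workaround for a hot loop is the while form from the question, or a tail-recursive method: with @tailrec, scalac compiles the recursion into the same jump-based bytecode as a while loop, with no closure or Range overhead. A minimal sketch (the method name and signature are illustrative):

```scala
import scala.annotation.tailrec

object TailRecLoop {
  // @tailrec asks the compiler to verify the call is in tail position;
  // the method is then compiled into a jump-based loop, not real recursion.
  @tailrec
  def sumTo(i: Int, n: Int, acc: Long): Long =
    if (i >= n) acc else sumTo(i + 1, n, acc + i)

  def main(args: Array[String]): Unit = {
    // Sum of 0 until 1000000 = 999999 * 1000000 / 2
    println(sumTo(0, 1000000, 0L))
  }
}
```

This keeps the loop logic in a named, testable function while matching the while loop's performance.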

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow