Question

In the simple code below, part of the definition of a function that deletes an element from a binary search tree:

 deleteB x (Node n l r) | x == n = Node (leastB r) l (deleteB (leastB r) r)

does the compiler optimize the code so that it calls (leastB r) only once, as if it were:

 deleteB x (Node n l r) | x == n = Node k l (deleteB k r)
                          where k = leastB r

?

In other words, can the compiler see that, since the parameter r isn't changed within the body of deleteB, calling the same function (leastB) on it can't give different results, and hence it is useless to compute it twice?

More generally, how could I find out for myself whether the compiler performs this optimization, in case amazing Stack Overflow did not exist? Thanks.
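For reference, here is a minimal self-contained sketch of the full function, written with the explicit sharing from the second snippet (the Tree type, leastB, and the remaining cases are assumed here, since only the x == n guard is shown above):

data Tree a = Leaf | Node a (Tree a) (Tree a)

-- Smallest element of a non-empty tree: walk down the left spine.
leastB :: Tree a -> a
leastB (Node n Leaf _) = n
leastB (Node _ l    _) = leastB l
leastB Leaf            = error "leastB: empty tree"

-- Delete x; the where binding makes the sharing of (leastB r) explicit.
deleteB :: Ord a => a -> Tree a -> Tree a
deleteB _ Leaf = Leaf
deleteB x (Node n l r)
  | x < n     = Node n (deleteB x l) r
  | x > n     = Node n l (deleteB x r)
  | isLeaf r  = l                        -- nothing to promote from the right
  | otherwise = Node k l (deleteB k r)   -- k is computed once and shared
  where
    k = leastB r                         -- lazy, so never evaluated when r is Leaf
    isLeaf Leaf = True
    isLeaf _    = False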


Solution 2

GHC does not perform this optimization because it is not always an optimization space-wise.

For instance, consider

n = 1000000
x = (length $ map succ [1..n], length $ map pred [1..n])

In a lazy language such as Haskell, one would expect this to run in constant space. Indeed, the list-generating expression [1..n] should lazily produce one element at a time, which is then transformed by succ/pred via the maps and counted by length. (Even better, succ and pred are not computed at all, since length does not force the list elements.) After this, the produced element can be garbage collected, the list generator can produce the next element, and so on. In real implementations one would not expect every single element to be garbage collected immediately, but if the garbage collector is good, only a constant number of them should be in memory at any time.

By comparison, the "optimized" code

n = 1000000
l = [1..n]
x = (length $ map succ l, length $ map pred l)

does not allow the elements of l to be garbage collected until both components of x have been evaluated. So, while it produces the list only once, it uses O(n) words of memory to store the full list. This is likely to lead to worse performance than the unoptimized code.
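If you want to check the space claim empirically rather than reason about it, one rough approach (a sketch, with a made-up module layout) is to build each variant as a small program and compare the "maximum residency" line in the runtime statistics:

-- Main.hs: the shared-list variant; swap in the unshared version to compare.
-- Build and run (standard GHC invocation and RTS flag):
--   ghc -O2 Main.hs
--   ./Main +RTS -s
module Main where

n :: Int
n = 1000000

l :: [Int]
l = [1..n]

main :: IO ()
main = print (length (map succ l), length (map pred l))

Note that the optimization level matters here: with -O2, other transformations (such as full laziness) can change what ends up shared, so comparing both unoptimized and -O2 builds is instructive.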

Other tips

If you want to know what GHC "really did", you want to look at the "Core" output.

GHC takes your Haskell source code, which is extremely high-level, and transforms it into a sequence of lower and lower-level languages:

Haskell ⇒ Core ⇒ STG ⇒ C−− ⇒ assembly language ⇒ machine code

Almost all of the high-level optimisations happen in Core. The one you're asking about is basically "common subexpression elimination" (CSE). If you think about it, this is a time / space tradeoff; by saving the previous result, you're using less CPU time, but also using more RAM. If the result you're trying to store is tiny (e.g., an integer), this can be worth it. If the result is huge (e.g., the entire contents of that 17 GB text file you just loaded), this is probably a Very Bad Idea.
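In practice, if you want the sharing, the dependable thing is to write it yourself with let or where, exactly as in the second snippet from the question. A tiny illustrative sketch (expensive is a made-up placeholder, not anything from the question):

-- Sharing written out by hand: s is computed once, regardless of what the
-- compiler decides to do about common subexpressions.
f :: [Int] -> Int
f xs = s + s
  where s = expensive xs

expensive :: [Int] -> Int
expensive = sum . map (* 2)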

As I understand it (which is to say, not very well!), GHC tends not to do CSE. But if you want to know for sure, in your specific case, you want to look at the Core that your program has actually been compiled into. I believe the switch you want is -ddump-prep.
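For example, one plausible invocation for inspecting the Core (flag names as in the GHC user's guide; -ddump-simpl shows the Core after the optimizer, -ddump-prep the slightly later "prep" stage) would be:

ghc -O2 -ddump-simpl -dsuppress-all -fforce-recomp Main.hs

In the dumped Core you can then see directly whether leastB r appears once (bound by a let and used twice) or is computed twice.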

http://www.haskell.org/ghc/docs/7.0.2/html/users_guide/options-debugging.html

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow