Question

I'm looking for a Haskell compiler that uses strict evaluation by default instead of lazy evaluation. I would just use OCaml, but Haskell's syntax is so much better than OCaml's (and Haskell is pure, and has cool features such as type classes).

I'd really rather not constantly put !s and $!s all over my program. A compiler with a switch or a preprocessor to put in the strictness annotations would be really nice. It would also be helpful if there was a way to use lazy evaluation in certain places too, just in case I want something like an infinite list (I probably never will).

Please do not try to convince me that lazy evaluation is better, I really need the performance. IIRC, Simon Peyton Jones even said that lazy evaluation wasn't really necessary, it was there mostly to prevent them from making the language impure.


Solution

If you have a Haskell compiler that uses strict evaluation, it doesn't compile Haskell. Laziness, or more precisely non-strictness, is part of the Haskell spec!

However, there are alternatives.

  • DDC is an attempt to create an explicitly lazy variant of Haskell which supports things like destructive update whilst retaining all the rest of Haskell's goodness. There is one problem: the compiler is currently only in the α-stage, although it seems to be at least usable.

  • Create a preprocessor, as others have done.

  • Learn to use Haskell “the right way”. If you can simplify your test case down to something which is publicly displayable, you could post it on the Haskell-Café mailing list, where people are very helpful with these sorts of questions concerning the effects of non-strictness.

OTHER TIPS

I'd really rather not constantly put !s and $!s all over my program

You're doing it wrong if that's how you're programming Haskell :) You simply won't need to do this. Use GHC, use -O2, use strict data types when appropriate, and use lazy ones when appropriate. Don't assume laziness is going to be a problem; it is a solution to a lot of problems.
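For instance (a minimal sketch, not taken from the answer above), marking the accumulator fields of a data type strict is often all the annotation that's needed, and the rest of the code stays ordinary lazy Haskell:

    import Data.List (foldl')

    -- Strict fields (the ! in the declaration is standard Haskell)
    -- keep the accumulator fully evaluated as foldl' threads it along.
    data Stats = Stats
      { count :: !Int
      , total :: !Int
      }

    step :: Stats -> Int -> Stats
    step (Stats c t) x = Stats (c + 1) (t + x)

    summarise :: [Int] -> Stats
    summarise = foldl' step (Stats 0 0)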

There have been two attempts at strictly evaluating Haskell in the past, but both focused on keeping Haskell's non-strict semantics while using a mostly-strict evaluation strategy, rather than actually changing the semantics, and neither ever really saw the light of day.

Edit: I'd forgotten about Martijn's suggestion of strict-plugin. It looks ideal for your purposes, as it actually does what you want, and the author is still active in the Haskell community.

See also ghc-strict-plugin, an example for GHC's plugin framework, described in issue 12 of The Monad.Reader.

I feel your pain. My biggest PITA in my day-to-day programming is dealing with those !@#$%^&( space leaks.

However, if it helps, with time you do learn (the hard way) about how to deal with this, and it does get better. But I'm still waiting for Andy Gill to come out with his magical space leak profiler to fix all of my problems. (I'm taking his off-hand comment to me at the last ICFP that he'd dreamed up this cool idea as a promise to implement it.)
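For anyone who hasn't hit one yet, the typical leak looks something like this (a generic sketch, not one of my programs): a lazy accumulator quietly builds a chain of thunks, and a couple of bang patterns make it run in constant space.

    {-# LANGUAGE BangPatterns #-}

    -- Leaky: both accumulators pile up unevaluated (+) thunks until
    -- the very end, when the division finally demands them.
    meanLeaky :: [Double] -> Double
    meanLeaky = go 0 (0 :: Int)
      where
        go s n []     = s / fromIntegral n
        go s n (x:xs) = go (s + x) (n + 1) xs

    -- Fixed: the bangs force both accumulators at every step.
    mean :: [Double] -> Double
    mean = go 0 (0 :: Int)
      where
        go !s !n []     = s / fromIntegral n
        go !s !n (x:xs) = go (s + x) (n + 1) xs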

I won't try to convince you that lazy evaluation is the best thing in the world, but there are certain good points about it. I've got some stream-processing programs that scoot lazy lists through a variety of combinators, and they run happily on gigabytes of data while using only 3.5 MB or so of memory (of which more than 2 MB is the GHC runtime). And someone smarter than I am pointed out to me last year that, as a typical Haskell programmer, you would be quite surprised how much you depend on lazy evaluation.
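A toy version of that style (a sketch assuming ordinary GHC, not the actual programs I'm describing): the pipeline below walks a hundred million numbers in roughly constant space, because each element is produced, consumed and dropped before the next one exists.

    import Data.List (foldl')

    -- The list is never materialised as a whole: filter and map
    -- produce elements on demand, and foldl' consumes each one
    -- immediately, so the whole pipeline runs in constant space.
    main :: IO ()
    main = print
         . foldl' (+) 0
         . map (* 3)
         . filter odd
         $ [1 .. 100000000 :: Int]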

But what we really need is a good book on dealing with lazy evaluation in the real world (which is not so different from the academic world, really, except that there they merely fail to get a paper published, while we get clients coming after us with knives), one that properly covers most of the issues involved and, more importantly, gives us an intuitive sense of what's going to explode our heap and what isn't.

I don't think that this is a new thing; I'm sure other languages and architectures have been through it too. How did the first programmers deal with hardware stacks and all that, after all? Not so well, I bet.

I think that Jan-Willem Maessen's pH compiler is/was strict. The next closest thing is Robert Ennals's speculative evaluation fork of GHC 5. The spec_eval fork is not strict, but instead evaluates optimistically. I don't know whether either of those is still current or usable.

Using NFData and rnf everywhere isn't a solution, since it means repeatedly traversing large structures that have already been evaluated.
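For context, a sketch assuming the standard deepseq package: rnf and deepseq walk the entire structure and force every leaf each time they are called, which is exactly the repeated traversal being objected to.

    import Control.DeepSeq (deepseq)

    -- deepseq traverses the whole list and forces every element before
    -- returning its second argument; sprinkling it around a program means
    -- re-walking structures that are already fully evaluated.
    strictSum :: [Int] -> Int
    strictSum xs = xs `deepseq` sum xs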

The introductory chapter of Ben Lippmeier's PhD thesis (about DDC) is about the best critique of Haskell that I've seen: it discusses issues of laziness, destructive update, monad transformers, etc. DDC has laziness, but you have to request it explicitly, and it's considered an effect, which is tracked and managed by DDC's type-and-effect system.

I recently saw some work in this area:

https://ghc.haskell.org/trac/ghc/wiki/StrictPragma

You can hear a tiny bit about it in SPJ's GHC status update here:

http://youtu.be/Ex79K4lvJno?t=9m33s (Link starts at the relevant piece at 9:33)

I'm looking for a Haskell compiler that uses strict evaluation by default instead of lazy evaluation.

Such a compiler would not be a Haskell compiler. If you really want this, you could consider putting {-# LANGUAGE Strict #-} pragmas in your files. This works with GHC 8.0.2, 8.2.2, and 8.4.1, which were the three most recent releases of the compiler at the time of writing.
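A minimal sketch of what that looks like (the pragma is module-level, so it applies to every binding in the file):

    {-# LANGUAGE Strict #-}

    module Main where

    -- With the Strict pragma, bindings and pattern matches in this module
    -- are strict by default, so this accumulator builds no thunks even
    -- though it carries no explicit bang patterns.
    sumTo :: Int -> Int
    sumTo n = go 0 1
      where
        go acc i
          | i > n     = acc
          | otherwise = go (acc + i) (i + 1)

    main :: IO ()
    main = print (sumTo 1000000)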

It would also be helpful if there was a way to use lazy evaluation in certain places too, just in case I want something like an infinite list

There is no such method. Instead, use GHC as it was intended: as a lazy language. Learning to think about your code, to profile it, and to use functional data structures correctly will be far more useful than mindlessly applying strictness pragmas everywhere. GHC already has a strictness analyzer.

(I probably never will).

That's exactly what the authors of llvm-hs thought when they chose a strict state monad rather than a lazy one, and it caused an unexpected bug down the road. Laziness and recursion go hand in hand.
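A small illustration of that last point (the standard textbook example, nothing to do with llvm-hs itself): a list defined in terms of itself only works because its tail is not computed until it is demanded.

    -- The classic self-referential definition; each element is computed
    -- only when asked for, so the recursion never runs away.
    fibs :: [Integer]
    fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

    main :: IO ()
    main = print (take 10 fibs)   -- [0,1,1,2,3,5,8,13,21,34]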

Please do not try to convince me that lazy evaluation is better, I really need the performance.

I doubt this is actually what you want: strict-by-default evaluation does not reliably improve the performance of Haskell code, while it does break existing code and make existing resources useless. If this is how you intend to write programs, please just use OCaml or Scala and leave the Haskell community alone.

IIRC, Simon Peyton Jones even said that lazy evaluation wasn't really necessary, it was there mostly to prevent them from making the language impure.

That is not true. You can read more about the actual history of Haskell here.

There is also seqaid, which aims at the middle of the lazy-strict spectrum.

Seqaid is a GHC plugin providing non-invasive auto-instrumentation of Haskell projects, for dynamic strictness (and parallelism) control. This will soon include optimisation for automated space leak relief using minimal strictification.

You clearly have made up your mind on the value of strict evaluation, but I think you are missing the point of using Haskell. Haskell's lazy evaluation allows the compiler/interpreter to employ much more flexible optimization strategies, and forcing your own strictness overrides the optimizer. In the end, excessive strict evaluation will never be as efficient as the automated optimization. Try a folding sum over a sequence of numbers in GHCi, with and then without lazy evaluation. You can see the difference quite clearly: in this case lazy evaluation is always faster.
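If you want to run that comparison yourself, a minimal GHCi session looks something like this (foldl is the lazy left fold, Data.List.foldl' the strict one; the actual numbers will depend on your GHC version and flags):

    ghci> :set +s                                -- print time and allocation after each result
    ghci> import Data.List (foldl')
    ghci> foldl  (+) 0 [1 .. 10000000 :: Int]    -- lazy left fold
    ghci> foldl' (+) 0 [1 .. 10000000 :: Int]    -- strict left fold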

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow