Question

Lenses seem to offer significant advantages over standard Haskell records without any obvious disadvantages: is there any reason I shouldn't use lenses wherever possible? Are there performance considerations? Additionally, does Template Haskell add any significant overhead?

Solution

Lenses are essentially an alternative to writing closures over data constructors and record accessors by hand, so they carry approximately the same caveats as using those functions and constructors directly.

Some of the cons that follow from this:

  • Every time you modify a value through a lens, you may cause a lot of objects to be (re)created. For example, suppose you have this data structure:

    A { B { C { bla = "foo" } } }
    

    ...and a lens of type Lens A String. Every time you "modify" a value through that lens, a new A, B and C are created. This is nothing unusual in Haskell (creating lots of objects), but because the object creation is hidden behind the lens, it is easy to overlook as a potential performance sink.

  • A lens can also hide inefficiencies in the access path it encodes. For example, a lens that modifies the 26th element of a list has to walk the list on every read and write, which can cause a noticeable slowdown. (Both of these costs are illustrated in the sketch right after this list.)

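To make the hidden cost concrete, here is a minimal sketch using a hand-rolled getter/setter pair in place of data-lens's Lens (the real type wraps a Store comonad, but the allocation pattern is the same). The names A, B, C, blaOfA and ix25 are invented for illustration:

    -- A minimal stand-in for a lens: a getter paired with a setter.
    data Lens s a = Lens { getL :: s -> a, setL :: a -> s -> s }

    data C = C { bla :: String }
    data B = B { c :: C }
    data A = A { b :: B }

    -- Focusing on the inner String from the outside: setting it has to
    -- rebuild a new C, a new B and a new A.
    blaOfA :: Lens A String
    blaOfA = Lens
      { getL = bla . c . b
      , setL = \s a -> a { b = (b a) { c = (c (b a)) { bla = s } } }
      }

    -- Every "modification" allocates three fresh constructors, even
    -- though the call site only mentions the lens:
    example :: A -> A
    example = setL blaOfA "bar"

    -- The same hiding of cost applies to positional lenses, e.g. one for
    -- the 26th element of a list, which walks 25 cells on every get/set.
    ix25 :: Lens [x] x
    ix25 = Lens
      { getL = (!! 25)
      , setL = \x xs -> take 25 xs ++ x : drop 26 xs
      }
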
And the pros:

  • Lenses, in combination with ordinary records, work beautifully with state monads (see data-lens-fd for an example), and this makes it possible to avoid recreating a lot of objects most of the time, thanks to extensive data sharing. See, for example, the focus function, and the similar pattern of withSomething functions in the Snap web framework; a sketch of this pattern follows the list.
  • Lenses obviously don't modify any memory in place, so they are very useful when you need to reason about state in the presence of concurrency. They would therefore also be useful when dealing with graphs of various kinds.

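Here is a rough sketch of that state-monad pattern, again using a hand-rolled lens rather than data-lens-fd's real API; focusL, App, counterL and bump are invented names, but data-lens-fd's focus has essentially this shape:

    import Control.Monad.State (State, get, put, modify, runState)

    data Lens s a = Lens { getL :: s -> a, setL :: a -> s -> s }

    -- Run a stateful computation against the part of the state selected
    -- by the lens, then write the (possibly unchanged) part back.
    focusL :: Lens s a -> State a r -> State s r
    focusL l inner = do
      s <- get
      let (r, a') = runState inner (getL l s)
      put (setL l a' s)
      return r

    data App = App { counter :: Int, name :: String } deriving Show

    counterL :: Lens App Int
    counterL = Lens counter (\n app -> app { counter = n })

    -- Only the counter is threaded through the inner computation; the
    -- rest of App is shared untouched between old and new states.
    bump :: State App ()
    bump = focusL counterL (modify (+ 1))
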
Lenses are not always isomorphic to closures over data constructors, however. Here are some of the differences (taking data-lens as the implementation here):

  • Most lens implementations use some data type to store the "accessor" and "mutator" as a pair. For data-lens, it's the Store comonad (sketched after this list). This means that every time you create a lens, there is a small extra overhead from building that structure.
  • Because lenses depend on values via some opaque mapping, it can become harder to reason about garbage collection, and you can get (logical) memory leaks because you forgot that a very generic lens keeps a large chunk of memory reachable. Take, for example, a lens that accesses an element of some large vector: composing it with another lens hides the first lens, making it hard to see that the composed lens still retains the entire vector.

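As far as I recall, the representation looks roughly like this (a simplified sketch, not the exact library definitions):

    -- A Store pairs the currently focused value with a function that
    -- rebuilds the whole structure from a new value.
    data Store b a = Store (b -> a) b

    newtype Lens a b = Lens (a -> Store b a)

    getL :: Lens a b -> a -> b
    getL (Lens f) a = case f a of Store _ b -> b

    setL :: Lens a b -> b -> a -> a
    setL (Lens f) b a = case f a of Store g _ -> g b

    -- Every use allocates a Store (plus a closure for the rebuild
    -- function), which is the small constant overhead mentioned above.
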
Template Haskell code runs at compile time, and does not affect the runtime performance of lenses whatsoever.

OTHER TIPS

I'm assuming the data-lens package. Lenses performed very well for me for data-like things (records, tuples, maps, etc.). In fact, they sometimes performed even better than the conventional approach, probably because of better sharing. In general, lens code performs about the same as the code you would have written by hand.

However, for function-like things lenses can carry a penalty. For example, I remember at least once using a lens like this one:

result :: (Eq a) => a -> Lens (a -> b) b

While queries were very fast, I occasionally overrode particular result values of the function to adapt it to specific scenarios. Each override is equivalent to wrapping the function's body in another if, so repeated overrides build up a chain of comparisons. The performance implications are not caused by lenses themselves, of course, but it is something worth being aware of.
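
A simplified sketch of such a lens, again with a plain getter/setter pair standing in for data-lens's Lens; the implementation is only illustrative:

    data Lens a b = Lens { getL :: a -> b, setL :: b -> a -> a }

    -- A lens focused on a function's result at one particular argument.
    result :: (Eq a) => a -> Lens (a -> b) b
    result x = Lens
      { getL = \f -> f x              -- query: just apply the function
      , setL = \y f -> \x' ->         -- override: wrap the function in an if
          if x' == x then y else f x'
      }

    -- Each override adds another comparison, so a function that has been
    -- "set" many times degrades into a chain of ifs:
    overridden :: Int -> String
    overridden =
      setL (result 1) "one" (setL (result 2) "two" (const "other"))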
