Question

I have been reading a lot about Haskell lately, and the benefits that it derives from being a purely functional language. (I'm not interested in discussing monads for Lisp) It makes sense to me to (at least logically) isolate functions with side-effects as much as possible. I have used setf and other destructive functions plenty, and I recognize the need for them in Lisp and (most of) its derivatives.

Here we go:

  1. Would something like (declare pure) potentially help an optimizing compiler? Or is this a moot point because it already knows?
  2. Would the declaration help in proving a function or program, or at least a subset that was declared as pure? Or is this again something that is unnecessary because it's already obvious to the programmer and compiler and prover?
  3. If for nothing else, would it be useful to a programmer for the compiler to enforce purity for functions with this declaration, and would it add to the readability/maintainability of Lisp programs?
  4. Does any of this make any sense? Or am I too tired to even think right now?

I'd appreciate any insights here. Info on compiler implementation or provability is welcome.

EDIT

To clarify, I didn't intend to restrict this question to Common Lisp. It clearly (I think) doesn't apply to certain derivative languages, but I'm also curious if some features of other Lisps may tend to support (or not) this kind of facility.


Solution

You have two answers, but neither touches on the real problem.

First, yes, it would obviously be good to know that a function is pure. There is a ton of compiler-level machinery that would like to know that, as well as user-level tools. Given that Lisp languages are so flexible, you could twist things a bit: instead of a "pure" declaration that asks the compiler to try harder or something, you just make the declaration restrict the code in the definition. This way you can guarantee that the function is pure.
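
To make that concrete, here's a minimal sketch in Common Lisp. The whitelist, check-pure-form, and define-pure are all invented for illustration; a real version would also have to handle macros, special forms, lexical shadowing, and compile/load-time ordering.

(defvar *pure-operators* '(quote if + - * cons car cdr list)
  "Operators we take on faith to be pure.")

(defun check-pure-form (form)
  "Signal an error if FORM calls an operator not known to be pure."
  (when (consp form)
    (let ((op (first form)))
      (unless (member op *pure-operators*)
        (error "Impure or unknown operator in pure definition: ~S" op))
      (unless (eq op 'quote)            ; quoted data is not code
        (mapc #'check-pure-form (rest form))))))

(defmacro define-pure (name lambda-list &body body)
  "Like DEFUN, but reject bodies that call non-whitelisted operators,
then record NAME itself as pure."
  (mapc #'check-pure-form body)
  `(progn
     (pushnew ',name *pure-operators*)
     (defun ,name ,lambda-list ,@body)))

(define-pure bar () 'hi)                    ; accepted; BAR is now whitelisted
(define-pure foo (x) (cons (+ x 1) (bar)))  ; accepted, since BAR is known pure
;; (define-pure bad (x) (print x))          ; rejected at macro-expansion time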

You can even do that with additional supporting facilities -- I mentioned two of them in a comment I made on johanbev's answer: add the notion of immutable bindings and immutable data structures. I know that in Common Lisp these are very problematic, especially immutable bindings (since CL loads code by "side-effecting" it into place). But such features would help simplify things, and they're not inconceivable (see, for example, Racket, which has immutable pairs and other immutable data structures, and has immutable bindings).
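
For contrast, here is roughly what portable Common Lisp gives you today (a hedged aside; +step+ and add-step are made-up names): defconstant covers value bindings, more or less, but function bindings stay mutable.

(defconstant +step+ 1)   ; a value binding the implementation may assume never changes

(defun add-step (x)      ; ...but this function binding stays mutable:
  (+ x +step+))          ; nothing stops a later DEFUN or a
                         ; (setf (symbol-function 'add-step) ...)
                         ; from swapping in something impure.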

But the real question is what you can do in such restricted functions. Even a very simple-looking problem is infested with issues. (I'm using Scheme-like syntax for this.)

(define-pure (foo x)
  (cons (+ x 1) (bar)))

It seems easy enough to tell that this function is indeed pure; it doesn't do anything destructive. It also seems that having define-pure restrict the body and allow only pure code would work fine in this case and would allow this definition.

Now start with the problems:

  1. It's calling cons, so it assumes that cons is also known to be pure. In addition, as I mentioned above, it must rely on cons being what it is, so assume that the cons binding is immutable. Easy, since it's a known builtin. Do the same with bar, of course.

  2. But cons does have a side effect (even if you're talking about Racket's immutable pairs): it allocates a new pair. This seems like a minor and ignorable point, but, for example, if you allow such things to appear in pure functions, then you won't be able to auto-memoize them. The problem is that someone might rely on every foo call returning a new pair -- one that is not eq to any other existing pair (see the sketch right after this list). It seems that to make this fine you need to further restrict pure functions to deal not only with immutable values, but also with values whose constructor doesn't always create a new value (e.g., it could hash-cons instead of allocating).

  3. But that code also calls bar -- so now you need to make the same assumptions about bar: it must be known as a pure function, with an immutable binding. Note specifically that bar receives no arguments -- so in that case the compiler could not only require that bar is a pure function, it could also use that information and pre-compute its value. After all, a pure function with no inputs can be reduced to a plain value. (Note, BTW, that Haskell doesn't have zero-argument functions.)

  4. And that brings in another big issue. What if bar is a function of one argument? In that case the call is an error, and some exception will get thrown... and that's no longer pure. Exceptions are side effects. You now need to know the arity of bar in addition to everything else, and you need to avoid other exceptions too. Now, how about that input x -- what happens if it isn't a number? That will throw an exception too, so you need to rule it out as well. This means that you now need a type system.

  5. Change that (+ x 1) to (/ 1 x) and you can see that not only do you need a type system, you need one sophisticated enough to distinguish zero from non-zero numbers.

  6. Alternatively, you could re-think the whole thing and have new pure arithmetic operations that never throw exceptions -- but with all the other restrictions you're now quite a long way from home, with a language that is radically different.

  7. Finally, there's one more side effect that remains a PITA: what if the definition of bar is (define-pure (bar) (bar))? It certainly is pure according to all of the above restrictions... but divergence is a form of side effect, so even this is no longer kosher. (For example, if you did make your compiler optimize nullary functions to values, then on this example the compiler itself would get stuck in an infinite loop.) (And yes, Haskell doesn't deal with that either; that doesn't make it any less of an issue.)
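
Here is the pitfall from point 2 made concrete, as a sketch in plain Common Lisp (memoize is an invented helper, not a standard function):

(defun memoize (fn)
  "Return a memoized version of the one-argument function FN."
  (let ((cache (make-hash-table :test #'equal)))
    (lambda (x)
      (multiple-value-bind (value hit) (gethash x cache)
        (if hit
            value
            (setf (gethash x cache) (funcall fn x)))))))

(defun foo (x)
  (cons (+ x 1) 'tail))

;; Fresh allocation: each plain call returns a distinct pair.
(eq (foo 1) (foo 1))                                 ; => NIL

;; Memoized "pure" FOO: the same pair comes back every time, which
;; breaks any caller that relied on EQ-distinct results.
(defparameter *memo-foo* (memoize #'foo))
(eq (funcall *memo-foo* 1) (funcall *memo-foo* 1))   ; => T

Hash-consing would sidestep this particular breakage by making "the same pair comes back" the documented behavior for every caller.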

Other tips

Given a Lisp function, knowing if it is pure or not is undecidable in general. Of course, necessary conditions and sufficient conditions can be tested at compile time. (If there are no impure operations at all, then the function must be pure; if an impure operation gets executed unconditionally, then the function must be impure; for more complicated cases, the compiler could try to prove that the function is pure or impure, but it will not succeed in all cases.)

  1. If the user can manually annotate a function as pure, then the compiler could either (a.) try harder to prove that the function is pure, i.e., spend more time before giving up, or (b.) assume that it is and add optimizations which would not be correct for impure functions (like, say, memoizing results). So, yes, annotating functions as pure could help the compiler if the annotations are assumed to be correct.

  2. Apart from heuristics like the "trying harder" idea above, the annotation would not help to prove stuff, because it's not giving any information to the prover. (In other words, the prover could just assume that the annotation is always there before trying.) However, it could make sense to attach to pure functions a proof of their purity.

  3. The compiler could either (a.) check at compile time that functions annotated as pure really are pure, but this is undecidable in general, or (b.) add code to try to catch side effects in pure functions at runtime and report those as an error. (a.) would probably be helped by simple heuristics (like "an impure operation gets executed unconditionally"); (b.) would be useful for debugging (a toy version of it is sketched after this list).

  4. No, it all seems to make sense. Hopefully this answer does too.
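
As a toy version of (b.) from point 3, here is a runtime checker for one narrow class of side effects, namely mutation of a watched set of special variables. *watched-variables* and call-checked are invented names; real coverage would also need to track structure mutation, I/O, and much more.

(defvar *watched-variables* '(*print-base* *read-base*)
  "Special variables whose mutation we treat as an observable effect.")

(defun call-checked (name fn &rest args)
  "Apply FN to ARGS, then signal an error if a watched variable changed."
  (let ((before (mapcar #'symbol-value *watched-variables*)))
    (prog1 (apply fn args)
      (loop for var in *watched-variables*
            for old in before
            unless (eql old (symbol-value var))
              do (error "~S mutated ~S but was declared pure." name var)))))

(call-checked 'identity-ish (lambda (x) x) 5)              ; => 5
;; (call-checked 'evil (lambda () (setf *print-base* 16))) ; => error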

The usual goodies apply when we can assume purity and referential transparency. We can automatically memoize hotspots. We can automatically parallelize computation. We can do away with a lot of race conditions. We can also use structure sharing with data that we know cannot be modified; for instance, the (quasi) primitive cons does not need to copy the list it's consing onto. Those cells are not affected in any way by having another cons cell pointing to them. This example is kind of obvious, but compilers are often good at figuring out more complex cases of structure sharing.
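
Here is that sharing made visible (a tiny example, nothing more):

(let* ((tail (list 'b 'c))
       (xs (cons 'a tail))
       (ys (cons 'z tail)))
  (list (eq (cdr xs) tail)     ; => T, XS shares TAIL's cells
        (eq (cdr ys) tail)))   ; => T, so does YS; nothing was copied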

However, actually determining whether a lambda (a function) is pure or referentially transparent is very tricky in Common Lisp. Remember that a call (foo bar) starts by looking at (symbol-function 'foo). So in this case

(defun foo (bar)
  (cons 'zot bar))

foo() is pure.

The next lambda is also pure.

(defun quux ()
 (mapcar #'foo '(zong ding flop)))

However, later on we can redefine foo:

(let ((accu -1))
 (defun foo (bar)
   (incf accu)))

The next call to quux() is no longer pure! The old pure foo() has been redefined to an impure lambda. Yikes. This example may be somewhat contrived, but it's not that uncommon to redefine a function from within a let block like this, closing over lexical state. In that case it's not possible to know at compile time what will happen.

Common Lisp has very dynamic semantics, so actually determining control flow and data flow ahead of time (for instance, when compiling) is very hard, and in most useful cases entirely undecidable. This is quite typical of languages with dynamic type systems. There are a lot of common idioms in Lisp that you cannot use if you must have static typing, and it's mainly these that foil any attempt to do much meaningful static analysis. We can do it for primitives like cons and friends, but for lambdas involving anything other than primitives we are in much deeper water, especially in the cases where we need to look at complex interplay between functions. Remember that a lambda is only pure if all the lambdas it calls are also pure.

Off the top of my head, it might be possible, with some deep macrology, to do away with the redefinition problem. In a sense, each lambda would get an extra argument: a monad that represents the entire state of the Lisp image (we can obviously restrict it to what the function will actually look at). But it's probably more useful to be able to declare purity ourselves, in the sense that we promise the compiler that this lambda is indeed pure. The consequences if it isn't are then undefined, and all sorts of mayhem could ensue...
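
For what it's worth, here is one portable way to sketch such a promise. define-compiler-macro and constantp are standard Common Lisp, but defpure is an invented name and the eval-based constant folding is deliberately naive.

(defmacro defpure (name lambda-list &body body)
  "DEFUN plus a compiler macro that folds calls with constant
arguments. Lying about purity here buys exactly the undefined
mayhem described above."
  `(progn
     (eval-when (:compile-toplevel :load-toplevel :execute)
       (defun ,name ,lambda-list ,@body))
     (define-compiler-macro ,name (&whole form &rest args)
       (if (every #'constantp args)
           ;; Compute the call now and splice in the literal result.
           (list 'quote (apply #',name (mapcar #'eval args)))
           ;; Otherwise decline and leave the call as-is.
           form))))

(defpure add2 (x) (+ x 2))
;; A compiler that honors compiler macros now turns (add2 3) into the
;; literal 5 -- valid only as long as our purity promise holds.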
