Question

In his answer's comment section, Apocalisp states the following:

Well, you did ask for a function. A side-effenting [sic] lexical closure is emphatically not a function.

What exactly does he mean by "side-effecting lexical closure", and how is that different from a function?

My guess is that he's trying to differentiate functions in the functional-programming sense - where no side effects (such as changing the state of variables or outputting values) are allowed - from mere procedures, which do have side effects.

If that is the case, then does Scala make this distinction by itself, or is it merely left to the programmer? If the latter, is every callable (for lack of a better term) that doesn't have side effects a function, and every callable that does a side-effecting lexical closure?

Solution

A closure is just a definition with "free variables", together with an external environment that provides bindings for those free variables. In x + 1, x is a free variable; there's no definition of what x is, so you can only say what value this expression has in an environment that includes an x. In y => x + y, x is still free but y is not; the expression defines a function, and the function's parameter provides the binding for y.
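
To make that concrete, here is a minimal Scala sketch (the names are mine, not Apocalisp's):

val x = 10                        // the enclosing lexical scope provides the binding for x

// In the body x + y, y is bound by the parameter, but x is free: its value
// comes from the surrounding scope. The value addX is therefore a closure over x.
val addX: Int => Int = y => x + y

addX(5)                           // 15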

A "lexical closure" in the context of Scala is basically just a "closure" (because all its closures are lexical) but technically "closure" is a more general concept. "Closure" on its own doesn't say where the environment comes from; "lexical closure" specifies that it is the lexical scope of the definition of the closure (i.e. where it occurs in the source code) that determines the environment.

So a "side effecting lexical closure" is just one of those, that has side effects.


My take on that comment is that Apocalisp was contrasting the mathematical idea of a function with the programming idea of a parameterised block of code to execute. Whether or not that's what he had in mind, I'll expand on my own thinking:

In mathematics a function is basically just a mapping from values in some input set (the function's domain) to values in some output set (its codomain). Under this view a function isn't a special restricted form of procedure where we disallow side effects; the concept of "having side effects" just doesn't apply to it. Asking whether a mathematical function has side effects is like asking whether the colour yellow has side effects; even the answer "no" isn't really correct. All you can do with a mathematical function is ask what value in its codomain corresponds to a given value in its domain; if I have a function described by { 1 -> 11, 2 -> 22, 3 -> 33 } and I ask what codomain value corresponds to 2, it doesn't make sense to answer "22, and object foo's count attribute is now 7".
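
As a toy illustration, in Scala a Map literally is a function value, and "applying" it can only ever mean looking up the codomain value:

// The finite mapping from the paragraph above, written out literally.
// A Map[Int, Int] can be used wherever an Int => Int is expected.
val f: Map[Int, Int] = Map(1 -> 11, 2 -> 22, 3 -> 33)

f(2)   // 22, and nothing else happened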

In idealised functional programming we view code as merely a way to define the mappings that correspond to the functions we want to define. Most interesting functions are infinite (or at least impractically vast), so we don't do it by writing out a literal map from inputs to outputs but rather by writing down more-or-less abstract rules that describe how outputs correspond to inputs. Of course in practice we do spend a lot of time thinking operationally about how our code will be executed too, but generally a functional programmer would rather think about definitions first.
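
Continuing the toy example, the same mapping extended to all integers is written as a rule rather than a table; the definition still only says what corresponds to what:

// A rule-based definition of an (effectively infinite) mapping.
val timesEleven: Int => Int = n => n * 11

timesEleven(2)   // 22, just as the table said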

On the other hand, what is called a function in traditional imperative programming has very little to do with the mathematical notion of a function; rather than a mapping from values in a domain to values in a codomain, a programmer's function is a sequence of steps to be executed one after the other (possibly parameterised by input values and returning an output value). Each of these steps may have effects, so you can't ignore their existence and say that it's just another way to define the domain -> codomain mapping, nor can you examine such functions as things on their own, independent of their context.
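
In Scala terms, something like this (foo is just an illustrative stand-in for the object mentioned earlier):

object foo { var count = 0 }

// A "programmer's function": it does map 2 to 22, but one of its steps has an
// observable effect, so it can't be read purely as a domain -> codomain mapping.
def elevenTimes(n: Int): Int = {
  foo.count += 1
  n * 11
}

elevenTimes(2)   // 22 ... and foo.count is now 1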

In a programming language that wants to support both functional programming and imperative programming, you use the same language elements to define both mathematical functions and programmer's functions. Or if you use the term function exclusively to refer to mathematical functions, you use the same language elements to define both functions and "something else that isn't a function". I took Apocalisp's phrase "lexical closure" as describing what Scala's function-definition syntax defines, as distinct from the notion of a function, and when you further add that it's a "side-effecting lexical closure" then it's definitely not a function that you're talking about.

OTHER TIPS

I think the terms you're looking for are pure and impure function.

A functional language that explicitly distinguishes between functions with and without side effects is called a pure functional language. A popular example is Haskell.

Scala, on the other hand, is not pure. So the distinction is indeed left to the programmer.
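
For instance, nothing in Scala's type system separates the two; keeping track of purity is up to you (a small sketch of my own):

// Both values have the type Int => Int; the type says nothing about purity.
val pure:   Int => Int = n => n + 1
val impure: Int => Int = n => { println(s"called with $n"); n + 1 }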

A lexical closure is a function combined with its enclosing lexical environment. As for "side-effecting lexical closure" - just read "side-effecting function".

He's just being strict with his purity discipline - a pure function must return the same value each time it is called, and the function f() in that question clearly does not. It returns different values because it mutates its enclosed variables.

But a mutating lexical closure need not be considered side-effecting, as long as the changes to its lexically enclosed variables are not directly observable by any outside entity except the function itself - such variables are encapsulated, hidden.
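
A rough Scala sketch of encapsulated mutation (local rather than lexically enclosed state, but the observability point is the same; my example, not the author's):

// The local variables a, b and i are created afresh on every call and are not
// observable from outside, so externally this behaves as a pure function of n.
def fib(n: Int): BigInt = {
  var a = BigInt(0)
  var b = BigInt(1)
  var i = 0
  while (i < n) {
    val next = a + b
    a = b
    b = next
    i += 1
  }
  a
}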

Haskell does the same, without the use of any monads, here:

fibgen (a,b) = a : fibgen (b,a+b)   -- lazily generate the Fibonacci sequence from a seed pair
fibs = fibgen (0,1)                 -- the usual Fibonacci sequence: 0, 1, 1, 2, 3, 5, ...
f5 = fibs !! 5                      -- the element at index 5, i.e. 5
another_fibs = fibgen (0,1)         -- an independent but equal sequence

A compiler could feasibly compile this as a "side-effecting closure" by reusing the stack frame for a fibgen call, thus in effect turning it into a generator.

Saying "no mutation should be performed absolutely never ever" is like saying "no goto should be ever performed". We just must be careful in our separation of functions and generators. Even if the latter look like the former, they are not. That's all.

The real problem with the code in that question is that it directly creates such a closure as a global object, so there's no way to restart the sequence. The creation of such generating closures must be controlled -- just make the initial values into parameters and make the creation of new closures explicit:

f1 = make_new_fib_closure(0,1);
f1(), f1(), f1(), ...
....
f2 = make_new_fib_closure(0,1);
....

Now make_new_fib_closure() is a function, creating equal values on each call -- Fibonacci sequences, seen as a whole. Each f_i encapsulates one instance of the Fibonacci sequence, with its own state - how far it has been generated. Calling f_i() is equivalent to calling a next() method on a generator. Nothing wrong with that.
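
For concreteness, here is one way that factory might look in Scala (a sketch following the pseudocode's naming, not the author's actual code):

def make_new_fib_closure(a0: BigInt, b0: BigInt): () => BigInt = {
  // The closure below encapsulates a and b; no caller can reach them directly.
  var a = a0
  var b = b0
  () => {
    val current = a
    val next = a + b
    a = b
    b = next
    current
  }
}

val f1 = make_new_fib_closure(0, 1)
f1(); f1(); f1()                    // 0, 1, 1 -- f1 is a generator with hidden state
val f2 = make_new_fib_closure(0, 1) // a fresh, independent instance of the sequence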

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow