Functional Programming in Scala explains a side effect’s impact on breaking referential transparency:

side effect, which implies some violation of referential transparency.

I’ve read part of SICP, which discusses using the “substitution model” to evaluate a program.

As I roughly understand the substitution model with referential transparency (RT), you can decompose a function into its simplest parts. If the expression is RT, then you can decompose the expression and always get the same result.

However, as the quote above states, using side effects breaks the substitution model.

Example:

val x = foo(50) + bar(10)

If foo and bar do not have side effects, then executing either function will always return the same result to x. But if they do have side effects, they will alter some state in a way that breaks the substitution model.

I feel comfortable with this explanation, but I don’t fully grok it.

Please correct me and fill in any holes with respect to side effects breaking RT, discussing the effects on the substitution model as well.


Solution

Let's begin with a definition for referential transparency:

An expression is said to be referentially transparent if it can be replaced with its value without changing the behavior of a program (in other words, yielding a program that has the same effects and output on the same input).

What that means is that (for example) you can replace 2 + 5 with 7 in any part of the program, and the program should still work. This process is called substitution. Substitution is valid if, and only if, 2 + 5 can be replaced with 7 without affecting any other part of the program.

Let's say that I have a class called Baz, with the functions Foo and Bar in it. For simplicity, we'll just say that Foo and Bar both return the value that is passed in. So Foo(2) + Bar(5) == 7, as you would expect. Referential Transparency guarantees that you can replace the expression Foo(2) + Bar(5) with the expression 7 anywhere in your program, and the program will still function identically.

But what if Foo still returned the value passed in, while Bar returned the value passed in plus the last value provided to Foo? That's easy enough to do if you store Foo's argument in a field of the Baz class. If the initial value of that field is 0, the expression Foo(2) + Bar(5) will return the expected value of 7 the first time you evaluate it (assuming Bar reads the stored value before Foo overwrites it), but it will return 9 the second time.

This violates referential transparency in two ways. First, Bar cannot be counted on to return the same value each time it is called. Second, a side effect has occurred: calling Foo influences the return value of Bar. Since you can no longer guarantee that Foo(2) + Bar(5) will equal 7, you can no longer substitute.
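The scenario can be sketched in Python (a hypothetical translation of the Foo/Bar example; note that the operand reading the stored value must be evaluated before the one updating it to reproduce the 7-then-9 behavior):

```python
class Baz:
    """Hypothetical sketch of the stateful Foo/Bar example."""

    def __init__(self):
        self._last_foo = 0  # hidden state shared by foo and bar

    def foo(self, n):
        self._last_foo = n  # side effect: remember the argument
        return n

    def bar(self, n):
        # bar's result depends on the last value given to foo
        return n + self._last_foo


baz = Baz()
for _ in range(2):
    b = baz.bar(5)  # read the state before this round's foo call
    f = baz.foo(2)
    print(f + b)    # prints 7 the first time, 9 the second
```

The lexically identical expression yields different values over time, so substituting it with any single value changes the program's behavior.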

This is what referential transparency means in practice: a referentially transparent function accepts some value and returns a corresponding value without affecting other code elsewhere in the program, and it always returns the same output for the same input.

Other tips

Imagine that you are trying to build a wall and you have been given an assortment of boxes in different sizes and shapes. You need to fill a particular L-shaped hole in the wall; should you look for an L-shaped box or can you substitute two straight boxes of the appropriate size?

In the functional world, the answer is that either solution will work. When building your functional world, you never have to open the boxes to see what is inside.

In the imperative world, it is dangerous to build your wall without inspecting the contents of every box and comparing them to the contents of every other box:

  • Some contain strong magnets and will push other magnetic boxes out of the wall if improperly aligned.
  • Some are very hot or cold and will react badly if placed in adjacent spaces.

I think I'll stop before I waste your time with more unlikely metaphors, but I hope the point is made; functional bricks contain no hidden surprises and are entirely predictable. Because you can always use smaller blocks of the right size and shape to substitute for a larger one and there is no difference between two boxes of the same size and shape, you have referential transparency. With imperative bricks, it isn't enough to have something the right size and shape - you have to know how the brick was constructed. Not referentially transparent.

In a pure functional language, all you need to see is a function's signature to know what it does. Of course, you might want to look inside to see how well it performs, but you don't have to look.

In an imperative language, you never know what surprises might hide inside.

As I roughly understand the substitution model (with referential transparency (RT)), you can decompose a function into its simplest parts. If the expression is RT, then you can decompose the expression and always get the same result.

Yes, the intuition is quite right. Here are a few pointers to get more precise:

As you said, any RT expression should have a single "result". That is, a given factorial(5) expression in the program should always yield the same "result". So, if a certain factorial(5) in the program yields 120, it should always yield 120 regardless of the "step order" in which it is expanded/computed -- regardless of time.

Example: the factorial function.

def factorial(n):
    if n <= 1:
        return 1
    return n * factorial(n - 1)
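The expansion the substitution model performs can be written out step by step (a small self-contained sketch restating the definition above):

```python
def factorial(n):
    return 1 if n <= 1 else n * factorial(n - 1)

# Substitution-model expansion -- each step replaces an expression with an
# equal one, which is only valid because factorial is RT:
#   factorial(5)
# = 5 * factorial(4)
# = 5 * (4 * factorial(3))
# = 5 * (4 * (3 * factorial(2)))
# = 5 * (4 * (3 * (2 * factorial(1))))
# = 5 * (4 * (3 * (2 * 1)))
# = 120
print(factorial(5))  # 120, in whatever order the steps are taken
```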

There are a few considerations with this explanation.

First of all, keep in mind that different evaluation models (see applicative vs. normal order) may yield different "results" for the same RT expression.

def first(y, z):
    return y

def second(x):
    return second(x)

first(2, second(3)) # result depends on eval. model

In the code above, first and second are referentially transparent, and yet, the expression at the end yields different "results" if evaluated under normal order and applicative order (under the latter, the expression does not halt).

Second, which is why "result" appears in quotes: since an expression is not required to halt, it may not produce a value at all, so "result" is somewhat blurry. One can instead say that an RT expression always yields the same computation under a given evaluation model.

Third, it may be necessary to treat two foo(50) expressions appearing in different locations of the program as different expressions -- each yielding its own result, which may differ from the other. For instance, if the language allows dynamic scope, the two expressions, though lexically identical, are different. In Perl:

sub foo {
    my $x = shift;
    return $x + $y; # y is dynamic scope var
}

sub a {
    local $y = 10;
    return &foo(50); # expanded to 60
}

sub b {
    local $y = 20;
    return &foo(50); # expanded to 70
}

Dynamic scope misleads because it makes it easy to think x is the only input to foo when, in reality, the inputs are x and y. One way to see the difference is to transform the program into an equivalent one without dynamic scope -- that is, to pass the parameters explicitly: instead of defining foo(x), define foo(x, y) and pass y explicitly at the call sites.
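That transformation can be sketched in Python (names follow the Perl example above):

```python
def foo(x, y):
    # y is now an explicit input instead of a dynamically scoped variable
    return x + y

def a():
    return foo(50, 10)  # the caller supplies what `local $y = 10` did

def b():
    return foo(50, 20)  # ...and what `local $y = 20` did

print(a())  # 60
print(b())  # 70
```

With both inputs visible in the signature, every call foo(50, 10) is the same expression with the same result, and substitution is valid again.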

The point is, we are always under a function mindset: given a certain input for an expression, we are given a corresponding "result". If we give the same input, we should always expect the same "result".

Now, what about the following code?

def foo():
    global y
    y = y + 1
    return y

y = 10
foo() # yields 11
foo() # yields 12

The foo procedure breaks RT because there are redefinitions. That is, we defined y at one point and, later on, redefined that same y. In the Perl example above, the ys are different bindings even though they share the name "y"; here, the ys are actually the same binding. That's why we say (re)assignment is a meta operation: you are in fact changing the definition of your program.

Roughly, people usually depict the difference as follows: in a side-effect-free setting, you have a mapping from input -> output. In an "imperative" setting, you have input -> output in the context of a state that can change through time.

Now, instead of just substituting expressions for their corresponding values, one also has to apply transformations to the state at each operation that requires it (and of course, expressions may consult that same state to perform computations).
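One way to make that state explicit -- a sketch, not the only possible encoding -- is state-passing style: each function takes the current state as an input and returns its result together with the new state, which restores RT for the global-y foo above:

```python
def foo(y):
    """RT counterpart of the global-y foo: state in, (result, new state) out."""
    return y + 1, y + 1

# The program's "state" is now threaded by hand through each call:
r1, y1 = foo(10)  # (11, 11) -- foo(10) is always (11, 11)
r2, y2 = foo(y1)  # (12, 12) -- the "change" lives in the new state value
print(r1, r2)     # 11 12
```

The two calls still produce 11 and then 12, but each expression now has a single, substitutable value because the state it consumes is an explicit input.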

So, if in a side-effect-free program all we need to know to compute an expression is its individual input, in an imperative program we need to know the inputs and the entire state at each computational step. Reasoning is the first thing to suffer a big blow (now, to debug a problematic procedure, you need the input and the core dump). Certain tricks are rendered impractical, like memoization. Concurrency and parallelism also become much more challenging.
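Memoization makes the point concrete: caching a function's results is sound only when the function is RT. A sketch using Python's standard functools.lru_cache:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def factorial(n):
    # Safe to cache: the output depends only on the input n
    return 1 if n <= 1 else n * factorial(n - 1)

print(factorial(5))  # 120, computed
print(factorial(5))  # 120, served from the cache -- same answer by construction
```

Had the global-y foo above been cached the same way, the cache would freeze its first, state-dependent answer and silently change the program's behavior.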

Licensed under: CC-BY-SA with attribution