What did Joel Spolsky mean in the paradox of a variable's value not changing and changing at the same time?

StackOverflow https://stackoverflow.com/questions/18169520

Problem

I read The Perils of JavaSchools about 8 months ago, and since then I have been using it as a checklist for the things I should learn soon. I understand most of what he is talking about.

There's however this part that I am not quite sure about:

in a purely functional program, the value of a variable never changes, and yet, it changes all the time! A paradox!

What I'm getting from this (forgive me if I'm wrong) is that he's talking about recursion, but recursion seems too simple a concept. Here's my line of thought:

(define (tail-rec n)
  (if (= n 1)
      (display "Done!")
      (begin
        (display n)
        (newline)
        (tail-rec (- n 1)))))

The value of n in the tail-recursive function tail-rec does not change, yet when you look at what is being output, you see that it actually changes. Also, since the function itself does not change the state of any variable, it would seem to be purely functional.
So...
Am I right about this?
Is this what Joel was talking about? If not, please correct me.


Solution

To answer your question, Joel's article was indeed referring to recursion in that context.

However, your code isn't actually pure-functional, because display and newline have side effects. So I'm going to go off on a tangent and give you some tangible examples of pure-functional recursive code.
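(Before the tangent: one way to make a countdown like yours pure-functional is to return the successive values of n as a list instead of displaying them. This is just an illustrative sketch; countdown is my own name, not something from the question.)

```scheme
;; A side-effect-free take on the countdown: rather than printing each
;; n, accumulate the values in a list and return it.
(define (countdown n)
  (if (= n 1)
      '(done)                          ; base case, analogous to "Done!"
      (cons n (countdown (- n 1)))))   ; prepend n, recurse on n-1

(countdown 4) ; => (4 3 2 done)
```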

Consider a greatest common divisor function:

(define (gcd x y)
  (if (zero? y)
      x
      (gcd y (modulo x y))))

Here, each time the remainder of dividing x by y is non-zero, gcd (tail-)recursively calls itself, and in this new call, the x and y values are different. (In particular, the y value gets smaller each iteration, eventually leading to a base case where y divides x exactly, even if y has to shrink to 1 for that to happen.)
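To see the "changes without changing" point concretely, hand-trace a call (the input numbers are just an example of mine; the definition is repeated so the snippet stands alone):

```scheme
(define (gcd x y)
  (if (zero? y)
      x
      (gcd y (modulo x y))))

;; Trace of (gcd 12 18): no binding is ever mutated, yet each recursive
;; call binds x and y to fresh values.
;;
;;   (gcd 12 18) → (gcd 18 (modulo 12 18)) → (gcd 18 12)
;;   (gcd 18 12) → (gcd 12 (modulo 18 12)) → (gcd 12 6)
;;   (gcd 12 6)  → (gcd 6  (modulo 12 6))  → (gcd 6 0)
;;   (gcd 6 0)   → y is zero, so the result is x = 6

(gcd 12 18) ; => 6
```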

Here's another case of pure-functional recursion (not tail recursion this time, though it's possible to implement with tail recursion too), used to implement an algorithm for merging two sorted lists of numbers:

(define (merge lhs rhs)
  (cond ((null? lhs) rhs)
        ((null? rhs) lhs)
        ((< (car rhs) (car lhs))
         (cons (car rhs) (merge lhs (cdr rhs))))
        (else
         (cons (car lhs) (merge (cdr lhs) rhs)))))

Again, whenever both lhs and rhs are non-empty, we recurse into another call to merge, with one of the lists one element shorter in each call, and we eventually reach a base case where one of the lists is empty.
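Here's a quick usage example (the input lists are my own), with the recursion unfolded a step at a time; the definition is repeated so the snippet stands alone:

```scheme
(define (merge lhs rhs)
  (cond ((null? lhs) rhs)
        ((null? rhs) lhs)
        ((< (car rhs) (car lhs))
         (cons (car rhs) (merge lhs (cdr rhs))))
        (else
         (cons (car lhs) (merge (cdr lhs) rhs)))))

;; Each call receives strictly shorter inputs; nothing is mutated:
;;   (merge '(1 4 7) '(2 3 9)) → (cons 1 (merge '(4 7) '(2 3 9)))
;;                             → (cons 1 (cons 2 (merge '(4 7) '(3 9))))
;;                             → ... → (1 2 3 4 7 9)

(merge '(1 4 7) '(2 3 9)) ; => (1 2 3 4 7 9)
```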

This merge function can be used to implement mergesort:

(define (mergesort lst)
  (let ((len (length lst)))
    (if (< len 2)
        lst
        (let-values (((lhs rhs) (split-at lst (quotient len 2))))
          (merge (mergesort lhs) (mergesort rhs))))))

This is the classic example of a divide-and-conquer algorithm. Each time the list has 2 or more elements, split the list into left and right halves, recurse into each half, then merge the sorted halves back together.
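One portability note: split-at isn't in core Scheme; it comes from SRFI 1 (Racket exports it from racket/list). If your implementation lacks it, a minimal pure-functional replacement is only a few lines (my-split-at is my own name for this sketch):

```scheme
;; Minimal pure-functional split-at: returns two values, the first k
;; elements of lst and the remainder. Assumes lst has at least k elements.
(define (my-split-at lst k)
  (if (zero? k)
      (values '() lst)
      (let-values (((head tail) (my-split-at (cdr lst) (- k 1))))
        (values (cons (car lst) head) tail))))

(let-values (((lhs rhs) (my-split-at '(5 2 8 1 9 3) 3)))
  (list lhs rhs)) ; => ((5 2 8) (1 9 3))
```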

License: CC-BY-SA with attribution
Not affiliated with StackOverflow