Question

I found this answer on StackOverflow, which explains the difference between the Logic Programming paradigm and the Functional Programming paradigm very clearly:

The thing that makes logical append different [from the functional one] is you can use it to compute the list that results from appending one list onto another, but you can also use it to compute the list you'd need to append onto the end of another to get a third list (or whether no such list exists), or to compute the list to which you need to append another to get a third list, or to give you two possible lists that can be appended together to get a given third (and to explore all possible ways of doing this).

What I am wondering about is how the unification used to map unbound variables to values in Logic Programming relates to the list comprehension syntax in Functional Programming.

I.e. in Haskell you can write something like [(i,j) | i <- [1..10], j <- [5..20], j < i] and Haskell returns a list of all possible values of i and j that satisfy j < i. Taking only the head of this list (if it exists) seems to me to exhibit exactly the same behaviour as what Prolog (and similar Logic Programming languages) would do when given a predicate requiring one number to be smaller than the other.
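For example, in GHCi, laziness means that asking for just the first solution forces only as much of the search as is needed:

take 1 [(i,j) | i <- [1..10], j <- [5..20], j < i]
-- ==> [(6,5)], the first pair found by the left-to-right search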

Is this true or not?


Solution

The difference is that:

  1. Functional programming is based on reduction—rewriting complex expressions into irreducible values by using directional rewrite rules, with a strict sense of "input" vs. "output";
  2. Logic programming is based on constraint satisfaction—finding solutions to sets of statements by searching for values that, when plugged in for the statements' variables, make those statements true.

You write: "in Haskell you can write something like [(i,j) | i <- [1..10], j <- [5..20], j < i] and Haskell returns a list of all possible values of i and j that adhere to j < i."

But in Haskell the way this works is in terms of evaluating function applications. Your expression...

[(i,j) | i <- [1..10], j <- [5..20], j < i]

...is equivalent to this in Haskell:

concatMap (\i -> concatMap (\j -> if j < i then [(i,j)] else []) [5..20]) [1..10]

Where concatMap is a standard function that can be defined like this:

concatMap :: (a -> [b]) -> [a] -> [b]
concatMap _ [] = []
concatMap f (a:as) = f a ++ concatMap f as

-- concatMap uses this function as well:
(++) :: [a] -> [a] -> [a]
[] ++ ys = ys
(x:xs) ++ ys = x : (xs ++ ys)
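
For instance, these definitions give:

concatMap (\x -> [x, x * 10]) [1, 2, 3]
-- ==> [1, 10, 2, 20, 3, 30]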

And in Haskell evaluation, the equations that define the functions are only ever used in a "left to right" direction. To evaluate an expression, we search for occurrences that match the left-hand sides of the equations and substitute the corresponding right-hand sides, never the other way around. So:

concatMap (\i -> concatMap (\j -> if j < i then [(i,j)] else []) [5..20]) [1..10]

-- Reduce the outermost concatMap with its definition's second equation:
==>    concatMap (\j -> if j < 1 then [(1,j)] else []) [5..20] 
    ++ concatMap (\i -> concatMap (\j -> if j < i then [(i,j)] else []) [5..20]) [2..10]

-- Reduce the first `concatMap` with its definition's second equation:
==>    (if 5 < 1 then [(1,5)] else [])
    ++ concatMap (\j -> if j < 1 then [(1,j)] else []) [6..20] 
    ++ concatMap (\i -> concatMap (\j -> if j < i then [(i,j)] else []) [5..20]) [2..10]

-- Reduce `5 < 1` to `False`:
==>    (if False then [(1,5)] else [])
    ++ concatMap (\j -> if j < 1 then [(1,j)] else []) [6..20] 
    ++ concatMap (\i -> concatMap (\j -> if j < i then [(i,j)] else []) [5..20]) [2..10]

-- Reduce `if False then x else y` to `y`:
==>    []
    ++ concatMap (\j -> if j < 1 then [(1,j)] else []) [6..20] 
    ++ concatMap (\i -> concatMap (\j -> if j < i then [(i,j)] else []) [5..20]) [2..10]

-- Reduce the first `++` with its definition's first equation:
==>    concatMap (\j -> if j < 1 then [(1,j)] else []) [6..20] 
    ++ concatMap (\i -> concatMap (\j -> if j < i then [(i,j)] else []) [5..20]) [2..10]

-- ...and so on, until the full result list is produced.

This process is different from, and less powerful than, unification.
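
To see the gap concretely: in Haskell, each mode of using append needs its own hand-written function. Here is a minimal sketch (splits is a made-up name, not a library function) that enumerates every way of dividing a list in two, mimicking the Prolog query append(Xs, Ys, Zs) with only Zs bound:

-- All pairs (xs, ys) with xs ++ ys == zs.
splits :: [a] -> [([a], [a])]
splits zs = [ (take n zs, drop n zs) | n <- [0 .. length zs] ]

-- splits [1,2,3]
-- ==> [([],[1,2,3]), ([1],[2,3]), ([1,2],[3]), ([1,2,3],[])]

In Prolog the single definition of append/3 answers this query and all the other modes; in Haskell every mode is a separate function that we must write and name ourselves.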

OTHER TIPS

There are two things that are going on here. First, unification is an implementation mechanism. What is it an implementation mechanism for? Existential quantification. Unsurprisingly, the semantics of logic programming languages are grounded in logic. When you make a query in Prolog like ?- append(Xs,Ys,[1,2,3,4]), this is a search for a proof of exists Xs, Ys. append(Xs,Ys,[1,2,3,4]). A (constructive) proof of this proposition consists of actual values for Xs and Ys that satisfy the append predicate. One way to find this proof (and all others) would be to search for it exhaustively. A much more efficient way is to use unification: the unification variables effectively "collect" a set of constraints, and those constraints describe the set of solutions.
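
To make that mechanism concrete, here is a minimal sketch of first-order unification in Haskell (the Term and Subst representations and all names here are mine, not from any particular library, and the occurs check is omitted for brevity):

import Control.Monad (foldM)
import qualified Data.Map as Map

-- A first-order term: a variable, or a function symbol applied to
-- arguments (constants are zero-argument applications).
data Term = Var String | App String [Term]
  deriving (Eq, Show)

-- A substitution maps variable names to terms.
type Subst = Map.Map String Term

-- Follow a variable through the substitution to its current binding.
walk :: Subst -> Term -> Term
walk s t@(Var v) = maybe t (walk s) (Map.lookup v s)
walk _ t         = t

-- Try to extend the substitution so that the two terms become equal.
unify :: Subst -> Term -> Term -> Maybe Subst
unify s a b = case (walk s a, walk s b) of
  (Var v, Var w) | v == w -> Just s
  (Var v, t)              -> Just (Map.insert v t s)
  (t, Var v)              -> Just (Map.insert v t s)
  (App f as, App g bs)
    | f == g && length as == length bs ->
        foldM (\s' (x, y) -> unify s' x y) s (zip as bs)
  _                       -> Nothing

-- unify Map.empty (App "f" [Var "X", App "b" []])
--                 (App "f" [App "a" [], Var "Y"])
-- ==> Just (fromList [("X", App "a" []), ("Y", App "b" [])])

Note how a single unify call solves for variables on either side of the equation; there is no fixed direction of "input" and "output".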

The list comprehension code corresponds to doing an exhaustive search. Unfortunately, exhaustive search is not just less efficient, it also has some additional problems. First, it may fail to terminate or fully explore the search space if not implemented carefully. Second, it provides no means of representing an infinite solution set short of enumerating all possible values. In Prolog, an unbound logic variable can stand in for an infinite set of results. (To be clear, though, most infinite sets of results cannot be represented just by having some unbound logic variables, and so Prolog too will just start enumerating all possible values in those cases. Constraint logic programming can extend the range of solution sets that can be compactly represented.)
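
For example, a comprehension over two infinite generators never advances the outer variable, so a fair enumeration has to be written by hand. A sketch (diagonals is a made-up name, and it assumes both input lists are infinite):

-- [(i, j) | i <- [1 ..], j <- [1 ..]] only ever yields pairs of the
-- form (1, j): the inner generator is never exhausted.

-- Enumerating by diagonals visits every pair eventually:
diagonals :: [a] -> [b] -> [(a, b)]
diagonals xs ys =
  [ p
  | n <- [0 ..]
  , p <- zip (take (n + 1) xs) (reverse (take (n + 1) ys))
  ]

-- take 5 (diagonals [1 ..] [1 ..])
-- ==> [(1,1),(1,2),(2,1),(1,3),(2,2)]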

Ignoring those issues though, we can think of list comprehensions (which correspond to using the List monad) as an embedding of a simple logic language into Haskell. There are much smarter monads to embed logic programming into Haskell that do use unification as the implementation mechanism. One older but particularly nice incarnation is described in Typed Logic Variables in Haskell.
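
To spell out that correspondence, the comprehension from the question desugars (up to minor details) into the list monad:

import Control.Monad (guard)

pairs :: [(Int, Int)]
pairs = do
  i <- [1 .. 10]   -- "generate" a candidate i
  j <- [5 .. 20]   -- "generate" a candidate j
  guard (j < i)    -- prune branches where the constraint fails
  return (i, j)

Swapping the list monad for one that threads a substitution through the computation, as in the paper above, is what upgrades this exhaustive search to one driven by unification.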

That is to a large extent correct, but:

  1. Due to constraint propagation, a logic programming language can prune the search much earlier, which can make a huge performance difference.
  2. Many concepts are easier to express. (Maybe a matter of opinion?) Take the example you've given in the quote: how would you find all possible pairs of lists that, when concatenated, give a third list you've supplied?
  3. It's much easier to switch from one kind of lookup to another. How much would you have to change the code from point 2 to instead check "which list A should I begin with if I want to concatenate it with a given B to get a given C"? Probably a lot. In logic programming you basically just change one of the symbols into a variable; see the sketch below.
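
As a sketch of point 3 (prefixFor is a made-up name, not a library function): in Haskell, answering "which list A satisfies A ++ B == C" needs yet another dedicated function, while in Prolog the very same append(A, B, C) handles it by simply leaving A unbound:

import Data.List (isSuffixOf)

-- The A (if any) with A ++ b == c.
prefixFor :: Eq a => [a] -> [a] -> Maybe [a]
prefixFor b c
  | b `isSuffixOf` c = Just (take (length c - length b) c)
  | otherwise        = Nothing

-- prefixFor [3,4] [1,2,3,4] ==> Just [1,2]
-- prefixFor [9]   [1,2,3,4] ==> Nothing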
Licensed under: CC-BY-SA with attribution