Question

I've searched around the Internet, and I can't find any explanations of CHI which don't rapidly degenerate into a lecture on logic theory which is drastically over my head. (These people talk as if "intuitionistic propositional calculus" is a phrase that actually means something to normal humans!)

Roughly put, CHI says that types are theorems, and programs are proofs of those theorems. But what the hell does that even mean??

So far, I've figured this out:

  • Consider id :: x -> x. Its type says "given that X is true, we can conclude that X is true". Seems like a reasonable theorem to me.

  • Now consider foo :: x -> y. As any Haskell programmer will tell you, this is impossible. You cannot write this function. (Well, without cheating anyway.) Read as a theorem, it says "given that any X is true, we can conclude that any Y is true". This is obviously nonsense. And, sure enough, you cannot write this function.

  • More generally, the function's arguments can be considered "things which are assumed to be true", and the result type can be considered "the thing that is true assuming all the other things are". If there's a function argument, say x -> y, we can take that as an assumption that X being true implies that Y must be true.

  • For example, (.) :: (y -> z) -> (x -> y) -> x -> z can be taken as "assuming that Y implies Z, that X implies Y, and that X is true, we can conclude that Z is true". Which seems logically sensible to me.

Now, what the hell does Int -> Int mean?? o_O

The only sensible answer I can come up with is this: If you have a function X -> Y -> Z, then the type signature says "assuming that it's possible to construct a value of type X, and another of type Y, then it is possible to construct a value of type Z". And the function body describes exactly how you would do this.

That seems to make sense, but it's not very interesting. So clearly there must be more to it than this...

Solution

The Curry-Howard isomorphism simply states that types correspond to propositions, and values correspond to proofs.

Int -> Int doesn't really mean much interesting as a logical proposition. When interpreting something as a logical proposition, you're only interested in whether the type is inhabited (has any values) or not. So, Int -> Int just means "given an Int, I can give you an Int", and it is, of course, true. There are many different proofs of it (corresponding to various different functions of that type), but when taking it as a logical proposition, you don't really care.
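To make the "many different proofs" point concrete, here are a few distinct inhabitants of Int -> Int (the names proof1, proof2, proof3 are just illustrative); as a proposition, each proves the same boring statement:

```haskell
-- Three different "proofs" of the proposition Int -> Int.
proof1 :: Int -> Int
proof1 = id            -- the identity function

proof2 :: Int -> Int
proof2 = (+ 1)         -- the successor function

proof3 :: Int -> Int
proof3 = negate        -- negation

main :: IO ()
main = print (proof1 3, proof2 3, proof3 3)  -- (3,4,-3)
```

Logically they are interchangeable; computationally they are entirely different programs.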

That doesn't mean interesting propositions can't involve such functions; just that that particular type is quite boring, as a proposition.

For an instance of a function type that isn't completely polymorphic and that has logical meaning, consider p -> Void (for some p), where Void is the uninhabited type: the type with no values (except ⊥, in Haskell, but I'll get to that later). The only way to get a value of type Void is if you can prove a contradiction (which is, of course, impossible), and since Void means you've proved a contradiction, you can get any value from it (i.e. there exists a function absurd :: Void -> a). Accordingly, p -> Void corresponds to ¬p: it means "p implies falsehood".
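A minimal Haskell sketch of this encoding, using the Void and absurd that base provides in Data.Void (the Not alias and the helper names are mine):

```haskell
import Data.Void (Void, absurd)

-- p -> Void encodes the proposition ¬p.
type Not p = p -> Void

-- Ex falso quodlibet: from p and ¬p we can conclude anything.
contradiction :: p -> Not p -> a
contradiction p notP = absurd (notP p)

-- Double-negation introduction: p implies ¬¬p.
-- (The converse is NOT provable intuitionistically.)
dni :: p -> Not (Not p)
dni p notP = notP p

main :: IO ()
main = putStrLn "ok"  -- nothing to run; the proofs are the types above
```

Note that contradiction can never actually be called at runtime: you would need to supply both a p and a Not p, which is exactly what being a contradiction means.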

Intuitionistic logic is just a certain kind of logic that common functional languages correspond to. Importantly, it's constructive: basically, a proof of a -> b gives you an algorithm to compute b from a, which isn't true in regular classical logic (because of the law of excluded middle, which will tell you that something is either true or false, but not why).

Even though functions like Int -> Int don't mean much as propositions, we can make statements about them with other propositions. For instance, we can declare the type of equality of two types (using a GADT):

data Equal a b where
    Refl :: Equal a a

If we have a value of type Equal a b, then a is the same type of b: Equal a b corresponds to the proposition a = b. The problem is that we can only talk about equality of types this way. But if we had dependent types, we could easily generalise this definition to work with any value, and so Equal a b would correspond to the proposition that the values a and b are identical. So, for instance, we could write:

type IsBijection (f :: a -> b) (g :: b -> a) =
    forall x. Equal (f (g x)) (g (f x))

Here, f and g are regular functions, so f could easily have type Int -> Int. Again, Haskell can't do this; you need dependent types to do things like this.
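What Haskell can do today is prove properties of the type-level Equal above. For example, symmetry and transitivity of equality follow by pattern matching on Refl (a sketch; sym, trans and coerce' are my names):

```haskell
{-# LANGUAGE GADTs #-}

data Equal a b where
    Refl :: Equal a a

-- Symmetry: a = b implies b = a.
sym :: Equal a b -> Equal b a
sym Refl = Refl

-- Transitivity: a = b and b = c imply a = c.
trans :: Equal a b -> Equal b c -> Equal a c
trans Refl Refl = Refl

-- Equality is useful: if a = b, any a can be used as a b.
coerce' :: Equal a b -> a -> b
coerce' Refl x = x

main :: IO ()
main = print (coerce' Refl (42 :: Int))  -- 42
```

Matching on Refl is the proof step: inside the match, the compiler knows a and b are the same type.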

Typical functional languages are not really well-suited to writing proofs, not only because they lack dependent types, but also because of ⊥, which, having type a for all a, acts as a proof of any proposition. But total languages like Coq and Agda exploit the correspondence to act as proof systems as well as dependently-typed programming languages.

OTHER TIPS

Perhaps the best way to understand what it means is to start (or try) using types as propositions and programs as proofs. It's best to learn a language with dependent types, such as Agda (it's implemented in Haskell, and its syntax is similar to Haskell's). There are various articles and courses on that language. Learn You an Agda is incomplete, but it tries to simplify things, much like the LYAHFGG book does for Haskell.

Here is an example of a simple proof:

{-# OPTIONS --without-K #-} -- we are consistent

module Equality where

-- Peano arithmetic.
-- 
--   ℕ-formation:     ℕ is a set.
-- 
--   ℕ-introduction:  o ∈ ℕ,
--                    a ∈ ℕ | (1 + a) ∈ ℕ.
-- 
data ℕ : Set where
  o : ℕ
  1+ : ℕ → ℕ

-- Axiom for _+_.
-- 
--   Form of ℕ-elimination.
-- 
infixl 6 _+_
_+_ : ℕ → ℕ → ℕ
o + m = m
1+ n + m = 1+ (n + m)

-- The identity type for ℕ.
-- 
infix 4 _≡_
data _≡_ (m : ℕ) : ℕ → Set where
  refl : m ≡ m

-- Useful property.
-- 
cong : {m n : ℕ} → m ≡ n → 1+ m ≡ 1+ n
cong refl = refl

-- Proof _of_ mathematical induction:
-- 
--   P 0, ∀ x. P x → P (1 + x) | ∀ x. P x.
-- 
ind : (P : ℕ → Set) → P o → (∀ n → P n → P (1+ n)) → ∀ n → P n
ind P P₀ _ o = P₀
ind P P₀ next (1+ n) = next n (ind P P₀ next n)

-- Associativity of addition using mathematical induction.
-- 
+-associative : (m n p : ℕ) → (m + n) + p ≡ m + (n + p)
+-associative m n p = ind P P₀ is m
  where
    P : ℕ → Set
    P i = (i + n) + p ≡ i + (n + p)
    P₀ : P o
    P₀ = refl
    is : ∀ i → P i → P (1+ i)
    is i Pi = cong Pi

-- Associativity of addition using (dependent) pattern matching.
-- 
+-associative′ : (m n p : ℕ) → (m + n) + p ≡ m + (n + p)
+-associative′ o _ _ = refl
+-associative′ (1+ m) n p = cong (+-associative′ m n p)

There you can see the proposition (m + n) + p ≡ m + (n + p) as a type, and its proof as a function. There are more advanced techniques for such proofs (e.g., preorder reasoning; combinators in Agda are like tactics in Coq).

What else can be proven:

  • head ∘ init ≡ head for vectors, here.

  • That your compiler produces a program whose execution gives the same value as interpreting the same source program, here, for Coq. This book is also a good read on the topic of language modeling and program verification.

  • Anything else that can be proven in constructive mathematics, since Martin-Löf type theory is comparable in expressive power to ZFC. In fact, the Curry-Howard isomorphism can be extended to physics and topology, and to algebraic topology.

The only sensible answer I can come up with is this: If you have a function X -> Y -> Z, then the type signature says "assuming that it's possible to construct a value of type X, and another of type Y, then it is possible to construct a value of type Z". And the function body describes exactly how you would do this. That seems to make sense, but it's not very interesting. So clearly there must be more to it than this...

Well, yes, there is quite a lot more, because it has a lot of implications and opens up a lot of questions.

First of all, your discussion of CHI is framed exclusively in terms of implication/function types (->). You don't mention it, but you have probably also seen how conjunction and disjunction correspond to product and sum types respectively. But what about the other logical operators, like negation, universal quantification and existential quantification? How do we translate logical proofs involving these into programs? It turns out that it's roughly like this:

  • Negation corresponds to first-class continuations. Don't ask me to explain this one.
  • Universal quantification over propositional (not individual) variables corresponds to parametric polymorphism. So, for example, the polymorphic function id really has the type forall a. a -> a.
  • Existential quantification over propositional variables corresponds to a handful of things that have to do with data or implementation hiding: abstract data types, module systems and dynamic dispatch. GHC's existential types are related to this.
  • Universal and existential quantification over individual variables leads to dependent type systems.
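The conjunction/disjunction half of that table is easy to write down in Haskell (a sketch, with my own names And/Or for the standard pair and Either types):

```haskell
-- Conjunction: a proof of (p AND q) is a pair of proofs.
type And p q = (p, q)

-- Disjunction: a proof of (p OR q) is a proof of one side, tagged.
type Or p q = Either p q

-- Theorem: p ∧ q → q ∧ p (commutativity of conjunction).
andComm :: And p q -> And q p
andComm (p, q) = (q, p)

-- Theorem: p ∨ q → q ∨ p (commutativity of disjunction).
orComm :: Or p q -> Or q p
orComm (Left p)  = Right p
orComm (Right q) = Left q

main :: IO ()
main = print (andComm (1 :: Int, True), orComm (Left 'x' :: Either Char Bool))
```

Both theorems are proved simply by writing total functions of the corresponding types.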

Other than that, it also means that all sorts of proofs about logics instantly translate into proofs about programming languages. For example, the decidability of intuitionistic propositional logic implies termination of all programs in simply-typed lambda calculus.

Now, what the hell does Int -> Int mean?? o_O

It's a type, or alternatively a proposition. A definition like f :: Int -> Int; f = (+1) names a "program" f (in a specific sense that admits both functions and constants as "programs"), or alternatively a proof. The semantics of the language must either provide f as a primitive rule of inference, or demonstrate how f is a proof that can be constructed from such rules and premises.

These rules are often specified in terms of equational axioms that define the base members of a type and rules that allow you to prove what other programs inhabit that type. For example, switching from Int to Nat (natural numbers from 0 forward), we could have these rules:

  • Axiom: Zero :: Nat (Zero is a primitive proof of Nat)
  • Rule: x :: Nat ==> Succ x :: Nat
  • Rule: x :: Nat, y :: Nat ==> x + y :: Nat
  • Equation: x + Zero = x
  • Equation: Succ x + y = Succ (x + y)

These rules are enough to prove many theorems about addition of natural numbers. These proofs will also be programs.
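As a sketch, here are those rules transcribed into Haskell, with addition defined by recursion on its first argument in the same pattern as the Agda _+_ above; evaluating add is then literally replaying the equational rules:

```haskell
-- Natural numbers: the two introduction rules become constructors.
data Nat = Zero | Succ Nat
    deriving (Show, Eq)

-- The equational rules for addition, as a recursive definition:
--   Zero + m   = m
--   Succ n + m = Succ (n + m)
add :: Nat -> Nat -> Nat
add Zero     m = m
add (Succ n) m = Succ (add n m)

-- A small "theorem": 1 + 2 = 3, proved by computation.
one, two, three :: Nat
one   = Succ Zero
two   = Succ one
three = Succ two

main :: IO ()
main = print (add one two == three)  -- True
```

Checking add one two == three is a proof by computation; proving a universal statement like associativity for all naturals needs induction, as in the Agda example above.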

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow