I've read in a few places claims that equivalent functionality to ExistentialQuantification can be had using RankNTypes. Could someone provide an example of why this is or is not possible?

Solution

Normally, all type variables in Haskell are implicitly universally quantified at the outermost scope of the type. RankNTypes allows a universal quantifier forall to appear nested, e.g. the type forall a b. (a -> a) -> b -> b is very different from forall b. (forall a. a -> a) -> b -> b.
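
To make the difference concrete, here's a minimal sketch (my own example; the names f and g are hypothetical):

{-# LANGUAGE RankNTypes #-}

-- Outermost quantifiers: the *caller* picks both a and b, so the body
-- knows nothing about either type and cannot relate them.
f :: forall a b. (a -> a) -> b -> b
f _ x = x

-- Nested quantifier: the *argument* must itself be polymorphic, so the
-- body may instantiate it at any type it likes, including b.
g :: forall b. (forall a. a -> a) -> b -> b
g i x = i x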

There is a sense in which types on the left side of a function arrow are logically "negated", in roughly the same sense that (->) is logical implication. Logically, the universal and existential quantifiers are related by a De Morgan duality: (∃x. P(x)) is equivalent to ¬(∀x. ¬P(x)), or in other words "there exists an x such that P(x)" corresponds to "it is not the case that, for all x, P(x) is false".

So a forall on the left of a function arrow is "negated" and behaves like an existential. If you put the whole thing to the left of another function arrow it's double-negated and behaves as a universal quantifier again, modulo some fiddly details.

The same idea of negation applies to values as well, so to encode the type exists x. x we want:

  1. forall x. in contravariant (negated) position
  2. a value of that type x in covariant (positive) position.

Since the value must be inside the scope of the quantifier, our only choice is double-negation--a CPS transform, basically. To avoid restricting things otherwise, we'll then universally quantify over the type on the right of the function arrows. So exists x. x is translated to forall r. (forall x. x -> r) -> r. Compare the placement of the types and quantifiers here to the two requirements above to verify that they're both met.
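
As a concrete sketch of that translation (my own names, not from the original question):

{-# LANGUAGE RankNTypes #-}

-- The CPS-style encoding of exists x. x, wrapped in a newtype.
newtype Exists' = Exists' { runExists' :: forall r. (forall x. x -> r) -> r }

-- Packing a value hides its type.
pack :: a -> Exists'
pack x = Exists' (\k -> k x)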

In more operational terms this just means that given a function with the above type, because we give it a function with a universally quantified argument type, it can apply that function to any type x it likes, and since it has no other way of getting a value of type r we know it will apply that function to something. So x will refer to some type, but we don't know what--which is basically the essence of existential quantification.
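
Continuing the hypothetical encoding above, a consumer can only treat the packed value abstractly:

-- The continuation must work for *any* x, so it cannot inspect the
-- hidden value; all it knows is that some value exists.
useIt :: Exists' -> String
useIt e = runExists' e (\_ -> "got a value of some unknown type")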

In more practical, day-to-day terms, any universally quantified type variable can be regarded as existential if you're looking at it from the "other side" of a function type. Because the unification performed as part of type inference can't extend past quantifier scopes, you can sometimes end up in a situation where GHC would have to unify a type variable in an outer scope with a quantified type from a nested scope; that's how you get compiler errors about escaping types and skolems and whatnot, the latter (I assume) being related to Skolem normal form.
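
A classic concrete instance of that kind of error involves runST; here's a sketch of code GHC rejects:

import Control.Monad.ST (runST)
import Data.STRef (newSTRef)

-- Rejected by GHC: the result type STRef s Char mentions the
-- quantified (skolem) variable s, which would escape the scope of
-- runST's forall s.
-- ref = runST (newSTRef 'x')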

The way this relates to data types using existentials is that while you can declare a type like this:

data Exists = forall a. Exists a  -- requires the ExistentialQuantification extension

which represents an existential type, you can only get at the existentially quantified contents by unwrapping it with a pattern match:

unexist :: Exists -> r
unexist (Exists x) = foo x  -- 'foo' is a placeholder; what type must it have?

But if you consider what the type of foo would have to be in this definition, you end up with something like forall a. a -> r, which is equivalent to the CPS-style encoding above. There's a close relationship between CPS transforms and Church encoding of data types, so the CPS form can also be seen as a reified version of the pattern match.
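
Making that correspondence explicit (my own sketch), the pattern match can be reified as a function whose type is exactly the encoding:

{-# LANGUAGE RankNTypes, ExistentialQuantification #-}

-- Eliminating Exists through a polymorphic continuation: compare the
-- shape of this type to forall r. (forall a. a -> r) -> r.
matchExists :: Exists -> (forall a. a -> r) -> r
matchExists (Exists x) k = k x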

Finally, relating things back to logic--since that's where the term "existential quantifier" comes from--note that, if you think of being left of an arrow as negation and kinda squint to ignore the forall r. ... CPS cruft, these encodings of existential types are exactly the same thing as the De Morgan dualized form ¬(∀x. ¬P(x)) that was the starting point. So these really all are just different ways of looking at the same concept.

Other tips

Something I've found useful to understand C. A. McCann's point that the left hand side of the function arrow is "negated" is to look at this from the point of view of game semantics. For example, here is a very simple game semantics for first-order logic (stated very informally):

  • There are two players, #1 and #2
  • There are two roles: Proponent and Opponent.
  • Player #1 is the initial Proponent, Player #2 the initial Opponent.
  • For a proposition ∀x.A, Opponent chooses a t and we play A[x := t] (A with all free instances of x replaced with t).
  • For a proposition A ∧ B, Opponent chooses to oppose one of the conjuncts.
  • For a proposition ¬A, the players switch roles and play A.
  • For an atomic proposition, we consult its truth value; Proponent wins iff it's true, Opponent wins iff it's false.

The point of games like these is that a proposition is logically valid iff Player #1 (the initial Proponent) has a winning strategy for the corresponding game, i.e., if Player #1 has a winning move no matter what choices Player #2 makes.

Note the role-switch in the negation rule: to defend ¬A, a player must successfully attack A, so whoever proposes ¬A becomes the Opponent of A.
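
For example, here's a worked game of my own, over the natural numbers with P(x) meaning "x is even", for the proposition ¬(∀x. ¬P(x)):

  1. Negation: the players switch roles; Player #2 becomes Proponent of ∀x. ¬P(x), Player #1 its Opponent.
  2. Opponent (Player #1) picks t := 2; we play ¬P(2).
  3. Negation again: the roles switch back; Player #1 is Proponent of P(2).
  4. P(2) is true, so Proponent (Player #1) wins.

Player #1 wins precisely because an even number exists and Player #1, thanks to the role switch, is the one who gets to pick it.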

Because of the propositions-as-types correspondence this can be translated to type theory. Let's reformulate the game in terms of types formed from some set of atomic types, type variables, ∀, and →:

  • There are two players, but this time we'll call them Runner and Crasher.
  • Runner is the initial Proponent, Crasher the initial Opponent.
  • ∀x.a: Opponent picks an atom P, and we play a[x := P] (a with all free instances of x replaced by P).
  • a → b: Proponent chooses whether to oppose a (role switch) or propose b.
  • Atomic type: Proponent wins iff the type is inhabited, Opponent wins iff it's uninhabited. (I assume we've determined that beforehand for all atoms, and also that ⊤ is inhabited and ⊥ is not.)

Note that in the rule for a → b, if Proponent picks a then the roles switch: the initial Proponent must become the Opponent for a. This corresponds to the fact that the left hand side of the arrow is "negated". (The idea is that ⊥ → b is inhabited no matter what b may be, so if the Proponent of a → b has a winning strategy to oppose a, then a → b must be inhabited, so its Proponent wins.)

Again, the idea is that a type is inhabited iff Runner has a winning strategy, and uninhabited iff Crasher has a winning strategy.

Example game: ∀r. (∀a. (a → Int) → a → r) → r

  1. Opponent (Crasher) picks r := ⊥: (∀a. (a → Int) → a → ⊥) → ⊥
  2. Proponent (Runner) chooses to oppose the antecedent: ∀a. (a → Int) → a → ⊥
  3. Opponent (Runner) picks a := String: (String → Int) → String → ⊥
  4. Proponent (Crasher) chooses to oppose the antecedent: String → Int
  5. Proponent (Runner) chooses to propose the consequent: Int
  6. Int is inhabited; Proponent (Runner) wins.
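
In Haskell, Runner's winning strategy corresponds to an inhabitant of that type. A minimal sketch (my own example):

{-# LANGUAGE RankNTypes #-}

-- Runner picks a := String (step 3 above) by supplying a function of
-- type String -> Int and a String; the continuation then produces the r.
runnerWins :: forall r. (forall a. (a -> Int) -> a -> r) -> r
runnerWins k = k length "hello"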

Suppose we added a rule for ∃. The rule would have to be like this:

  • ∃x.a: Proponent picks x := P, and we play a[x := P]

Now, we can use this to analyze McCann's assertion that ∀r. (∀a. a → r) → r is equivalent to ∃a.a by analyzing the corresponding game trees. I'm not going to show the whole game tree (or carefully prove the equivalence, really), but I'll show two illustrative games:

First game: ∃a.a

  1. Proponent (Runner) picks a := ⊤.
  2. ⊤ is inhabited; Proponent (Runner) wins.

Second game: ∀r. (∀a. a → r) → r

  1. Opponent (Crasher) picks r := ⊥: (∀a. a → ⊥) → ⊥
  2. Proponent (Runner) chooses to oppose the antecedent: ∀a. a → ⊥
  3. Opponent (Runner) chooses a := ⊤: ⊤ → ⊥
  4. Proponent (Crasher) loses: they must choose between opposing ⊤ (a role switch that Runner wins, since ⊤ is inhabited) and proposing ⊥ (which is uninhabited).

Now, the informal observation I'm going to make here is that in both games Runner is the one who gets to pick the type to use for a. The equivalence between ∃a.a and ∀r. (∀a. a → r) → r really comes down to this: there is no strategy that allows Crasher to be the one who picks the type for a.
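
In Haskell terms again, a minimal sketch (my own names, with () standing in for ⊤):

{-# LANGUAGE RankNTypes #-}

-- Only the implementor (Runner) chooses the instantiation of a; the
-- caller (Crasher), who supplies the continuation, never gets to.
witness :: forall r. (forall a. a -> r) -> r
witness k = k ()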
