Question

So we have the category Hask, where:

  • Types are the objects of the category
  • Functions are the morphisms from object to object in the category.

Similarly for Functor we have:

  • a Type constructor as the mapping of objects from one category to another
  • fmap for the mapping of morphisms from one category to another.

Now, when we write programs we basically transform values (not types), and it seems that the category Hask doesn't talk about values at all. I tried to fit values into the whole equation and came up with the following observation:

  • Each Type is a category itself. Ex: Int is a category of all integers.
  • Functions from a value to another value of the same type are morphisms of the category. Eg: Int -> Int
  • Functions from one value to another value of a different type are functors mapping values of one type to another.

Now my question is: do values even make sense in the category Hask (or in general category theory)? If yes, then any reference to read about it; if not, then any reason why not?

I hope the question makes sense :)


Solution

(I'll use words with their meaning from maths/category theory rather than programming, unless I mark it as code.)

One category at a time

One of the big ideas of category theory is to treat large complex things as a point, so, true to form, the set/group/ring/class/category of all integers is considered a single point when you're thinking of the category Hask.

Similarly, you could have a very complicated function on integers, but it's just considered a single element (point/arrow) of a collection (set/class) of morphisms.

The first thing you do in category theory is ignore the detail. So the category Hask doesn't care that Int can be considered a category - that's at a different level. Int is just a point (object) in Hask.

One level down

Every monoid is a category with one object. Let's use that.

How are the integers a category?

There's more than one answer to this (since the integers are a monoid under addition and a monoid under multiplication). Let's do addition:

You can consider the integers as a category with a single object, and the morphisms are functions such as (+1), (+2), (subtract 4).

You have to hold in your head that I'm considering the integer 7 as the number 7 but using the representation (+7) to make it appear to be a category. The laws of category theory deliberately don't say your morphisms have to be functions, but it's clearer that something's a category if it has the structure of a set of functions containing identity and closed under composition.

Any monoid makes a single-object category in the same way as we've just done with the integers.
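This one-object view can be sketched directly in Haskell (a minimal sketch; toMorphism and composed are my names, not standard ones):

```haskell
-- Each integer n is represented by the morphism (+ n) on the single object.
-- Composition of morphisms corresponds to addition of the integers,
-- and the identity morphism is (+ 0).
toMorphism :: Integer -> (Integer -> Integer)
toMorphism n = (+ n)

-- Composing (+ 3) with (+ 4) behaves exactly like the single morphism (+ 7).
composed :: Integer -> Integer
composed = toMorphism 3 . toMorphism 4
```

In GHCi, composed 10 and toMorphism 7 10 both evaluate to 17, and toMorphism 0 acts as the identity.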

Functors from the integers?

A function f from the integers as a category under the operation +, to some other type with an operation £ that forms a category could only be a functor if you had f(x+y) = f(x) £ f(y). (This is called a monoid homomorphism). Most functions aren't morphisms.

Example morphism

Strings are a monoid under ++, so they're a category.

len :: String -> Int
len = length

len is a monoid morphism from String to Int, because len (xs ++ ys) = len xs + len ys, so if you're considering (String, ++) and (Int, +) as categories, len is a functor.

Example non-morphism

(Bool,||) is a monoid, with False as the identity, so it's a one-object category. The function

quiteLong :: String -> Bool
quiteLong xs = length xs > 10

isn't a morphism because quiteLong "Hello " is False and quiteLong "there!" is also False, but quiteLong ("Hello " ++ "there!") is True, and False || False is not True.

Because quiteLong isn't a morphism, it's not a functor either.
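You can check the failure mechanically (a small sketch restating quiteLong from above; respectsOps is my name for the morphism condition):

```haskell
quiteLong :: String -> Bool
quiteLong xs = length xs > 10

-- For quiteLong to be a morphism from (String, ++) to (Bool, ||),
-- this would have to hold for all xs and ys:
respectsOps :: String -> String -> Bool
respectsOps xs ys = quiteLong (xs ++ ys) == (quiteLong xs || quiteLong ys)
```

respectsOps "Hello " "there!" evaluates to False, which is exactly the counterexample above.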

What's your point, Andrew?

My point is that some Haskell types can be considered categories, but not all functions between them are morphisms.

We don't think of the categories at different levels at the same time (unless you're using both categories for some weird purpose), and there's deliberately no theoretical interaction between the levels, because there's deliberately no detail on the objects and morphisms.

This is partly because category theory took off in maths to provide a language to describe Galois theory's lovely interaction between finite groups/subgroups and fields/field extensions, two apparently completely different structures that turn out to be closely related. Later, homology/homotopy theory made functors between topological spaces and groups that turn out to be both fascinating and useful, but the main point is that the objects and the morphisms are allowed to be very different to each other in the two categories of a functor.

(Normally category theory comes into Haskell in the form of a functor from Hask to Hask, so in practice in functional programming the two categories are the same.)

So... what exactly is the answer to the original question?

  • Each Type is a category itself. Ex: Int is a category of all integers.

If you think of them in particular ways. See PhilipJF's answer for details.

  • Functions from a value to another value of same type are morphism of the category. Eg: Int -> Int

I think you mixed up the two levels. Functions can be morphisms in Hask, but not all functions Int -> Int are functors under the addition structure. For example, f x = 2 * x + 10 isn't a functor between Int and Int: it's not a morphism of categories (which is what a functor is) from (Int, +) to (Int, +), but it is a morphism Int -> Int in the category Hask.
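A quick sketch makes the failure concrete (additive is my name for the homomorphism law):

```haskell
f :: Integer -> Integer
f x = 2 * x + 10

-- A functor from (Int, +) to (Int, +) is a monoid homomorphism, so it
-- would need f (x + y) == f x + f y; f fails because the constant 10
-- gets added twice on the right-hand side.
additive :: Integer -> Integer -> Bool
additive x y = f (x + y) == f x + f y
```

Here f (1 + 2) is 16 while f 1 + f 2 is 26, so additive 1 2 is False.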

  • Functions from one value to another value of different type are functor for mapping values of one type to another.

No, not all functions are functors, for example quiteLong isn't.

Do values even make sense in the Category of Hask (or in general category theory)?

Categories don't have values in category theory, they just have objects and morphisms, which are treated like vertices and directed edges. The objects don't have to have values, and values aren't part of category theory.

Other tips

As I commented on Andrew's answer (which is otherwise very good), you can regard the values in a type as the objects of that type viewed as a category, and regard functions as functors. For completeness, here are two ways:

Sets as Boring Categories

One of the most commonly used tools in mathematics is a "setoid"--that is, a set with an equivalence relation over it. We can think of this categorically via the concept of a "groupoid". A groupoid is a category where every morphism has an inverse such that f . (inv f) = id and (inv f) . f = id.

Why does this capture the idea of an equivalence relation? Well, an equivalence relation must be reflexive, but this is just the categorical claim that it has identity arrows; it must be transitive, but this is just composition; finally, it needs to be symmetric, which is why we added inverses.

The ordinary notion of equality in mathematics on any set thus gives rise to a groupoid structure: namely one where the only arrows are the identity arrows! This is often called the "discrete category".

It is left as an exercise to the reader to show that all functions are functors between discrete categories.

If you take this idea seriously, you begin to wonder about types with "equalities" that are not just identity. This would allow us to encode "quotient types." What is more, the groupoid structure has some more axioms (associativity, etc.) that are claims about "equality of equality proofs", which leads down the road of n-groupoids and higher category theory. This is cool stuff, although to be useful you need dependent types and some not fully worked out bits, and when it finally makes it into programming languages it should allow

data Rational where
    Frac :: Integer -> Integer -> Rational
    SameRational :: (a*d) ~ (b*c) -> (Frac a b) ~ (Frac c d)

Such that every time you pattern matched you would also have to match on the extra equality axiom and thus prove that your function respected the equivalence relation on Rational. But don't worry about this. The takeaway is just that the "discrete category" interpretation is a perfectly good one.
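In present-day Haskell you can approximate such a quotient by normalizing in a smart constructor, so equivalent fractions get the same representative (an illustrative sketch; Frac and mkFrac are made-up names, and a zero denominator is assumed not to occur):

```haskell
-- Reduce to lowest terms with a positive denominator, so that
-- equivalent fractions (a/b ~ c/d whenever a*d == b*c) compare equal.
data Frac = Frac Integer Integer deriving (Eq, Show)

mkFrac :: Integer -> Integer -> Frac   -- assumes b /= 0
mkFrac a b =
  let s = if b < 0 then -1 else 1      -- force the denominator positive
      g = gcd a b                      -- gcd is non-negative in Haskell
  in Frac (s * a `div` g) (s * b `div` g)
```

With this, mkFrac 2 4 and mkFrac 1 2 produce the same value, so ordinary == respects the equivalence.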

Denotational Approaches

Every type in Haskell is inhabited by an extra value, namely undefined. What is going on with this? Well, we could define a partial order on each type related to how "defined" a value is, such that

forall a. undefined <= a

but also things like

forall a a' b b'. (a <= a') /\ (b <= b') -> ((a,b) <= (a',b'))

undefined is less defined in that it refers to a computation that doesn't terminate. (Actually, the undefined function is implemented by throwing an exception in every Haskell implementation, but let's pretend it was undefined = undefined.) You can't be sure something does not terminate: if you are given an undefined, all you can do is wait and see. Thus, it could be anything.

A partial order gives rise to a category in a standard way.

Thus, every type gives rise to a category where the values are objects in this way.

Why are functions functors? Well, a function can't tell it has gotten an undefined, because of the halting problem. As such, it either has to give back an undefined when it encounters one, or it has to produce the same answer no matter what it was given. It is left to you to show that this really is a functor.
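You can model a toy version of this order in Haskell by using Maybe as a stand-in for the flat domain, with Nothing playing the role of undefined (an analogy only; the real semantic order is about actual non-termination, and the names here are mine):

```haskell
-- Nothing stands in for the bottom (undefined) element; the definedness
-- order has Nothing <= anything, and Just x <= Just y only when x == y.
lessDefined :: Eq a => Maybe a -> Maybe a -> Bool
lessDefined Nothing  _        = True
lessDefined (Just x) (Just y) = x == y
lessDefined _        _        = False

-- Lifting a function preserves this order (it is monotone), which is
-- the functor condition when a partial order is viewed as a category.
liftFn :: (a -> b) -> Maybe a -> Maybe b
liftFn = fmap
```

For instance, lessDefined Nothing (Just 3) holds, and monotonicity means lessDefined x y implies lessDefined (liftFn f x) (liftFn f y).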

While there are some other, quite wonderful, answers here, they all somewhat miss your first question. To be clear, values simply do not exist and have no meaning in the category Hask. That's just not what Hask is there to talk about.

The above seems a bit dumb to say or feel, but I bring it up because it's important to note that category theory provides just one lens for examining the much more complex interactions and structures available in something as sophisticated as a programming language. It's not fruitful to expect all of that structure to be subsumed by the rather simple notion of a category. [1]

Another way of saying this is that we're trying to analyze a complex system and it's sometimes useful to view it as a category in order to seek out interesting patterns. It's this mindset that lets us introduce Hask, check that it really forms a category, notice that Maybe seems to behave like a Functor, and then use all of those mechanics to write down coherence conditions.

fmap id = id
fmap f . fmap g = fmap (f . g)

These rules would make sense regardless of whether we introduce Hask, but by seeing them as simple consequences of a simple structure we can discover within Haskell we understand their importance.


As a technical note, the entirety of this answer assumes Hask is actually "platonic" Hask, i.e. we get to ignore bottom (undefined and non-termination) as much as we like. Without that almost the entire argument falls apart just a bit.


Let's examine those laws more closely since they seem to almost run against my initial statement---they're patently operating at the value level, but "values don't exist in Hask", right?

Well, one answer is to take a closer look at what a categorical functor is. Explicitly, it's a mapping between two categories (say C and D) which takes objects of C to objects of D and arrows of C to arrows of D. It's worth noting that in general these "mappings" are not categorical arrows—they simply form a relation between the categories and do not necessarily share structure with the categories.

This is important because even considering Haskell Functors, endofunctors in Hask, we have to be careful. In Hask the objects are Haskell types and the arrows are Haskell functions between those types.

Let's look at Maybe again. If it's going to be an endofunctor on Hask then we need a way to take all types in Hask to other types in Hask. This mapping is not a Haskell function even though it might feel like one: pure :: a -> Maybe a doesn't qualify because it operates at the value level. Instead, our object mapping is Maybe itself: for any type a we can form the type Maybe a.

This already highlights the value of working in Hask without values---we really do want to isolate a notion of Functor which doesn't depend on pure.

We'll develop the rest of Functor by examining the arrow mapping of our Maybe endofunctor. Here we need a way to map the arrows of Hask to the arrows of Hask. Let's for now assume that this is not a Haskell function—it doesn't have to be—so to emphasize it we'll write it differently. If f is a Haskell function a -> b then Maybe[f] is some other Haskell function Maybe a -> Maybe b.

Now, it's hard to not skip ahead and just start calling Maybe[f] "fmap f", but we can do a bit more work before making that jump. Maybe[f] needs to have certain coherence conditions. In particular, for any type a in Hask we have an id arrow. In our metalanguage we might call it id[a] and we happen to know that it also goes by the Haskell name id :: a -> a. Altogether, we can use these to state the endofunctor coherence conditions:

For all objects in Hask a, we have that Maybe[id[a]] = id[Maybe a]. For any two arrows in Hask f and g, we have that Maybe[f . g] = Maybe[f] . Maybe[g].

The final step is to notice that Maybe[_] just happens to be realizable as a Haskell function itself as a value of the Hask object forall a b . (a -> b) -> (Maybe a -> Maybe b). That gives us Functor.
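Spelled out as plain Haskell, the arrow mapping Maybe[_] is the familiar definition (a sketch; maybeMap is just a local name for what the Functor instance calls fmap):

```haskell
-- The arrow mapping of the Maybe endofunctor, written out directly:
-- it takes an arrow a -> b of Hask to an arrow Maybe a -> Maybe b.
maybeMap :: (a -> b) -> (Maybe a -> Maybe b)
maybeMap _ Nothing  = Nothing
maybeMap f (Just x) = Just (f x)
```

The coherence conditions can then be spot-checked: maybeMap id behaves as id, and maybeMap (f . g) agrees with maybeMap f . maybeMap g.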


While the above was rather technical and complex, there's an important point in keeping the notions of Hask and categorical endofunctors straight and separate from their Haskell instantiation. In particular, we can discover all of this structure without introducing a need for fmap to exist as a real Haskell function. Hask is a category without introducing anything at all at the value level.

That's where the real beating heart of viewing Hask as a category lives. The notation that identifies endofunctors on Hask with Functor requires much more line blurring.

This line blurring is justified because Hask has exponentials. This is a tricky way of saying that there's a unification between whole bundles of categorical arrows and particular, special objects in Hask.

To be more explicit, we know that for any two objects of Hask, say a and b, we can talk about the arrows between those two objects, often denoted as Hask(a, b). This is just a mathematical set, but we know that there's another type in Hask which is closely related to Hask(a,b): (a -> b)!

So this is strange.

I originally declared that general Haskell values have absolutely no representation in the categorical presentation of Hask. Then I went on to demonstrate that we can do a whole lot with Hask by using only its categorical notions and without actually sticking those pieces inside of Haskell as values.

But now I'm noting that the values of a type like a -> b actually do exist as all of the arrows in the metalinguistic set Hask(a, b). That's quite a trick and it's exactly this metalinguistic blurring that makes categories with exponentials so interesting.

We can do a little better, though! Hask also has a terminal object. We can talk about it metalinguistically by calling it 1, but we also know it as the Haskell type (). If we look at any Hask object a, we know that there is a whole set of categorical arrows Hask((), a). Further, we know that these correspond to the values of the type () -> a. Finally, since given any function f :: () -> a we can immediately get an a by applying f to (), one might want to say that the categorical arrows in Hask((), a) are exactly the Haskell values of type a.
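That correspondence can be written down directly (a sketch; toArrow and fromArrow are my names):

```haskell
-- A value of type a, viewed as an arrow from the terminal object ()
-- to a, and back again: the two directions are mutually inverse.
toArrow :: a -> (() -> a)
toArrow = const

fromArrow :: (() -> a) -> a
fromArrow f = f ()
```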

Which ought to either be utterly confusing or incredibly mind-blowing.


I'm going to end this somewhat philosophically by sticking with my initial statement: Hask doesn't talk about Haskell values at all. It really doesn't as a pure category---categories are interesting precisely because they're very simple and thus don't need all these extra-categorical notions of types and values and typeOf inclusion and the like.

But I also, perhaps poorly, showed that even strictly as a category, Hask has something that looks very, very similar to all of the values of Haskell: the arrows of Hask((), a) for each Hask object a.

Philosophically we might argue that these arrows aren't really the Haskell values we're looking for---they're just stand-ins, categorical mocks. You might argue that they're different things but simply happen to be in one-to-one correspondence with Haskell values.

I actually think that's a really important idea to keep in mind. These two things are different, they just behave similarly.


Very similarly. Any category lets you compose arrows, and so let's assume we pick some arrow in Hask(a, b) and some arrow in Hask((), a). If we combine these arrows with category composition we get an arrow in Hask((), b). Turning this all on its head a bit we might say that what I just did was find a value of type a -> b, a value of type a, and then combined them to produce a value of type b.

In other words, if we look at things sideways we can see categorical arrow composition as a generalized form of function application.
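Concretely, in Haskell terms (a small sketch): composing a "value arrow" () -> a with an ordinary arrow a -> b yields a "value arrow" () -> b, which is function application in disguise.

```haskell
-- x is an arrow () -> Int, i.e. the "value" 5 of Int in the arrow-only view.
x :: () -> Int
x = const 5

-- g is an ordinary arrow Int -> Int of Hask.
g :: Int -> Int
g n = n * 2

-- Their categorical composite is an arrow () -> Int: the "value" g 5.
y :: () -> Int
y = g . x
```

y () evaluates to 10, which is exactly g (x ()).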

This is what makes categories like Hask so interesting. Broadly, these kinds of categories are called Cartesian closed categories, or CCCs. Due to having terminal objects, products, and exponentials, they have structures which completely model typed lambda calculus.

But they still don't have values.

[1] If you're reading this before reading the rest of my answer, then keep reading. It turns out that while it's absurd to expect that to happen, it actually kind of does. If you're reading this after reading my whole answer then let's just reflect on how cool CCCs are.

There are several ways to put things in terms of categories, especially programming languages, which turn out to be very rich constructions.

If we choose the category Hask, we're just setting a level of abstraction: a level at which it is not so comfortable to talk about values.

However, constants can be modeled in Hask as arrows from the terminal object () to the corresponding type. Then, for example:

  • True : () -> Bool
  • 'a' : () -> Char

You can check: Barr, Wells - Category Theory for Computing Science, section 2.2.

Any category with a terminal object (or with terminal object*s*) has so-called global elements (or points, or constants; see also Wikipedia, and, for example, Awodey's book on category theory, section 2.3 "Generalized elements") of its objects, which we can call the values of those objects here, taking global elements as a natural and universal categorical notion of "values".

For example, Set has the usual elements (of sets, the objects of Set) as global elements, which means that the elements of any set A can be viewed as the different functions (morphisms of Set) {⋆} → A from a unit set {⋆} to this set A. For a finite set A with |A| = n there are n such morphisms; for the empty set {} there are no morphisms {⋆} → {} in Set, so {} "has no elements" and |{}| = 0; for a singleton set such as {⋆} or {+} there is exactly one, so |{⋆}| = |{+}| = 1; and so on. Elements or "values" of sets are really just functions from a singleton set (1, the terminal object in Set), since there is an isomorphism A ≊ Hom(1, A) in Set (which is a CCC, so Hom is internal here and Hom(1, A) is an object).

So global elements are a generalization of this notion of elements in Set to any category with a terminal object. It can be generalized further with generalized elements (in a category of sets, posets or spaces, morphisms are determined by their actions on points, but that's not always the case in a general category). In general, once we turn "values" (elements, points, constants, terms) into arrows of the category in question, we can reason about them using the language of that particular category.

Similarly, in Hask we have, e.g., true as ⊤ → Bool and false as ⊤ → Bool:

true :: () -> Bool
true = const True

false :: () -> Bool
false = const False

true ≠ false in the usual sense; we also have a family of ⊥'s as ⊤ → Bool (undefined, error "...", fix, general recursion and so on):

bottom1 :: () -> Bool
bottom1 = const undefined

bottom2 :: () -> Bool
bottom2 = const $ error "..."

bottom3 :: () -> Bool
bottom3 = const $ fix id

bottom4 :: () -> Bool
bottom4 = bottom4

bottom5 :: () -> Bool
bottom5 = const b where b = b

...

⊥ ≠ false ≠ true, and that's it: we can't find any other morphisms of the form ⊤ → Bool, so ⊥, false and true are the only values of Bool which can be distinguished extensionally. Note that in Hask every object has values, i.e. is inhabited, since there are always morphisms ⊤ → A for any type A; this makes Hask different from Set or any other non-trivial CCC (its internal logic is kinda boring, which is what the paper "Fast and Loose Reasoning is Morally Correct" is about: we need to look for a subset of Haskell which forms a nice CCC with a sane logic).

Also, in type theory values are syntactically represented as terms, which again have similar categorical semantics.


And if we talk about "platonic" (i.e. total, BiCCC) Hask, then here is a trivial proof of A ≊ Hom(1, A) in Agda (which captures those platonic traits well):

module Values where

open import Function
open import Data.Unit
open import Data.Product
open import Relation.Binary.PropositionalEquality

_≊_ : Set → Set → Set
A ≊ B = ∃ λ (f : A → B) → ∃ λ (f⁻¹ : B → A) → f⁻¹ ∘ f ≡ id × f ∘ f⁻¹ ≡ id

iso : ∀ {A} → A ≊ (⊤ → A)
iso = const , flip _$_ tt , refl , refl
License: CC-BY-SA with attribution. Not affiliated with Stack Overflow.