This week, I want to look at coalgebras. We already saw, in the case of products and coproducts, how given a structure in category theory, you can define a dual structure by reversing the directions of all the arrows. So it is with algebras and coalgebras.
Recall that an algebra consists of a k-vector space A together with two linear functions
unit :: k -> A
mult :: A⊗A -> A
satisfying two commutative diagrams, associativity and unit.
Well, a coalgebra consists of a k-vector space C together with two linear functions:
counit :: C -> k
comult :: C -> C⊗C
satisfying the following two commutative diagrams:
This diagram is actually shorthand for the following diagram:
These are just the associativity and unit diagrams, but with arrows reversed and relabeled.
Recall that when we say that a diagram commutes, we mean that it doesn't matter which way you follow the arrows, you end up with the same result. In other words, what these diagrams are saying is:
comult⊗id . comult == id⊗comult . comult (Coassoc)
counit⊗id . comult == unitInL (Left counit)
id⊗counit . comult == unitInR (Right counit)
(where unitInL, unitInR are the isomorphisms that we defined previously.)
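To make the laws concrete before we get to the library definition, here is a toy model (a standalone sketch, independent of HaskellForMaths - the names V, nf and so on are invented for this example): the "group-like" coalgebra on a basis, with comult b = b⊗b and counit b = 1, extended linearly. Both sides of coassociativity are flattened to triples, i.e. the associator is applied implicitly.

```haskell
import Data.List (sort)

-- a vector, as an association list of basis elements and coefficients
type V b = [(b, Rational)]

-- normal form: collect like terms, drop zeros, sort, so (==) compares vectors
nf :: Ord b => V b -> V b
nf v = sort [ (b, c) | b <- bs, let c = sum [ c' | (b', c') <- v, b' == b ], c /= 0 ]
  where bs = foldr (\b acc -> if b `elem` acc then acc else b : acc) [] (map fst v)

comult :: V b -> V (b, b)
comult v = [ ((b, b), c) | (b, c) <- v ]   -- group-like: b maps to b⊗b

counit :: V b -> Rational
counit = sum . map snd                     -- b maps to 1

-- both sides of coassociativity, flattened to triples via the associator
coassocLHS, coassocRHS :: Ord b => V b -> V (b, b, b)
coassocLHS v = nf [ ((a1, a2, b), k * k') | ((a, b), k) <- comult v
                                          , ((a1, a2), k') <- comult [(a, 1)] ]
coassocRHS v = nf [ ((a, b1, b2), k * k') | ((a, b), k) <- comult v
                                          , ((b1, b2), k') <- comult [(b, 1)] ]

-- the counit laws: applying counit to one tensor factor recovers the vector
leftCounit, rightCounit :: Ord b => V b -> V b
leftCounit v  = nf [ (b, counit [(a, 1)] * k) | ((a, b), k) <- comult v ]
rightCounit v = nf [ (a, counit [(b, 1)] * k) | ((a, b), k) <- comult v ]
```

For any v, coassocLHS v == coassocRHS v, and both counit laws give back nf v - which is exactly what the commutative diagrams assert.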
In HaskellForMaths, recall that we work with free vector spaces over a basis type, so the definition comes out slightly different:
module Math.Algebras.Structures where

...

class Coalgebra k c where
    counit :: Vect k c -> k
    comult :: Vect k c -> Vect k (Tensor c c)
What this definition really says is that c is the basis for a coalgebra. As before, we could try using type families to make this look more like the mathematical definition:
type TensorProd k u v =
    (u ~ Vect k a, v ~ Vect k b) => Vect k (Tensor a b)

class Coalgebra2 k c where
    counit2 :: c -> k
    comult2 :: c -> TensorProd k c c
In this definition, c is the coalgebra itself. However, I'm not going to pursue this approach for now.
What do coalgebras look like? Well, they look a bit like algebras would, if you looked at them through the wrong end of the telescope.
More specifically, given any basis b, define a dual basis as follows:
newtype Dual basis = Dual basis deriving (Eq,Ord)

instance Show basis => Show (Dual basis) where
    show (Dual b) = show b ++ "'"
(For those who know what a dual vector space is - this is it. For those who don't, I'll explain in a minute.)
Then, given an Algebra instance on some finite-dimensional basis b, we can define a Coalgebra instance on Dual b as follows:
1. Write out the unit and mult operations in the algebra as matrices.
For example, in the case of the quaternions, we have
        1  i  j  k
1  ->   1  0  0  0
          1  i  j  k
1⊗1 ->    1  0  0  0
1⊗i ->    0  1  0  0
1⊗j ->    0  0  1  0
1⊗k ->    0  0  0  1
i⊗1 ->    0  1  0  0
i⊗i ->   -1  0  0  0
i⊗j ->    0  0  0  1
i⊗k ->    0  0 -1  0
j⊗1 ->    0  0  1  0
j⊗i ->    0  0  0 -1
j⊗j ->   -1  0  0  0
j⊗k ->    0  1  0  0
k⊗1 ->    0  0  0  1
k⊗i ->    0  0  1  0
k⊗j ->    0 -1  0  0
k⊗k ->   -1  0  0  0
2. Then transpose these two matrices, and use them as the definitions for counit and comult, but replacing each basis element by its dual.
So, in the case of the quaternions, we would get
        1'
1'  ->  1
i'  ->  0
j'  ->  0
k'  ->  0
       1'⊗1' 1'⊗i' 1'⊗j' 1'⊗k' i'⊗1' i'⊗i' i'⊗j' i'⊗k' j'⊗1' j'⊗i' j'⊗j' j'⊗k' k'⊗1' k'⊗i' k'⊗j' k'⊗k'
1' ->    1     0     0     0     0    -1     0     0     0     0    -1     0     0     0     0    -1
i' ->    0     1     0     0     1     0     0     0     0     0     0     1     0     0    -1     0
j' ->    0     0     1     0     0     0     0    -1     1     0     0     0     0     1     0     0
k' ->    0     0     0     1     0     0     1     0     0    -1     0     0     1     0     0     0
In code, we get:
instance Num k => Coalgebra k (Dual HBasis) where
    counit = unwrap . linear counit'
        where counit' (Dual One) = return ()
              counit' _ = zero
    comult = linear comult'
        where comult' (Dual One) = return (Dual One, Dual One) <+> (-1) *>
                  ( return (Dual I, Dual I) <+> return (Dual J, Dual J) <+> return (Dual K, Dual K) )
              comult' (Dual I) = return (Dual One, Dual I) <+> return (Dual I, Dual One)
                  <+> return (Dual J, Dual K) <+> (-1) *> return (Dual K, Dual J)
              comult' (Dual J) = return (Dual One, Dual J) <+> return (Dual J, Dual One)
                  <+> return (Dual K, Dual I) <+> (-1) *> return (Dual I, Dual K)
              comult' (Dual K) = return (Dual One, Dual K) <+> return (Dual K, Dual One)
                  <+> return (Dual I, Dual J) <+> (-1) *> return (Dual J, Dual I)

unwrap :: Num k => Vect k () -> k
unwrap (V []) = 0
unwrap (V [((),x)]) = x
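As a sanity check, the transposition recipe can be carried out mechanically. The following is a standalone sketch (not part of HaskellForMaths - H, multQ, m and comultDual are names invented here): read off the structure constants of quaternion multiplication, then transpose them to get comult on the dual basis.

```haskell
data H = One | I | J | K deriving (Eq, Show, Enum, Bounded)

basis :: [H]
basis = [One ..]

-- quaternion product of two basis elements, as (coefficient, basis element)
multQ :: H -> H -> (Int, H)
multQ One y = (1, y)
multQ x One = (1, x)
multQ I I = (-1, One)
multQ I J = (1, K)
multQ I K = (-1, J)
multQ J I = (-1, K)
multQ J J = (-1, One)
multQ J K = (1, I)
multQ K I = (1, J)
multQ K J = (-1, I)
multQ K K = (-1, One)

-- structure constant: the coefficient of z in x*y (an entry of the mult matrix)
m :: H -> H -> H -> Int
m x y z = let (c, z') = multQ x y in if z == z' then c else 0

-- transposing: comult of (the dual of) z collects every ((x,y), m x y z)
comultDual :: H -> [((H, H), Int)]
comultDual z = [ ((x, y), c) | x <- basis, y <- basis, let c = m x y z, c /= 0 ]
```

For example, comultDual I yields [((One,I),1), ((I,One),1), ((J,K),1), ((K,J),-1)], agreeing with the hand-written comult' (Dual I) above.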
(Recall that when we want to think of k as a vector space, we have to represent it as Vect k ().)
We should check that this does indeed define a coalgebra. It's clear, by definition, that counit and comult are linear, as required. Here's a quickcheck property for coassociativity and counit:
prop_Coalgebra x =
    ((comult `tf` id) . comult) x == (assocL . (id `tf` comult) . comult) x && -- coassociativity
    ((counit' `tf` id) . comult) x == unitInL x                             && -- left counit
    ((id `tf` counit') . comult) x == unitInR x                                -- right counit
    where counit' = wrap . counit

wrap :: Num k => k -> Vect k ()
wrap 0 = zero
wrap x = V [((),x)]
(It's a bit awkward that we have to keep wrapping and unwrapping between k and Vect k (). I think we could have avoided this if we had defined counit to have signature Vect k c -> Vect k () instead of Vect k c -> k. To be honest, I think that is probably the right thing to do, so perhaps I'll change it in some future release of HaskellForMaths. What does anyone else think?)
Anyway, here's a quickCheck property to test that the dual quaternions are a coalgebra:
instance Arbitrary b => Arbitrary (Dual b) where
    arbitrary = fmap Dual arbitrary

prop_Coalgebra_DualQuaternion x = prop_Coalgebra x
    where types = x :: Vect Q (Dual HBasis)

> quickCheck prop_Coalgebra_DualQuaternion
+++ OK, passed 100 tests.
So, given an algebra structure on some vector space (basis), we can define a coalgebra structure on the dual vector space (dual basis). But why did we use the dual? Why didn't we just use the above construction to define a coalgebra structure on the vector space itself? For example, for the quaternions, that would look like this:
instance Num k => Coalgebra k HBasis where
    counit = unwrap . linear counit'
        where counit' One = return ()
              counit' _ = zero
    comult = linear comult'
        where comult' One = return (One,One) <+> (-1) *>
                  ( return (I,I) <+> return (J,J) <+> return (K,K) )
              comult' I = return (One,I) <+> return (I,One) <+> return (J,K) <+> (-1) *> return (K,J)
              comult' J = return (One,J) <+> return (J,One) <+> return (K,I) <+> (-1) *> return (I,K)
              comult' K = return (One,K) <+> return (K,One) <+> return (I,J) <+> (-1) *> return (J,I)
Well, we could indeed do that. However, it obscures the underlying mathematics. The point is that an algebra structure on a finite-dimensional vector space gives rise "naturally" to a coalgebra structure on the dual space. It is then also true that a finite-dimensional vector space is naturally isomorphic to its dual, which I guess is why the second construction works.
Okay, so why is the coalgebra structure on the dual vector space "natural"?
Well first, what is a dual vector space anyway? Given a vector space V, the dual space V* is Hom(V,k), the space of linear maps from V to k. If V is finite-dimensional with basis b, then as we have seen, we can define a basis Dual b for V* which is in one-to-one correspondence with b. Given a basis element ei, the dual basis element Dual ei represents the linear map that sends ei to 1, and any other basis element ej, j /= i, to 0.
It is convenient to define a linear map called the evaluation map, ev: V*⊗V -> k, with ev(a⊗x) = a(x):
ev :: (Num k, Ord b) => Vect k (Tensor (Dual b) b) -> k
ev = unwrap . linear (\(Dual bi, bj) -> delta bi bj *> return ())
-- where delta i j = if i == j then 1 else 0
Then given an element a in V* = Vect k (Dual b), and an element x in V = Vect k b, we can evaluate a(x) by calling ev (a `te` x).
dual = fmap Dual
> ev $ dual e1 `te` (4 *> e1 <+> 5 *> e2)
4
> ev $ dual e2 `te` (4 *> e1 <+> 5 *> e2)
5
> ev $ dual (e1 <+> 2 *> e2) `te` (4 *> e1 <+> 5 *> e2)
14
Provided V is finite-dimensional, then every element of V* can be expressed in the form dual v, for some v in V.
If we want to turn an element of Vect k (Dual b) into a real Haskell function Vect k b -> k, we can use the following code:
reify :: (Num k, Ord b) => Vect k (Dual b) -> (Vect k b -> k)
reify a x = ev (a `te` x)
> let f = reify (dual e2)
> f (4 *> e1 <+> 5 *> e2)
5
Now, suppose that we have a linear map f: U -> V between vector spaces. This gives rise to a linear map f*: V* -> U*, by defining:
ev (f* a ⊗ x) = ev (a ⊗ f x)
It turns out that the matrix for f* will be the transpose of the matrix for f.
For, suppose f(ui) = sum [mij vj | j <- ...] and f*(vi*) = sum [m*ij uj* | j <- ...]. Then
ev (f* vk* ⊗ ui) = ev (vk* ⊗ f ui) -- definition of f*
=> ev (sum [m*kl ul* | l <- ...] ⊗ ui) = ev (vk* ⊗ sum [mij vj | j <- ...]) -- expanding f and f*
=> sum [m*kl ev(ul* ⊗ ui) | l <- ...] = sum [mij ev (vk* ⊗ vj) | j <- ...] -- linearity of ev
=> sum [m*kl (delta l i) | l <- ...] = sum [mij (delta k j) | j <- ...] -- definition of ev
=> m*ki = mik -- definition of delta
So m* is the transpose of m.
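The derivation can be checked numerically with a small standalone sketch (the names Matrix, applyF, applyFstar are invented here; vectors and covectors are plain coordinate lists). Row i of the matrix holds the coordinates of f(ui), matching the convention f(ui) = sum [mij vj | j <- ...] above, and f* acts by the transposed matrix.

```haskell
import Data.List (transpose)

type Matrix = [[Int]]

-- ev a x = a(x): pair a covector (coordinates in the dual basis) with a vector
ev :: [Int] -> [Int] -> Int
ev a x = sum (zipWith (*) a x)

-- apply f to a vector: (f x)_j = sum_i x_i m_ij, i.e. dot x with column j
applyF :: Matrix -> [Int] -> [Int]
applyF mat x = [ sum (zipWith (*) x col) | col <- transpose mat ]

-- apply f* to a covector: (f* a)_i = sum_j m_ij a_j, i.e. dot a with row i -
-- relative to the coordinate action of f, this is the transposed matrix
applyFstar :: Matrix -> [Int] -> [Int]
applyFstar mat a = [ sum (zipWith (*) a row) | row <- mat ]
```

For any choice of mat, a and x, ev (applyFstar mat a) x == ev a (applyF mat x), which is the defining property of f* that we started from.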
Hence we have a contravariant functor * from the category of finite-dimensional vector spaces to itself, which takes a space V to the dual space V*, and a linear map f: U -> V to the dual map f*: V* -> U*. ("Contravariant" just means that it reverses the directions of arrows.)
Hopefully this explains why it is natural to think of an algebra structure on V as giving rise to a coalgebra structure on V*: a coalgebra is basically an algebra but with arrows reversed - and in going from vector spaces to their duals, we reverse arrows.
By the way, the converse is also true: A coalgebra structure on V gives rise to an algebra structure on V*, in the same way.