Saturday, 23 April 2011

What is a Coalgebra?

Last time we saw how to define an algebra structure on a vector space, in terms of category theory. I think perhaps some readers wondered what we gained by using category theory. The answer may be: not much, yet. However, in due course, we would like to understand the connection between quantum algebra and knot theory, and for that, category theory is essential.

This week, I want to look at coalgebras. We already saw, in the case of products and coproducts, how given a structure in category theory, you can define a dual structure by reversing the directions of all the arrows. So it is with algebras and coalgebras.

Recall that an algebra consists of a k-vector space A together with linear functions
unit :: k -> A
mult :: A⊗A -> A
satisfying two commutative diagrams, associativity and unit.

Well, a coalgebra consists of a k-vector space C together with two linear functions:
counit :: C -> k
comult :: C -> C⊗C
satisfying the following two commutative diagrams:

Coassociativity:

[diagram]

This diagram is actually shorthand for the following diagram:

[diagram]

The isomorphisms at the top are the assocL and assocR isomorphisms that we defined here.

Counit:

[diagram]
These are just the associativity and unit diagrams, but with arrows reversed and relabeled.

Recall that when we say that a diagram commutes, we mean that it doesn't matter which way you follow the arrows, you end up with the same result. In other words, what these diagrams are saying is:

comult⊗id . comult == id⊗comult . comult  (Coassoc)
counit⊗id . comult == unitInL             (Left counit)
id⊗counit . comult == unitInR             (Right counit)

(where unitInL, unitInR are the isomorphisms that we defined here.)


In HaskellForMaths, recall that we work with free vector spaces over a basis type, so the definition comes out slightly different:

module Math.Algebras.Structures where

...

class Coalgebra k c where
    counit :: Vect k c -> k
    comult :: Vect k c -> Vect k (Tensor c c)

What this definition really says is that c is the basis for a coalgebra. As before, we could try using type families to make this look more like the mathematical definition:

type TensorProd k u v =
    (u ~ Vect k a, v ~ Vect k b) => Vect k (Tensor a b)

class Coalgebra2 k c where
    counit2 :: c -> k
    comult2 :: c -> TensorProd k c c

In this definition, c is the coalgebra itself. However, I'm not going to pursue this approach for now.


What do coalgebras look like? Well, they look a bit like algebras would, if you looked at them through the wrong end of the telescope.

More specifically, given any basis b, define a dual basis as follows:

newtype Dual basis = Dual basis deriving (Eq,Ord)

instance Show basis => Show (Dual basis) where
    show (Dual b) = show b ++ "'"

(For those who know what a dual vector space is - this is it. For those who don't, I'll explain in a minute.)

Then, given an Algebra instance on some finite basis b, we can define a Coalgebra instance on Dual b as follows:

1. Write out the unit and mult operations in the algebra as matrices.

For example, in the case of the quaternions, we have
unit:

     1 i j k

1 -> 1 0 0 0

mult:

        1  i  j  k

1⊗1 ->  1  0  0  0
1⊗i ->  0  1  0  0
1⊗j ->  0  0  1  0
1⊗k ->  0  0  0  1
i⊗1 ->  0  1  0  0
i⊗i -> -1  0  0  0
i⊗j ->  0  0  0  1
i⊗k ->  0  0 -1  0
j⊗1 ->  0  0  1  0
j⊗i ->  0  0  0 -1
j⊗j -> -1  0  0  0
j⊗k ->  0  1  0  0
k⊗1 ->  0  0  0  1
k⊗i ->  0  0  1  0
k⊗j ->  0 -1  0  0
k⊗k -> -1  0  0  0

2. Then transpose these two matrices, and use them as the definitions for counit and comult, but replacing each basis element by its dual.

So, in the case of the quaternions, we would get
counit:

      1'

1' -> 1
i' -> 0
j' -> 0
k' -> 0

comult:

     1'⊗1' 1'⊗i' 1'⊗j' 1'⊗k' i'⊗1' i'⊗i' i'⊗j' i'⊗k' j'⊗1' j'⊗i' j'⊗j' j'⊗k' k'⊗1' k'⊗i' k'⊗j' k'⊗k'

1' ->  1     0     0     0      0    -1     0     0     0     0    -1     0     0     0     0    -1
i' ->  0     1     0     0      1     0     0     0     0     0     0     1     0     0    -1     0
j' ->  0     0     1     0      0     0     0    -1     1     0     0     0     0     1     0     0
k' ->  0     0     0     1      0     0     1     0     0    -1     0     0     1     0     0     0


In code, we get:

instance Num k => Coalgebra k (Dual HBasis) where
    counit = unwrap . linear counit'
        where counit' (Dual One) = return ()
              counit' _ = zero
    comult = linear comult'
        where comult' (Dual One) = return (Dual One, Dual One) <+>
                  (-1) *> ( return (Dual I, Dual I) <+> return (Dual J, Dual J) <+> return (Dual K, Dual K) )
              comult' (Dual I) = return (Dual One, Dual I) <+> return (Dual I, Dual One) <+>
                  return (Dual J, Dual K) <+> (-1) *> return (Dual K, Dual J)
              comult' (Dual J) = return (Dual One, Dual J) <+> return (Dual J, Dual One) <+>
                  return (Dual K, Dual I) <+> (-1) *> return (Dual I, Dual K)
              comult' (Dual K) = return (Dual One, Dual K) <+> return (Dual K, Dual One) <+>
                  return (Dual I, Dual J) <+> (-1) *> return (Dual J, Dual I)

unwrap :: Num k => Vect k () -> k
unwrap (V []) = 0
unwrap (V [( (),x)]) = x

(Recall that when we want to think of k as a vector space, we have to represent it as Vect k ().)
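
Incidentally, the transposition is completely mechanical, so for a finite basis we could derive the dual coalgebra generically instead of writing it out by hand. Here's a sketch of how that might look - comultDual and coefficientOf are names I've made up, it assumes the basis can be enumerated via Bounded and Enum, and it's not part of HaskellForMaths:

-- Derive comult on Dual b as the transpose of the mult matrix:
-- the coefficient of Dual bj ⊗ Dual bk in comult (Dual bi) is the
-- coefficient of bi in mult (bj ⊗ bk).
comultDual :: (Num k, Ord b, Bounded b, Enum b, Algebra k b)
           => Vect k (Dual b) -> Vect k (Tensor (Dual b) (Dual b))
comultDual = linear f
    where f (Dual bi) = foldr (<+>) zero
              [ coefficientOf bi (mult (return (bj,bk))) *> return (Dual bj, Dual bk)
              | bj <- [minBound..maxBound], bk <- [minBound..maxBound] ]
          -- read off one entry of the matrix
          coefficientOf b (V ts) = sum [x | (b',x) <- ts, b' == b]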


We should check that this does indeed define a coalgebra. It's clear, by definition, that counit and comult are linear, as required. Here's a quickCheck property for the coassociativity and counit conditions:

prop_Coalgebra x =
    ((comult `tf` id) . comult) x == (assocL . (id `tf` comult) . comult) x  && -- coassociativity
    ((counit' `tf` id) . comult) x == unitInL x                              && -- left counit
    ((id `tf` counit') . comult) x == unitInR x                                 -- right counit
        where counit' = wrap . counit

wrap :: Num k => k -> Vect k ()
wrap 0 = zero
wrap x = V [( (),x)]


(It's a bit awkward that we have to keep wrapping and unwrapping between k and Vect k (). I think we could have avoided this if we had defined counit to have signature Vect k c -> Vect k () instead of Vect k c -> k. To be honest, I think that is probably the right thing to do, so perhaps I'll change it in some future release of HaskellForMaths. What does anyone else think?)
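
Just to sketch what that alternative might look like (hypothetical - the names CoalgebraV, counitV, comultV are made up, and this is not the current HaskellForMaths definition):

class CoalgebraV k c where
    counitV :: Vect k c -> Vect k ()   -- the field k, represented as the free vector space Vect k ()
    comultV :: Vect k c -> Vect k (Tensor c c)

With this signature, counitV = wrap . counit, and the wrapping and unwrapping in the property above would disappear.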


Anyway, here's a quickCheck property to test that the dual quaternions are a coalgebra:

instance Arbitrary b => Arbitrary (Dual b) where
    arbitrary = fmap Dual arbitrary

prop_Coalgebra_DualQuaternion x = prop_Coalgebra x
    where types = x :: Vect Q (Dual HBasis)

> quickCheck prop_Coalgebra_DualQuaternion
+++ OK, passed 100 tests.


So, given an algebra structure on some vector space (basis), we can define a coalgebra structure on the dual vector space (dual basis). But why did we use the dual? Why didn't we just use the above construction to define a coalgebra structure on the vector space itself? For example, for the quaternions, that would look like this:

instance Num k => Coalgebra k HBasis where
    counit = unwrap . linear counit'
        where counit' One = return ()
              counit' _ = zero
    comult = linear comult'
        where comult' One = return (One,One) <+> (-1) *> ( return (I,I) <+> return (J,J) <+> return (K,K) )
              comult' I = return (One,I) <+> return (I,One) <+> return (J,K) <+> (-1) *> return (K,J)
              comult' J = return (One,J) <+> return (J,One) <+> return (K,I) <+> (-1) *> return (I,K)
              comult' K = return (One,K) <+> return (K,One) <+> return (I,J) <+> (-1) *> return (J,I)

Well, we could indeed do that. However, it obscures the underlying mathematics. The point is that an algebra structure on a finite-dimensional vector space gives rise "naturally" to a coalgebra structure on the dual space. It is also true that a finite-dimensional vector space is isomorphic to its dual - although not naturally: the isomorphism depends on a choice of basis, which in our representation we always have - and that, I guess, is why the second construction works.

Okay, so why is the coalgebra structure on the dual vector space "natural"?

Well first, what is a dual vector space anyway? Given a vector space V, the dual space V* is Hom(V,k), the space of linear maps from V to k. If V is finite-dimensional with basis b, then as we have seen, we can define a basis Dual b for V* which is in one-to-one correspondence with b. Given a basis element ei, the dual basis element Dual ei represents the linear map that sends ei to 1, and any other basis element ej, j /= i, to 0.

It is convenient to define a linear map called the evaluation map, ev: V*⊗V -> k, with ev(a⊗x) = a(x):

ev :: (Num k, Ord b) => Vect k (Tensor (Dual b) b) -> k
ev = unwrap . linear (\(Dual bi, bj) -> delta bi bj *> return ())

-- where delta i j = if i == j then 1 else 0


Then given an element a in V* = Vect k (Dual b), and an element x in V = Vect k b, we can evaluate a(x) by calling ev (a `te` x).

For example:

dual = fmap Dual

> ev $ dual e1 `te` (4 *> e1 <+> 5 *> e2)
4
> ev $ dual e2 `te` (4 *> e1 <+> 5 *> e2)
5
> ev $ dual (e1 <+> 2 *> e2) `te` (4 *> e1 <+> 5 *> e2)
14


Provided V is finite-dimensional, every element of V* can be expressed in the form dual v, for some v in V.

If we want to turn an element of Vect k (Dual b) into a real Haskell function Vect k b -> k, we can use the following code:

reify :: (Num k, Ord b) => Vect k (Dual b) -> (Vect k b -> k)
reify a x = ev (a `te` x)


For example:

> let f = reify (dual e2)
> f (4 *> e1 <+> 5 *> e2)
5



Now, suppose that we have a linear map f: U -> V between vector spaces. This gives rise to a linear map f*: V* -> U*, by defining:
ev (f* a ⊗ x) = ev (a ⊗ f x)
It turns out that the matrix for f* will be the transpose of the matrix for f.

To see this, suppose f(ui) = sum [mij vj | j <- ...] and f*(vi*) = sum [m*ij uj* | j <- ...]. Then

ev (f* vk* ⊗ ui) = ev (vk* ⊗ f ui)                                           -- definition of f*
=> ev (sum [m*kl ul* | l <- ...] ⊗ ui) = ev (vk* ⊗ sum [mij vj | j <- ...])  -- expanding f and f*
=> sum [m*kl ev(ul* ⊗ ui) | l <- ...] = sum [mij ev (vk* ⊗ vj) | j <- ...]   -- linearity of ev
=> sum [m*kl (delta l i) | l <- ...] = sum [mij (delta k j) | j <- ...]       -- definition of ev
=> m*ki = mik                                                                 -- definition of delta

So m* is the transpose of m.
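
We can capture this in code too. Here's a sketch of the dual map for free vector spaces on a finite enumerable basis - dualfn and coefficientOf are names I've made up for illustration, and this isn't part of HaskellForMaths:

-- The dual f* of a linear map f, formed by transposing its matrix:
-- the coefficient of Dual ai in f* (Dual bj) is the coefficient of bj in f ai.
dualfn :: (Num k, Ord a, Bounded a, Enum a, Eq b)
       => (Vect k a -> Vect k b) -> Vect k (Dual b) -> Vect k (Dual a)
dualfn f = linear f'
    where f' (Dual bj) = foldr (<+>) zero
              [ coefficientOf bj (f (return ai)) *> return (Dual ai)
              | ai <- [minBound..maxBound] ]
          coefficientOf b (V ts) = sum [x | (b',x) <- ts, b' == b]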

Hence we have a contravariant functor * from the category of finite-dimensional vector spaces to itself, which takes a space V to the dual space V*, and a linear map f: U -> V to the dual map f*: V* -> U*. ("Contravariant" just means that it reverses the directions of arrows.)
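
Indeed, functoriality follows directly from the defining property of the dual map: for composable linear maps f and g,

ev ((g.f)* a ⊗ x) = ev (a ⊗ g (f x))   -- definition of (g.f)*
                  = ev (g* a ⊗ f x)    -- definition of g*
                  = ev (f* (g* a) ⊗ x) -- definition of f*

so (g.f)* = f* . g*, with the order of composition reversed, as expected for a contravariant functor.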

Hopefully this explains why it is natural to think of an algebra structure on V as giving rise to a coalgebra structure on V*: a coalgebra is basically an algebra but with arrows reversed - and in going from vector spaces to their duals, we reverse arrows.

By the way, the converse is also true: A coalgebra structure on V gives rise to an algebra structure on V*, in the same way.

Saturday, 16 April 2011

What is an Algebra?

Over the last few months, we've spent somewhat longer than I originally expected looking at vector spaces, direct sums and tensor products. I hope you haven't forgotten that the reason we were doing this is because we want to look at quantum algebra, and "quantum groups". What are quantum groups? Well, one thing they are is algebras - so the next thing we need to do is define algebras.

Informally, an algebra is just a vector space which is also a ring (with unit) - or to put it another way, a ring (with unit) which is also a vector space. So a straightforward definition would be, A is an algebra if
(i) A is an additive group (this is required by both vector spaces and rings)
(ii) There is a scalar multiplication smult :: k×A -> A (satisfying some laws, as discussed in a previous post)
(iii) There is a multiplication mult :: A×A -> A, satisfying some laws:
- mult is associative: a(bc) = (ab)c
- mult distributes over addition: a(b+c) = (ab)+(ac), (a+b)c = (ac)+(bc)
(iv) There is a unit :: A, which is an identity for mult: 1a = a = a1

Some examples:
- C is an algebra over R (2-dimensional as a vector space)
- 2×2 matrices over a field k form a k-algebra (4-dimensional)
- polynomials over a field k form a k-algebra (infinite-dimensional)

It would be fairly straightforward to translate these definitions into a Haskell type class as they are. However, we're going to do things slightly differently, for two reasons.

Firstly, we would like to use the language of category theory, and define multiplication and unit in terms of linear maps (arrows in the category of vector spaces). Specifically, we define linear maps:
unit :: k -> A
mult :: A⊗A -> A

We then require that the following diagrams commute:

Associativity:

[diagram]
When we say that this diagram commutes, what it means is that it doesn't matter which way you decide to follow the arrows, the result is the same. Specifically:
mult . mult⊗id == mult . id⊗mult

Unit:

[diagram]
In this case there are two commuting triangles, so we are saying:
mult . unit⊗id == unitOutL
mult . id⊗unit == unitOutR
(where unitOutL, unitOutR are the relevant isomorphisms, which we defined last time.)

But hold on, haven't we forgotten one of the requirements? What about distributivity? Well, notice that the signature of mult was A⊗A -> A, not A×A -> A. A linear map from the tensor product corresponds exactly to a bilinear map from the cartesian product - that is the defining property of the tensor product - and bilinearity gives us distributivity. So in this version, distributivity is built into the definition of mult. Neat, eh?
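
To spell out the left-hand case:

(a+b)⊗c = (a⊗c) + (b⊗c)                      -- bilinearity of ⊗
=> mult ((a+b)⊗c) = mult (a⊗c) + mult (b⊗c)  -- linearity of mult
=> (a+b)c = (ac) + (bc)                       -- ie left distributivity

and similarly on the right.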

The second reason our definition will be different is that in HaskellForMaths, all our vector spaces are free vector spaces over some basis type b: V = Vect k b. Consequently, our algebras will be A = Vect k a, where a is a k-basis for the algebra. Because of this, it will turn out to be more natural to express some things in terms of a, rather than A.

With those forewarnings, here's the HaskellForMaths definition of an algebra:


module Math.Algebras.Structures where

import Math.Algebras.VectorSpace
import Math.Algebras.TensorProduct

class Algebra k a where
    unit :: k -> Vect k a
    mult :: Vect k (Tensor a a) -> Vect k a

In this definition, a represents the basis of the algebra, not the algebra itself, which is A = Vect k a. Recall that we defined a type Tensor a b, which is a basis for Vect k a ⊗ Vect k b.


If we wanted to stay a bit closer to the category theory definition, we could try continuing with the type family approach that we looked at last time:


type TensorProd k u v =
    (u ~ Vect k a, v ~ Vect k b) => Vect k (Tensor a b)

class Algebra2 k a where
    unit2 :: k -> a
    mult2 :: TensorProd k a a -> a

In this definition a is the algebra itself. I'm not going to pursue that approach any further here, but if anyone fancies giving it a go, I'd be interested to hear how they get on.


Anyway, as discussed, unit and mult are required to be linear maps. We could check this using QuickCheck, but in practice we will always define unit and mult in such a way that they are clearly linear.
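
If we did want such a check, it might look something like this (a sketch, not part of the library), where x and y range over Vect k (Tensor a a):

prop_Linear_mult (k,x,y) =
    mult (x <+> y) == (mult x <+> mult y) &&  -- mult preserves addition
    mult (k *> x) == (k *> mult x)            -- mult preserves scalar multiplication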

However, we can write a QuickCheck property to check the other requirements:


prop_Algebra (k,x,y,z) =
    (mult . (id `tf` mult)) (x `te` (y `te` z)) ==
         (mult . (mult `tf` id)) ((x `te` y) `te` z)              && -- associativity
    unitOutL (k' `te` x) == (mult . (unit' `tf` id)) (k' `te` x)  && -- left unit
    unitOutR (x `te` k') == (mult . (id `tf` unit')) (x `te` k')     -- right unit
    where k' = k *> return ()
          unit' = unit . unwrap  -- unit as a linear map Vect k () -> Vect k a

(Recall that when we wish to consider k as a vector space, we represent it as the free vector space Vect k ().)


When we have an algebra, we also have a ring, so we can define a Num instance:


instance (Num k, Eq b, Ord b, Show b, Algebra k b) => Num (Vect k b) where
    x+y = x <+> y
    negate x = neg x
    x*y = mult (x `te` y)
    fromInteger n = unit (fromInteger n)
    abs _ = error "Prelude.Num.abs: inappropriate abstraction"
    signum _ = error "Prelude.Num.signum: inappropriate abstraction"

This means that when we have an algebra, we'll be able to write expressions using the usual arithmetic operators.


Okay, so how about some examples of algebras? Well, we mentioned the complex numbers as an algebra over the reals, but let's go one better and define the quaternion algebra. This is a four-dimensional algebra over any field k, generated by {1, i, j, k}, satisfying the relations i^2 = j^2 = k^2 = ijk = -1. (It follows, for example, that (ijk)k = (-1)k, so ij(k^2) = -k, that is -ij = -k, so ij = k.)

Here's the code. First, we define our basis. The quaternions are traditionally denoted H, after Hamilton, who discovered them:


data HBasis = One | I | J | K deriving (Eq,Ord)

type Quaternion k = Vect k HBasis

i,j,k :: Num k => Quaternion k
i = return I
j = return J
k = return K

instance Show HBasis where
    show One = "1"
    show I = "i"
    show J = "j"
    show K = "k"

instance (Num k) => Algebra k HBasis where
    unit x = x *> return One
    mult = linear m
         where m (One,b) = return b
               m (b,One) = return b
               m (I,I) = unit (-1)
               m (J,J) = unit (-1)
               m (K,K) = unit (-1)
               m (I,J) = return K
               m (J,I) = -1 *> return K
               m (J,K) = return I
               m (K,J) = -1 *> return I
               m (K,I) = return J
               m (I,K) = -1 *> return J

Note that unit and mult are both linear by definition.

Let's just check that the code works as expected:

> :l Math.Algebras.Quaternions
> i^2
-1
> j^2
-1
> i*j
k


Now, are we sure that the quaternions are an algebra? Well, it's clear from the definition that the left and right unit conditions hold - see the lines m (One,b) = return b and m (b,One) = return b. But it's not obvious that the associativity condition holds, so perhaps we should quickCheck:


instance Arbitrary HBasis where
    arbitrary = elements [One,I,J,K]

prop_Algebra_Quaternion (k,x,y,z) = prop_Algebra (k,x,y,z)
    where types = (k,x,y,z) :: (Q, Quaternion Q, Quaternion Q, Quaternion Q)

> quickCheck prop_Algebra_Quaternion
+++ OK, passed 100 tests.


OK, how about 2×2 matrices? These form a four-dimensional algebra generated by the elementary matrices {e11, e12, e21, e22}, where eij is the matrix with a 1 in the (i,j) position, and 0s elsewhere:


data Mat2 = E2 Int Int deriving (Eq,Ord,Show)

instance Num k => Algebra k Mat2 where
   unit x = x *> V [(E2 i i, 1) | i <- [1..2] ]
   mult = linear mult' where
        mult' (E2 i j, E2 k l) = delta j k *> return (E2 i l)

delta i j | i == j    = 1
          | otherwise = 0

Notice the way that we only have to define multiplication on our basis elements, the elementary matrices eij, and the rest follows by bilinearity. Notice also that unit and mult are linear by definition. Let's just sanity check that this works as expected:

> :l Math.Algebras.Matrix
> unit 1 :: Vect Q Mat2
E2 1 1+E2 2 2
> let a = 2 *> return (E2 1 2) + 3 *> return (E2 2 1)
> a^2
6E2 1 1+6E2 2 2


It's straightforward to define an Arbitrary instance for Mat2, and quickCheck that it satisfies the algebra conditions.
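
For example, following the same pattern as the quaternion check above (a sketch - I haven't shown the output here):

instance Arbitrary Mat2 where
    arbitrary = do i <- elements [1,2]
                   j <- elements [1,2]
                   return (E2 i j)

prop_Algebra_Mat2 (k,x,y,z) = prop_Algebra (k,x,y,z)
    where types = (k,x,y,z) :: (Q, Vect Q Mat2, Vect Q Mat2, Vect Q Mat2)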


Finally, what about polynomials? Let's go one better, and define polynomials in more than one variable. An obvious basis for these polynomials as a vector space is the set of monomials. For example, polynomials in x,y,z are a vector space on the basis { x^i y^j z^k | i <- [0..], j <- [0..], k <- [0..] }.

Recall that our vector space code requires that the basis be an Ord instance, so we need to define an ordering on monomials. There are many ways to do this. We'll use the graded lex or glex ordering, which says that monomials of higher degree sort before those of lower degree, and among those of equal degree, lexicographic (alphabetical) ordering applies.

Given any set X of variables, we can construct the polynomial algebra over X as the vector space with basis the monomials in X. In the example above, we had X = {x,y,z}. For this reason, we'll allow our glex monomials to be polymorphic in the type of the variables. (In practice though, as you will see shortly, we will often just use String as the type of our variables.)

So a glex monomial over variables v is basically just a list of powers of elements of v:

data GlexMonomial v = Glex Int [(v,Int)] deriving (Eq)
-- The initial Int is the degree of the monomial. Storing it speeds up equality tests and comparisons


For example x^3 y^2 would be represented as Glex 5 [("x",3),("y",2)].


instance Ord v => Ord (GlexMonomial v) where
    compare (Glex si xis) (Glex sj yjs) =
        compare (-si, [(x,-i) | (x,i) <- xis]) (-sj, [(y,-j) | (y,j) <- yjs])
-- all the minus signs are to make things sort in the right order

[There's a bug in the HaskellForMaths v0.3.2 version of this Ord instance - the above code, which fixes it, will be in the next release.]
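
To see the ordering in action (these follow directly from the definition above):

> Glex 2 [("x",2)] < Glex 2 [("x",1),("y",1)]  -- equal degree, so lex order: x^2 before xy
True
> Glex 2 [("y",2)] < Glex 1 [("x",1)]          -- higher degree sorts first: y^2 before x
True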

I won't bore you with the Show instance - it's a bit fiddly.

The Algebra instance uses the fact that monomials form a monoid under multiplication, with unit 1. Given any monoid, we can form the free vector space having the monoid elements as basis, and then lift the unit and multiplication in the monoid into the vector space, thus forming an algebra called the monoid algebra. Here's the construction:


instance (Num k, Ord v) => Algebra k (GlexMonomial v) where
    unit x = x *> return munit
        where munit = Glex 0 []
    mult xy = nf $ fmap (\(a,b) -> a `mmult` b) xy
        where mmult (Glex si xis) (Glex sj yjs) = Glex (si+sj) $ addmerge xis yjs

(In the addmerge function, we ensure that provided the variables were listed in ascending order in both inputs, then they are still so in the output.)
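
addmerge itself lives with the vector space code; its behaviour is roughly as in this sketch (the real definition may differ in detail) - merge two ascending association lists, adding the exponents of any variable that appears in both:

addmerge :: (Ord a, Num b) => [(a,b)] -> [(a,b)] -> [(a,b)]
addmerge ((x,i):xis) ((y,j):yjs) =
    case compare x y of
        LT -> (x,i) : addmerge xis ((y,j):yjs)
        EQ -> (x,i+j) : addmerge xis yjs
        GT -> (y,j) : addmerge ((x,i):xis) yjs
addmerge xis yjs = xis ++ yjs  -- at least one of the lists is empty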

Finally, here's a convenience function for injecting a variable into the polynomial algebra:

glexVar v = V [(Glex 1 [(v,1)], 1)]

Then for example, we can do the following:

type GlexPoly k v = Vect k (GlexMonomial v)

> let x = glexVar "x" :: GlexPoly Q String
> let y = glexVar "y" :: GlexPoly Q String
> let z = glexVar "z" :: GlexPoly Q String
> (x+y+z)^3
x^3+3x^2y+3x^2z+3xy^2+6xyz+3xz^2+y^3+3y^2z+3yz^2+z^3


I hope you were impressed at how easy that all was. The foundation of free vector spaces, tensor products and algebras is only around a hundred lines of code. It then takes just another dozen lines or so to define an algebra: just define a basis, and define how the basis elements multiply - the rest follows by linearity.
