A while ago I looked at the concepts of an algebra and a coalgebra, and showed how to represent them in Haskell. I was intending to carry on to look at bialgebras and Hopf algebras, but I realised that I wasn't sufficiently clear in my own mind about the motivation for studying them. So, I confess that I have been keeping this blog just about afloat with filler material, while behind the scenes I've been writing loads of code - some fruitful, some not - to figure out how to motivate the concept of Hopf algebra.
Some concepts in mathematics have an obvious motivation at the outset: groups arise from thinking about (eg geometric) symmetry, and (normed) vector spaces are a representation of physical space. (Both concepts turn out to have a usefulness that goes far beyond the initial intuitive motivation.) With Hopf algebras this doesn't really appear to be the case.
Hopf algebras appear to have been invented for narrow technical reasons to solve a specific problem, and then over time it has turned out that they are both far more widespread and far more useful than was initially realised. So for me, there are two motivations for looking at Hopf algebras:
- There are some really interesting structures that are Hopf algebras. I'm particularly interested in combinatorial Hopf algebras, which I hope to cover in this blog in due course. But there are also examples in physics, associated with Lie algebras, for example.
- They have some interesting applications. I'm interested in the applications to knot theory (which unfortunately are a little technical, so I'll have to summon up some courage if I want to go over them in this blog). But again, there seem to be applications to physics, such as renormalization in quantum field theory (which I don't claim to understand, btw).
As a gentle introduction to Hopf algebras, I want to look at the group Hopf algebra, which has some claim to be the fundamental example, and provides a really good anchor for the concept.
So last time we looked at the group algebra - the free k-vector space of k-linear combinations of elements of a group G. We saw that many elements in the group algebra have multiplicative inverses, and we wrote code to calculate them. However, if you look again at that code, you'll see that it doesn't actually make use of group inverses anywhere. The code really only relies on the fact that finite groups are finite monoids. We could say that actually, the code we wrote is for finding inverses in finite monoid algebras. (But of course it just so happens that all finite monoids are groups.)
[Edit: Oops - that last claim in brackets is not true. Thanks to reddit readers for pointing it out.]
So the group algebra is an algebra, indeed a monoid algebra, but with the special property that the basis elements (the group elements) all have multiplicative inverses.
Now, mathematicians always prefer, if possible, to come up with definitions that are basis-independent. (In HaskellForMaths, we've been working with free vector spaces, which encourages us to think of vector spaces in terms of a particular basis. But many interesting properties of vector spaces are true regardless of the choice of basis, so it is better to find a way to express them that doesn't involve the basis.)
Can we find a basis-independent way to characterise this special property of the group algebra?
Well, our first step has to be to find a way to talk about the group inverse without having to mention group elements. We do that by encapsulating the group inverse in a linear map, which we call the antipode:
antipode x = nf (fmap inverse x)
So in other words the antipode operation just inverts each group element in a group algebra element (a k-linear combination of group elements). (The nf call just puts the vector in normal form.) For example:
> antipode $ p [[1,2,3]] + 2 * p [[2,3,4]]
[[1,3,2]]+2[[2,4,3]]
The key point about the antipode is that we now have a linear map on the group algebra, rather than just an operation on the group alone. Although we defined the antipode in terms of a particular basis, linear maps fundamentally don't care about the basis. If you choose some other basis for the group algebra, I can tell you how the antipode transforms your basis elements.
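(For example, if one of your basis vectors happens to be the combination g + h, then by linearity the antipode must send it to g⁻¹ + h⁻¹ - so the map itself is determined independently of the basis used to describe it.)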
Ok, so what we would like to do is find a way to express the group inverse property (ie the property that every group element has an inverse) as a property of the group algebra, in terms of the antipode.
Well, no point beating about the bush. Define:
trace (V ts) = sum [x | (g,x) <- ts]   -- the sum of the coefficients
diag = fmap (\g -> (g,g))              -- send each basis element g to (g,g), ie g⊗g
(So in maths notation, diag is the linear map which sends g to g⊗g.)
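For example, trace sends p [[1,2,3]] + 2 * p [[2,3,4]] to 1 + 2 = 3, and diag sends it to [[1,2,3]]⊗[[1,2,3]] + 2[[2,3,4]]⊗[[2,3,4]].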
Note that these are both linear maps. In particular, once again, although we have defined them in terms of our natural basis (the group elements), as linear maps they don't really care what basis we choose.
Then it follows from the group inverse property that the following diagram commutes:
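In equation form, the diagram says that the following composites of linear maps on the group algebra agree (where mult and unit are the algebra structure maps):

mult . (antipode ⊗ id) . diag = unit . trace = mult . (id ⊗ antipode) . diag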
Why will it commute? Well, think about what happens to a group element, going from left to right:
- Going along the top, we have g -> g⊗g -> g⊗g⁻¹ -> g*g⁻¹ = 1
- Going along the middle, we have g -> 1 -> 1
- Going along the bottom, we have g -> g⊗g -> g⁻¹⊗g -> g⁻¹*g = 1
All of the maps are linear, so they extend from group elements to arbitrary k-linear combinations of group elements.
(Shortly, I'll demonstrate that it commutes as claimed, using a quickcheck property.)
We're nearly there. We've found a way to express the special property of the group algebra, in a basis-independent way. We can say that what is special about the group algebra is that there is a linear antipode map, such that the above diagram commutes.
[In fact I think it may be true that if you have a monoid algebra such that the above diagram commutes, then it follows that the monoid is a group. This would definitely be true if we constrained antipode to be of the form (fmap f), for f a function on the monoid, but I'm not absolutely sure that it's true if antipode is allowed to be an arbitrary linear function.]
Now, the concept of a Hopf algebra is just a slight generalization of this.
Observe that trace and diag actually define a coalgebra structure:
- diag is clearly coassociative
- the left and right counit properties are also easy to check
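(To check these on a basis element g: both (diag ⊗ id) . diag and (id ⊗ diag) . diag send g to g⊗g⊗g, giving coassociativity; and (trace ⊗ id) . diag sends g to 1⊗g ≅ g, while (id ⊗ trace) . diag sends g to g⊗1 ≅ g, giving the counit properties. All the maps involved are linear, so checking on basis elements suffices.)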
So we can define:
instance (Eq k, Num k) => Coalgebra k (Permutation Int) where
    counit = unwrap . linear counit' where counit' g = 1  -- trace
    comult = fmap (\g -> (g,g))                           -- diagonal
(In fact, trace and diag define a coalgebra structure on the free vector space over any set. Of course, some free vector spaces also have other more interesting coalgebra structures.)
Let's just quickcheck:
> quickCheck (prop_Coalgebra :: GroupAlgebra Q -> Bool)
+++ OK, passed 100 tests.
So for the definition of a Hopf algebra, we allow the antipode to be defined relative to other coalgebra structures besides the trace-diag structure (with some restrictions to be discussed later). So a Hopf algebra is a vector space having both an algebra and a coalgebra structure, such that there exists an antipode map that makes the following diagram commute:
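In other words, we require

mult . (antipode ⊗ id) . comult = unit . counit = mult . (id ⊗ antipode) . comult

which is the same condition as before, with the trace-diag coalgebra structure replaced by whatever coalgebra structure we are using - and it's exactly what the quickCheck property below will test.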
Thus we can think of Hopf algebras as a generalisation of the group algebra. As we'll see (in future posts), there are Hopf algebras with rather more intricate coalgebra structures and antipodes than the group algebra.
Here's a Haskell class for Hopf algebras:
class Bialgebra k b => HopfAlgebra k b where
    antipode :: Vect k b -> Vect k b
(A bialgebra is basically an algebra plus coalgebra - but with one more condition that I'll explain in a minute.)
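The Bialgebra class itself doesn't need to introduce any new operations - the extra condition is a law rather than a method - so its declaration is basically just something like:

class (Algebra k b, Coalgebra k b) => Bialgebra k b  -- no methods; the compatibility conditions are laws that instances must satisfy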
We've already seen the antipode for the group algebra, but here it is again in its proper home, as part of a Hopf algebra instance:
instance (Eq k, Num k) => HopfAlgebra k (Permutation Int) where
    antipode = nf . fmap inverse
And here's a quickCheck property:
prop_HopfAlgebra x = (unit . counit) x == (mult . (antipode `tf` id) . comult) x
                  && (unit . counit) x == (mult . (id `tf` antipode) . comult) x

> quickCheck (prop_HopfAlgebra :: GroupAlgebra Q -> Bool)
+++ OK, passed 100 tests.
So there you have it. That's what a Hopf algebra is.
Except that I've cheated slightly. What I've defined so far is actually only a weak Hopf algebra. There is one other condition that is needed, called the Hopf compatibility condition. This requires that the algebra and coalgebra structures are "compatible" in the following sense:
- counit and comult are algebra morphisms
- unit and mult are coalgebra morphisms
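(Spelled out, the first condition says that counit (x*y) = counit x * counit y, comult (x*y) = comult x * comult y (with componentwise multiplication on the tensor square), counit 1 = 1, and comult 1 = 1⊗1. The second condition turns out to say exactly the same thing, just read the other way round.)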
I don't want to dwell too much on this. It seems a pretty reasonable requirement, although other compatibility conditions are possible (eg Frobenius algebras). An algebra plus coalgebra satisfying these conditions (even if it doesn't have an antipode) is called a bialgebra. And it turns out that the group algebra is one.
> quickCheck (prop_Bialgebra :: (Q, GroupAlgebra Q, GroupAlgebra Q) -> Bool)
+++ OK, passed 100 tests.
Is it true that all finite monoids are groups?
If a monoid is a set with an associative binary operation having an identity element, how about the monoid with S = {1, 2, 3, 4} and a * b = max {a, b}?
It has identity 1 and max is associative...
No, that was a temporary hallucination on my part. Sorry!
(It is however true that a finite monoid with left and right cancellation is a group - which is kind of how I was seeing it in my head.)