The compositions of a number n are the different ways that it can be expressed as an ordered sum of positive integers. For example, the compositions of 4 are 1+1+1+1, 1+1+2, 1+2+1, 2+1+1, 1+3, 2+2, 3+1, 4. Equivalently, we can forget about the plus signs, and just consider the ordered lists of positive integers that sum to n. Here's the Haskell code:
compositions 0 = [[]]
compositions n = [i:is | i <- [1..n], is <- compositions (n-i)]

> :m Math.Algebras.Structures Math.Combinatorics.CombinatorialHopfAlgebra
> compositions 4
[[1,1,1,1],[1,1,2],[1,2,1],[1,3],[2,1,1],[2,2],[3,1],[4]]
We will define a CHA QSym whose basis elements are (indexed by) compositions:
newtype QSymM = QSymM [Int] deriving (Eq)

instance Ord QSymM where
    compare (QSymM xs) (QSymM ys) = compare (sum xs, xs) (sum ys, ys)

instance Show QSymM where
    show (QSymM xs) = "M " ++ show xs

qsymM :: [Int] -> Vect Q QSymM
qsymM xs | all (>0) xs = return (QSymM xs)
         | otherwise = error "qsymM: not a composition"
We will use QSymM as the basis for a Hopf algebra, indexed by compositions. (In practice, we're going to stick with smallish compositions, as the calculations would take too long otherwise.) We form the free vector space over this basis. An element of the free vector space is a linear combination of compositions. For example:
> qsymM [1,2] + 2 * qsymM [3,1]
M [1,2]+2M [3,1]
The algebra structure on QSymM is similar to the algebra structure we saw last time on SSymM. Instead of shifted shuffles, we will use overlapping shuffles or quasi-shuffles. This means that when shuffling (x:xs) and (y:ys), then instead of just choosing between taking x first or taking y first, we also have a third choice to take them both and add them together.
quasiShuffles (x:xs) (y:ys) = map (x:) (quasiShuffles xs (y:ys)) ++
                              map ((x+y):) (quasiShuffles xs ys) ++
                              map (y:) (quasiShuffles (x:xs) ys)
quasiShuffles xs [] = [xs]
quasiShuffles [] ys = [ys]
For example:
> quasiShuffles [1,2] [3]
[[1,2,3],[1,5],[1,3,2],[4,2],[3,1,2]]
For our algebra structure, we say that the product of two compositions is the sum of all quasi-shuffles of the compositions:
instance (Eq k, Num k) => Algebra k QSymM where
    unit x = x *> return (QSymM [])
    mult = linear mult' where
        mult' (QSymM alpha, QSymM beta) = sumv [return (QSymM gamma) | gamma <- quasiShuffles alpha beta]
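For example, picking up the five quasi-shuffles of [1,2] and [3] that we calculated above, we expect:

> qsymM [1,2] * qsymM [3]
M [1,2,3]+M [1,3,2]+M [1,5]+M [3,1,2]+M [4,2]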
It's fairly obvious that this is associative and satisfies the algebra requirements. (And there are quickCheck tests in the package to confirm it.)
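(For illustration, the kind of property those tests check looks roughly like the following sketch. The property name is mine, and actually running it would need an Arbitrary instance for Vect Q QSymM, which the package's test code supplies in its own way.)

-- Sketch only: associativity of the quasi-shuffle product.
-- Requires an Arbitrary instance for Vect Q QSymM to run under quickCheck.
prop_AssociativeQSymM :: Vect Q QSymM -> Vect Q QSymM -> Vect Q QSymM -> Bool
prop_AssociativeQSymM x y z = (x*y)*z == x*(y*z)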
The coalgebra structure on QSymM is also similar to the coalgebra structure on SSymM. (We'll see in due course that SSym and QSym are closely related.) For the comultiplication of a composition, we'll just use the sum of the deconcatenations of the composition (without the flattening that we did with SSymM):
instance (Eq k, Num k) => Coalgebra k QSymM where
    counit = unwrap . linear counit' where
        counit' (QSymM alpha) = if null alpha then 1 else 0
    comult = linear comult' where
        comult' (QSymM gamma) = sumv [return (QSymM alpha, QSymM beta) | (alpha,beta) <- deconcatenations gamma]
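(The deconcatenations helper is the same one we used for SSymM. For reference, a minimal version - which I believe matches what the package does - is just:)

import Data.List (inits, tails)

-- Split a list at every position, keeping both halves,
-- including the two trivial splits at either end.
deconcatenations :: [a] -> [([a],[a])]
deconcatenations xs = zip (inits xs) (tails xs)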
For example:
> comult $ qsymM [1,2,3]
(M [],M [1,2,3])+(M [1],M [2,3])+(M [1,2],M [3])+(M [1,2,3],M [])
(Recall that (x,y) should be read as x⊗y.)
This comultiplication, along with those we have seen for other combinatorial Hopf algebras, is obviously coassociative - but perhaps I should spell out what that means. Coassociativity says that:
(comult⊗id) . comult = (id⊗comult) . comult
In other words, if you split a composition in two, and then split the left part in two - in all possible ways - then you get the same sum of possibilities as if you had done the same but splitting the right part in two. This is obvious, because it's just the sum of possible ways of splitting the composition in three (modulo associativity).
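For example, with the composition [1,2]:

((comult⊗id) . comult) [1,2] =
(comult⊗id) ([]⊗[1,2] + [1]⊗[2] + [1,2]⊗[]) =
[]⊗[]⊗[1,2] + []⊗[1]⊗[2] + [1]⊗[]⊗[2] + []⊗[1,2]⊗[] + [1]⊗[2]⊗[] + [1,2]⊗[]⊗[]

which is just the sum of all six ways of splitting [1,2] into three (possibly empty) parts - and applying (id⊗comult) . comult instead gives exactly the same six terms.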
Then for a coalgebra we also require:
(id⊗counit) . comult = id = (counit⊗id) . comult
But this is clear. For example:
((id⊗counit) . comult) [1,2,3] =
(id⊗counit) ([]⊗[1,2,3] + [1]⊗[2,3] + [1,2]⊗[3] + [1,2,3]⊗[]) =
0 + 0 + 0 + [1,2,3]
(modulo the isomorphism C⊗k = C)
Then, as we did previously, we can quickCheck that the algebra and coalgebra structures are compatible, and hence that we have a bialgebra. (See the detailed explanation last time: mainly it comes down to the fact that deconcatenating quasi-shuffles is the same as quasi-shuffling deconcatenations.)
instance (Eq k, Num k) => Bialgebra k QSymM where {}
As before, this is a connected graded bialgebra in an obvious way (using the sum of the composition as the grading), and hence it automatically has an antipode. In this case we can give an explicit expression for the antipode. The coarsenings of a composition are the compositions which are "more coarse" than the given composition, in the sense that they can be obtained by combining two or more adjacent parts of the given composition.
coarsenings (x1:x2:xs) = map (x1:) (coarsenings (x2:xs)) ++ coarsenings ((x1+x2):xs)
coarsenings xs = [xs] -- for xs a singleton or null
For example:
> coarsenings [1,2,3]
[[1,2,3],[1,5],[3,3],[6]]
Then the antipode can be defined as follows:
instance (Eq k, Num k) => HopfAlgebra k QSymM where
    antipode = linear antipode' where
        antipode' (QSymM alpha) = (-1)^length alpha * sumv [return (QSymM beta) | beta <- coarsenings (reverse alpha)]
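For example, for [1,2] the sign is (-1)^2 = +1, and the coarsenings of the reversal [2,1] are [2,1] and [3], so we expect:

> antipode $ qsymM [1,2]
M [2,1]+M [3]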
Why does this work? Remember that - in the case of a combinatorial Hopf algebra, where counit picks out the empty structure - antipode has to perform the disappearing trick of making everything cancel out.
That is, we require that
mult . (id⊗antipode) . comult = unit . counit
where the right hand side is zero for any non-empty composition.
Now, in order to try to figure out how it works, I decided to go through an example. However, in the cold light of day I have to admit that perhaps this is one of those times when maths is not a spectator sport. So the rest of this blog post is "optional".
Ok, so let's look at an example:
[1,2,3]
-> (comult = deconcatenations)
[]⊗[1,2,3] + [1]⊗[2,3] + [1,2]⊗[3] + [1,2,3]⊗[]
-> (id⊗antipode = reverse and coarsen the right-hand part of each term, with sign (-1)^(number of parts))
- []⊗([3,2,1] + [5,1] + [3,3] + [6])
+ [1]⊗([3,2] + [5])
- [1,2]⊗[3]
+ [1,2,3]⊗[]
-> (mult = quasi-shuffles)
- [3,2,1] - [5,1] - [3,3] - [6]
+ [1,3,2] + [4,2] + [3,1,2] + [3,3] + [3,2,1] + [1,5] + [6] + [5,1]
- [1,2,3] - [1,5] - [1,3,2] - [4,2] - [3,1,2]
+ [1,2,3]
= 0
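We can also get GHCi to do this calculation for us. (I'm assuming here that the tensor product of linear maps is available as tf from Math.Algebras.TensorProduct; if that name differs in your version of the package, the hand calculation above is the whole argument.)

> -- assumes tf = tensor product of linear maps, from Math.Algebras.TensorProduct
> (mult . (id `tf` antipode) . comult) (qsymM [1,2,3])
0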
(The way I like to think of this is:
- comult splits the atom into a superposition of fragment states
- antipode turns the right hand part of each fragment pair into a superposition of anti-matter states
- mult brings the matter and anti-matter parts together, causing them to annihilate
However, I think that is just fanciful - there's no flash of light - so I'll say no more.)
So why does it work? Well, in the final sum, the terms have to cancel out in pairs. If we look at some of the terms, it appears that there are three possibilities:
Case 1:
Consider the case of [4,2]. It arises in two ways in the final sum:
123 -> 1,23 -> 1,32 -> 42
123 -> 12,3 -> -12,3 -> -42
In both cases, the 1 and the 3 are combined during the quasi-shuffle phase to make a 4. In order to be combined in this way, they have to end up on opposite sides of the split during the deconcatenation phase. Because there is a 2 separating them, there are two ways this can happen, with the 2 ending up on either the left or right hand side of the split. And then the alternating signs ensure that the two outcomes cancel out at the end.
(In this case there was just one part separating them, but the same thing happens if there are more. For example:
1234 -> 1,234 -> -1,432 -> -532 + other terms
1234 -> 12,34 -> 12,43 -> 532 + 523
1234 -> 123,4 -> -123,4 -> -523)
Case 2:
Consider the case of [3,3]. This arises in two ways in the final sum:
123 -> [],123 -> -[],33 -> -33
123 -> 1,23 -> 1,32 -> 33
In this case it is the 1 and the 2 that combine. Unlike the [4,2] case, here the parts that combine are adjacent to one another in the original composition. In the top interaction they combine during the reverse-and-coarsen phase; in the bottom interaction they combine during the quasi-shuffle phase. If x and y are adjacent, then the only way they can combine during the quasi-shuffle phase is if the split happens between them during the deconcatenation phase. This will always cancel with the term we get by splitting just before both x and y, and then combining them during coarsening.
wxyz -> w,xyz -> +/- w,z(y+x) -> {wz}(y+x)
wxyz -> wx,yz -> -/+ wx,zy -> {wz}(x+y)
Case 3:
Finally, it may happen that x and y don't combine. This can happen in two different ways, depending on whether x and y are adjacent parts or not.
If they're not adjacent, then we get a failed case 1. For example, the [3,1,2] and [1,3,2] terms arise when the 1 and 3 fail to combine into a 4:
123 -> 1,23 -> 1,32 -> 132 + 312
123 -> 12,3 -> -12,3 -> -132-312
If they're adjacent, we get a failed case 2. For example, the [3,2,1] terms arise when the 1 and 2 fail to combine into a 3:
123 -> [],123 -> -[],321 -> -321
123 -> 1,23 -> 1,32 -> 321
This isn't quite a proof, of course. In a longer composition, there might be an opportunity for more than one of these cases to happen at the same time, and we need to show how that all works out. Also, I'm not sure that the [1,2,3] term is either a failed case 1 or a failed case 2. I hope, at least, that this analysis has shed some light on why it works. (Exercise: Complete the proof.)