Wednesday, 18 November 2009

Three new modules in HaskellForMaths

It's been a little while since I posted here - and the reason is that I've been too busy writing code. The fruits of my labour are three new modules for HaskellForMaths, which I've uploaded in version 0.2.0. In due course, each of these deserves a blog post in its own right, but for the moment, I thought I'd give a quick overview.


In our investigation of graph automorphisms, we've come across the concept of a strong generating set (SGS) for a permutation group. This is a generating set of a special form, which makes certain calculations particularly easy - for example, calculating the order of the group (the number of elements it contains). Our graphAuts algorithm naturally produced a strong generating set for the group. But what if we are just given any old set of generators for a permutation group?

The random Schreier-Sims algorithm is a fast Monte-Carlo algorithm for finding a strong generating set, given a set of generators. Monte-Carlo means that there is a small probability that it will return the wrong answer - in this case, it might return an SGS for a subgroup of the group we are interested in, instead of the whole group. In practice, this isn't an issue. First, we can make the probability of error as small as we like. Second, we usually know the order of the group, so can check whether the SGS is right.

The main function provided in Math.Algebra.Group.RandomSchreierSims is sgs, which takes a set of generators and returns a strong generating set.

(We already have an sgs function in Math.Algebra.Group.SchreierSims. This is guaranteed to give the right answer - but can be significantly slower. So you have a choice of which one to use.)
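For example, here's a sketch of how sgs might be used from GHCi, on the standard generators of S5 (a 5-cycle and a transposition). The algorithm is randomised, so the strong generating set you get back may vary, but its order will always be 5! = 120, which we can check with orderSGS (which also appears in the posts below):

```haskell
-- GHCi session sketch; p, sgs and orderSGS are from HaskellForMaths
> :load Math.Algebra.Group.RandomSchreierSims
> let gens = [p [[1..5]], p [[1,2]]]  -- generators for S5
> let s = sgs gens                    -- a strong generating set for S5
> orderSGS s                          -- 120 = 5!, the order of S5
```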


A projective plane is an incidence structure of points and lines satisfying the following rules:
- Given any two distinct points, there is exactly one line which passes through both
- Given any two distinct lines, there is exactly one point which lies on both
- There exists a quadrangle: a set of four points, no three of which are collinear

(There is another definition of projective plane, in terms of designs, but we haven't covered those yet.)
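These axioms are easy to verify mechanically for a small example. Here's a self-contained sketch (plain lists, not the library's types) checking them for the smallest projective plane - the Fano plane of order 2, which appears again in an older post below - with the points labelled 1 to 7:

```haskell
-- one standard labelling of the seven points and seven lines of the Fano plane
fanoPoints :: [Int]
fanoPoints = [1..7]

fanoLines :: [[Int]]
fanoLines = [[1,2,3],[1,4,5],[1,6,7],[2,4,6],[2,5,7],[3,4,7],[3,5,6]]

-- any two distinct points lie on exactly one common line
axiom1 :: Bool
axiom1 = and [length [l | l <- fanoLines, x `elem` l, y `elem` l] == 1
             | x <- fanoPoints, y <- fanoPoints, x < y]

-- any two distinct lines meet in exactly one common point
axiom2 :: Bool
axiom2 = and [length (filter (`elem` m) l) == 1
             | l <- fanoLines, m <- fanoLines, l < m]

-- [1,2,4,7] is a quadrangle: no line contains three of its points
axiom3 :: Bool
axiom3 = not $ or [all (`elem` l) t | l <- fanoLines,
                   t <- [[1,2,4],[1,2,7],[1,4,7],[2,4,7]]]
```

All three of axiom1, axiom2, axiom3 come out True for this labelling.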

For any prime power q = p^n, the projective geometry PG(2,Fq) over the finite field Fq is a projective plane. However, it turns out that there are projective planes which are not of this form, associated with algebraic systems called near-fields, which are nearly fields but not quite.

In Math.Projects.MiniquaternionGeometry I construct PG(2,F9), together with three "non-Desarguesian" planes of order 9, based on the near-field J9 of order 9. The name Miniquaternion Geometry comes from the book of that name by Room and Kirkpatrick, because the multiplicative structure of J9 is the same as that of the unit quaternions.


A latin square is an n*n grid, with each cell containing a letter from an alphabet of n letters, such that in each row and in each column, each letter appears exactly once. For example:

1 2 3 4
2 1 4 3
3 4 1 2
4 3 2 1

Two latin squares are orthogonal if, when you superimpose them, each of the n^2 possible letter-pairs appears exactly once in the grid. For example:

1a 2b 3c 4d
2d 1c 4b 3a
3b 4a 1d 2c
4c 3d 2a 1b
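Both definitions translate directly into code. A self-contained sketch (these helpers are illustrative, not functions from the module), checked against the two squares above:

```haskell
import Data.List (nub, transpose)

-- a latin square as a list of rows: no repeats in any row or column
isLatin :: Eq a => [[a]] -> Bool
isLatin rows = all distinct rows && all distinct (transpose rows)
    where distinct xs = nub xs == xs

-- orthogonal: superimposing the squares yields every ordered pair exactly once
isOrthogonal :: (Eq a, Eq b) => [[a]] -> [[b]] -> Bool
isOrthogonal xss yss = nub pairs == pairs
    where pairs = zip (concat xss) (concat yss)

-- the two squares from the example above
sq1 :: [[Int]]
sq1 = [[1,2,3,4],[2,1,4,3],[3,4,1,2],[4,3,2,1]]

sq2 :: [[Char]]
sq2 = ["abcd","dcba","badc","cdab"]
```

Then isLatin sq1, isLatin sq2 and isOrthogonal sq1 sq2 all come out True, while isOrthogonal sq1 sq1 is False (superimposing a square on itself repeats the pair (1,1), for instance).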

It turns out that the existence of mutually orthogonal n*n latin squares is related to the existence of projective planes of order n. Euler's thirty-six officers problem asks for a pair of orthogonal latin squares of order 6. There is no solution, and this is related to the fact that there is no projective plane of order 6.

(There are projective planes of order q for any prime power q = p^n, and these give rise to mutually orthogonal q*q latin squares. However 6 is not a prime power. It is not known whether there exist projective planes whose order is not a prime power, but it is known by exhaustive computer search that there is no projective plane of order 6, and no pair of mutually orthogonal latin squares of order 6.)

This module lets you construct mutually orthogonal latin squares from projective planes, and other nice stuff.

Monday, 26 October 2009

Simple groups, the atoms of symmetry

[New release, HaskellForMaths-0.1.9, available here. Contains documentation improvements, new version of graphAuts using equitable partitions, and the code used in this blog post.]

Over the last few months on this blog, we have been looking at symmetry. We started out by looking at symmetries of graphs, then more recently we've been looking at symmetries of finite geometries. Last time, I said that there was more to say about finite geometries. There is, but first of all I realised that I need to say a bit more about groups.

We've come across groups already many times. Whenever we have an object, the collection of all symmetries of that object is called a group. (Recall that a symmetry is a change that leaves the object looking the same.) Suppose that we call this group G, and call arbitrary elements of the group g and h. Then G satisfies the following properties:
- There is a binary operation, *, defined on G. (g*h is the symmetry that you get if you do g then h.)
- If g and h are in G, then so is g*h. (Doing one symmetry followed by another is again a symmetry.)
- There is an element 1 in G, the identity, such that 1*g = g = g*1. (1 is the symmetry "do nothing".)
- If g is in G, then there is an element g^-1 in G, called the inverse of g, such that g*g^-1 = g^-1*g = 1. (Doing g^-1 is the same as undoing g.)
- (g*h)*k = g*(h*k). (The operation is associative - it doesn't matter how we bracket a sequence of symmetries.)

Mathematicians sometimes (usually) turn things the other way round: they define a group by the properties above, and then observe that the collection of symmetries of an object fits the definition. The problem with this is that it obscures the original intuitions behind the concept.

For us, the key point to take away from the definition is that groups are "closed". We've been looking at symmetries, represented by permutations. But symmetries and permutations just are the sort of thing that you can multiply (do one then another), that have an identity (do nothing), and that have inverses (undo). Those all come for free. So when we say that a collection of symmetries, or of permutations, is a group, the only new thing that we're claiming is that the collection is closed - for g, h in G, g*h, g^-1, and 1 are also in G.
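To make "closed" concrete, here is a tiny self-contained sketch, representing a permutation of [1..n] naively as its list of images (this is not the representation HaskellForMaths uses - it's just for illustration):

```haskell
import Data.List (sort)

type Perm = [Int]  -- list of images: [2,1,3] swaps 1 and 2, fixes 3

-- g `mult` h means "do g, then h"
mult :: Perm -> Perm -> Perm
mult g h = [h !! (i-1) | i <- g]

inverse :: Perm -> Perm
inverse g = map snd $ sort $ zip g [1..]

-- a set of permutations is a group iff it contains all products and inverses
-- (the identity [1..n] then comes for free, as g `mult` inverse g)
isClosed :: [Perm] -> Bool
isClosed gs = all (`elem` gs) ([mult g h | g <- gs, h <- gs] ++ map inverse gs)
```

For example, isClosed [[1,2,3],[2,1,3]] is True, but isClosed [[1,2,3],[2,3,1]] is False: the rotation [2,3,1] composed with itself gives [3,1,2], which is missing from the set.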

Okay, so now we know what a group is. Now you know what mathematicians are like - as soon as they've discovered some new type of structure, they always ask themselves questions like the following:
- can we classify all structures of this type?
- can smaller structures of this type be combined into larger structures of the same type?
- can larger structures of this type be broken down into smaller structures?
- can we classify all "atomic" structures of this type, that can't be broken down any further?

What about groups? Well, the answer is, for finite groups, mostly yes. Specifically, groups are made out of "atoms" - but there isn't a complete understanding of all the different ways they can be built from the atoms. Anyway, for the moment I'm going to skip the first two questions (about building groups up), and concentrate on the last two (breaking them down).

Okay, so can groups be broken down into smaller groups? Well, groups can have smaller groups contained within them. For example, let's consider the group D10 - the symmetries of the pentagon. (Somewhat confusingly, it's called D10 rather than D5, because it has 10 elements.)

> :load Math.Algebra.Group.PermutationGroup
> mapM_ print $ elts $ _D 10

Now, any subset of these elements, which is closed, is again a group. For example, the set { [], [[1,2],[3,5]] } is closed - 1, and all products and inverses of elements in the set, are in the set. This is called a subgroup of D10.

Here's Haskell code to find all subgroups of a group. (The group is passed in as a list of generators, and the subgroups are returned as lists of generators.)

> -- assumes: import qualified Data.Set as S
> subgps gs = [] : subgps' S.empty [] (map (:[]) hs) where
>     hs = filter isMinimal $ elts gs
>     subgps' found ls (r:rs) =
>         let ks = elts r in
>         if ks `S.member` found
>         then subgps' found ls rs
>         else r : subgps' (S.insert ks found) (r:ls) rs
>     subgps' found [] [] = []
>     subgps' found ls [] = subgps' found [] [l ++ [h] | l <- reverse ls, h <- hs, last l < h]

> -- g is the minimal elt in the cyclic subgp it generates
> isMinimal 1 = False
> isMinimal g = all (g <=) primitives -- g == minimum primitives
>     where powers = takeWhile (/=1) $ tail $ iterate (*g) 1
>           n = orderElt g -- == length powers + 1
>           primitives = filter (\h -> orderElt h == n) powers

By the way, most of the code we'll be looking at this week, including the above, should only be used with small groups.

Okay, so let's try it out:

> mapM_ print $ subgps $ _D 10

The last in the list is D10 itself, generated by a reflection and a rotation. Then there are five subgroups generated by five different reflections. (For example, recall that [[2,5],[3,4]] is the reflection that swaps 2 with 5, and 3 with 4 - in other words, reflection in the vertical axis.) Then there is also a subgroup generated by the rotation [[1,2,3,4,5]] - the clockwise rotation that moves 1 to 2, 2 to 3, 3 to 4, 4 to 5, and 5 to 1. Finally, [] is the subgroup with no generators, which by convention is the group consisting of just the single element 1.

So can we break a group down into its subgroups? Well, not quite. When we break something down, we should end up with two (or more) parts. Subgroups are parts. But we may not always be able to find another matching part to make up the group.

I'm probably being a bit cryptic. Let's consider an analogy. We know that 15 = 3*5. We also have 5 = 15/3. So 15 = 3 * (15/3). Given a group G, and a subgroup H, we would like to be able to build G as G = H * (G/H). I'm not going to dwell on the * in this statement. What about the / ?

Well, it turns out that we can form a quotient G/H of a group by a subgroup - but only if the subgroup satisfies some extra conditions.

Given any subsets X and Y of G, we can define a product XY:

> xs -*- ys = toListSet [x*y | x <- xs, y <- ys]

For X or Y consisting of just a single element, we can form the products xY = {x}Y, or Xy = X{y}:

> xs -*  y  = L.sort [x*y | x <- xs] -- == xs -*- [y]
> x   *- ys = L.sort [x*y | y <- ys] -- == [x] -*- ys

Now it turns out that we can use this multiplication to define a quotient G/H. We let the elements of G/H be the (right) "cosets" Hg, for g in G. For example:

> let hs = [1, p [[1,2],[3,5]]]
> let x = p [[1,2,3,4,5]]
> let y = p [[2,5],[3,4]]
> hs -* x
> hs -* y

Here we have taken a subgroup H in D10, and then found the cosets Hx, Hy, for a couple of elements x, y in D10.

Now, to get our quotient group working, what we'd like is to have (Hx) (Hy) = H(xy). Unfortunately, this isn't guaranteed to be true. For example:

> (hs -* x) -*- (hs -* y)
> hs -* (x*y)

Oh dear. But there are some subgroups for which this construction does work. Suppose that we had a subgroup K, such that for all g in G, g^-1 K g = K. Then Kx Ky = Kx (x^-1Kx)y = KKxy = Kxy. (KK=K follows from the fact that K is a subgroup, hence closed.)

A subgroup K of G such that g^-1 K g = K for all g in G is called a normal subgroup. For a normal subgroup, we can form the quotient G/K, and so we have broken G down into two parts, K and G/K.

Here's the code:

> isNormal gs ks = all (== ks') [ (g^-1) *- ks' -* g | g <- gs]
>     where ks' = elts ks

> normalSubgps gs = filter (isNormal gs) (subgps gs)

> quotientGp gs ks
>     | isNormal gs ks = gens $ toSn [action cosetsK (-* g) | g <- gs]
>     | otherwise = error "quotientGp: not well defined unless ks normal in gs"
>     where cosetsK = cosets gs ks

> gs // ks = quotientGp gs ks

For example:

> mapM_ print $ normalSubgps $ _D 10

Two of these are "trivial". [] is the subgroup {1}, which will always be normal. The last subgroup listed is D10 itself. A group will always be a normal subgroup of itself. So the only "proper" normal subgroup is [ [[1,2,3,4,5]] ] - the subgroup of rotations of the pentagon.

In this case, there are only two cosets:

> mapM_ print $ cosets (_D 10) [p [[1,2,3,4,5]]]

Not surprisingly then, the quotient group is rather simple:

> _D 10 // [p [[1,2,3,4,5]]]

Just to explain a little: In quotientGp gs ks, we form the cosets of ks, and then we look at the action of the gs on the cosets by right multiplication - g sends Kh to Khg. So the elements of the quotient group are permutations of the cosets. However, if we printed this out, it would be hard to see what was going on. So we call "toSn", which labels the cosets 1,2,..., and rewrites the elements as permutations of the numbers. In the case above, we see that the quotient group can be generated by a single element, which simply swaps the two cosets.

Let's look at another example. S 4 is the group of all permutations of [1..4] - which also happens to be the symmetry group of the tetrahedron. S 4 has lots of subgroups:

> mapM_ print $ subgps $ _S 4

However, only a few of them are normal:

> mapM_ print $ normalSubgps $ _S 4

The first two are 1 and S4 itself. The third is the group of rotations of the tetrahedron. The fourth consists of the identity together with the three rotations which move all four points.

Why are these subgroups normal, but the others aren't? Well, let's have a look at some of the ones that aren't.

We have a subgroup [ [[1,2]] ], and another [ [[1,3]] ], and several others that "look the same" - they just swap two points. We also have a subgroup [ [[1,2,3]] ], a subgroup [ [[1,2,4]] ], and several others that "look the same" - they just rotate three points while leaving the fourth fixed. In fact, in both cases we have a set of conjugate subgroups. What I mean is, for a subgroup to be normal, we required that g^-1 K g = K. If a subgroup is not normal, then we can find a g such that g^-1 H g /= H. Then g^-1 H g will also be a subgroup (exercise: why?), and it will "look the same" as H.

By contrast, a normal subgroup is often the only subgroup that "looks like that".

Okay, so we've seen how groups can be broken down into smaller groups. Now, what about "atoms"? Well, a group always has itself and 1 as normal subgroups. If it has other normal subgroups, then we can break it down as K, G/K. We can keep going, and see if either of these in their turn has non-trivial normal subgroups. Eventually, we will end up with groups which have no non-trivial normal subgroups. Such a group is called a simple group. This is a bit like a prime number - which has no other factors besides itself and 1. Mark Ronan, in Symmetry and the Monster, calls simple groups the "atoms of symmetry". They are the pieces out of which all symmetry groups are made.

(Note that starting from a group G, with normal subgroups K1, K2, ..., we have more than one choice of how to break it down. Luckily, it turns out that if we keep going, then at the end we will have the same collection of "atoms", no matter which order we chose.)

So, what do simple groups look like?

Well, first, here's some code to detect them:

> isSimple gs = length (normalSubgps gs) == 2

Then the finite simple groups are as follows:

- The cyclic groups of order p, p prime (the rotation group of a regular p-gon):

> _C n | n >= 2 = [p [[1..n]]]

For example:

> isSimple $ _C 5
> isSimple $ _C 6

Exercise: Why is C n simple when n is prime, and not when it is not?

- The alternating groups A n, n >= 5. (A n is the subgroup of S n consisting of those elements of S n which are "even" - see wikipedia.)

> isSimple $ _A 4
> isSimple $ _A 5

(As it happens, A4 is the group of rotations of the tetrahedron, and A5 is the group of rotations of the dodecahedron. Unfortunately, An, n>5, doesn't have any such intuitive interpretation.)

- The projective special linear groups PSL(n,Fq) (except for PSL(2,F2) and PSL(2,F3), which are not simple). This is the subgroup of PGL(n,Fq) consisting of projective transformations with determinant 1. (Recall that last time we looked at PΓL(n,Fq), the group of symmetries of the projective geometry PG(n-1,Fq), which includes both projective transformations and field automorphisms.)

- Several more infinite families of subgroups of PΓL(n,Fq), with geometric significance.

- Finally, there are 26 "sporadic" simple groups, which don't belong to any of the infinite families described above.

I should just say a thing or two about the classification of finite simple groups:
- The proof of the classification was a monster effort by a whole generation of group theorists. The proof runs to 10000 pages.
- The part that mathematicians find most interesting is the sporadic simple groups. In many mathematical classifications, one only finds infinite families (for example, the primes), so the sporadic groups feel like little miracles that have dropped out of the sky. Why are they there?

Two of the things I'm hoping to do in future blog posts are to describe and construct some of the other "simple groups of Lie type" (subgroups of PΓL(n,Fq)), and to describe and construct some of the sporadic simple groups.

Thursday, 8 October 2009

Symmetries of PG(n,Fq)

Previously in this blog we looked at the affine geometries AG(n,Fq), and their symmetries. Following that, we looked at the points and lines in the projective geometries PG(n,Fq). This week I want to look at the symmetries of PG(n,Fq).

However, first, something I forgot to mention last week. When we looked at the points in PG(n,Fq), we saw that they can be thought of as the points of AG(n,Fq), plus some additional points "at infinity". What I forgot to do last week was to discuss how the lines in PG(n,Fq) relate to the lines in AG(n,Fq).

Here's PG(2,F3) again:

Recall that the points in PG(2,F3) are (by definition) the lines through the origin in F3^3, each of which can be represented, as here, by one of the non-zero points on it. (Note that for technical reasons, the coordinates are shown as zyx instead of xyz.) We see that these "points" of PG(2,F3) consist of something looking very much like AG(2,F3) - the blue points - together with some additional points, which are said to be "at infinity".

Then, the lines of PG(2,F3) are (by definition) the planes through 0 in F3^3. What is the relationship between lines in AG(2,F3) and "lines" in PG(2,F3)? Well, any line in (the copy of) AG(2,F3) (embedded in PG(2,F3)) is contained in a plane through 0 in F3^3. So each line in AG(2,F3) gives rise to a line in PG(2,F3). In PG(2,F3), however, the line will have an additional point "at infinity".

For example, there is a line in AG(2,F3) consisting of the points {(0,0),(1,0),(2,0)}. These embed into PG(2,F3) as {(1:0:0), (1:1:0), (1:2:0)}. We can ask what is the closure in PG(2,F3) of these points - the least "flat" (point, line, plane, etc) that contains them:

> closurePG [ [1,0,0],[1,1,0],[1,2,0] ] :: [[F3]]

So in PG(2,F3), the line gains a point at infinity, (0:1:0).

Similarly, there is a line in AG(2,F3) consisting of the points {(0,1),(1,1),(2,1)}. These embed into PG(2,F3) as {(1:0:1), (1:1:1), (1:2:1)}.

> closurePG [ [1,0,1],[1,1,1],[1,2,1] ] :: [[F3]]

Notice that both lines from AG(2,F3) gained the same point at infinity. This is because the two lines were parallel. In PG(2,F3), "parallel lines meet at infinity".

So each line in AG(2,F3) extends to a line in PG(2,F3), gaining an extra point at infinity. In addition to these lines, there is one more line in PG(2,F3) - a line at infinity consisting of the points {(0:0:1), (0:1:0), (0:1:1), (0:1:2)}, corresponding to the plane z = 0 in F3^3.

So, that was all stuff that I should really have talked about last time, but I forgot.


Okay, so back to the symmetries of PG(n,Fq). So we consider PG(n,Fq) as an incidence structure between points and lines, and we ask which permutations of the points preserve collinearity. That is, given a permutation g of the points of PG(n,Fq), we say that it is a symmetry (or automorphism, or collineation) of PG(n,Fq) if, whenever a and b are collinear points, then so are a^g and b^g, and vice versa.

We can find the symmetries of PG(n,Fq) by forming its incidence graph. This is the bipartite graph having as vertices: the points of PG(n,Fq) on the left (or in blue), the lines of PG(n,Fq) on the right (or in red), with an edge joining a point vertex to a line vertex if the point is incident with the line.

Here's the code:

incidenceGraphPG n fq = G vs es where
    points = ptsPG n fq
    lines = linesPG n fq
    vs = L.sort $ map Left points ++ map Right lines
    es = L.sort [ [Left x, Right b] | b <- lines, x <- closurePG b]

Let's look at an example. Here's PG(2,F2):

The lines in PG(2,F2), corresponding to planes through 0 in F2^3, are:

> mapM_ (print . closurePG) $ linesPG 2 f2

PG(2,F2) is called the Fano plane, and is often represented by the picture below:

You can easily check that this is right. Notice the embedded AG(2,F2) in the bottom left, with parallel lines meeting "at infinity" as expected.

The incidence graph of the Fano plane is called the Heawood graph, and it is often represented like this:

The blue vertices are the points of the Fano plane, the red vertices are the lines. For each red vertex, you can check that the three blue vertices it is connected to are indeed a line of the Fano plane.

Then we can find the automorphisms of the Fano plane as follows:

> let heawood = incidenceGraphPG 2 f2
> let auts = incidenceAuts heawood
> mapM_ print auts
> orderSGS auts

Now the symmetries of PG(n,Fq) actually have a very simple description in terms of matrices. Consider the invertible matrices of degree n+1 over Fq - the general linear group GL(n+1,Fq). These matrices act on vectors in Fq^(n+1) by multiplication, and in doing so, they permute the lines and planes through 0 in Fq^(n+1), whilst preserving containment of lines within planes. In other words, they permute the points and lines of PG(n,Fq) and preserve collinearity. However, any scalar matrix within GL(n+1,Fq) - that is, kI, where k is in Fq\{0} and I is the identity - acts as the identity on PG(n,Fq) - since if (x0:x1:...:xn) is a "point" of PG(n,Fq), then (kx0:kx1:...:kxn) represents the same point. So the group of projective transformations of PG(n,Fq) is actually GL(n+1,Fq) factored out by scalar multiplications. This group is called the projective general linear group, PGL(n+1,Fq).

If we are given a symmetry of PG(n,Fq), expressed as a permutation, then we can express it as a matrix by looking at what it does to the basis vectors. For example, consider the symmetry [[[0,0,1],[0,1,0]],[[1,0,1],[1,1,0]]] of PG(2,F2) from above. It sends [1,0,0] to [1,0,0], [0,1,0] to [0,0,1], and [0,0,1] to [0,1,0]. So it can be represented by the matrix
[1 0 0]
[0 0 1]
[0 1 0]
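We can verify this matrix mechanically. A self-contained sketch, with vectors over F2 as Int lists and the arithmetic done mod 2, acting on row vectors (matching the convention above, where the permutation acts on the right):

```haskell
import Data.List (transpose)

-- row vector times matrix, with entries reduced mod 2 (i.e. over F2)
apply :: [Int] -> [[Int]] -> [Int]
apply v m = [sum (zipWith (*) v col) `mod` 2 | col <- transpose m]

-- the matrix worked out above
m :: [[Int]]
m = [[1,0,0],[0,0,1],[0,1,0]]
```

Then apply [0,1,0] m gives [0,0,1], apply [0,0,1] m gives [0,1,0], and apply [1,0,1] m gives [1,1,0] - matching the cycles of the permutation.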

We already worked out the order of GL(n,Fq) in the orderGL function. There are q-1 non-zero scalars to factor out by. So the order of PGL(n,Fq) will be given by the following:

orderPGL n q = orderGL n q `div` (q-1)

Let's test it (remembering that we want PGL(n+1,Fq) for PG(n,Fq)):

> orderPGL 3 2
> orderSGS $ incidenceAuts $ incidenceGraphPG 2 f3
> orderPGL 3 3
> orderSGS $ incidenceAuts $ incidenceGraphPG 2 f4
> orderPGL 3 4
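For reference, the counting behind orderGL: to build an invertible n*n matrix over Fq, each row in turn can be any vector not in the span of the previous rows, so |GL(n,q)| = (q^n - 1)(q^n - q)...(q^n - q^(n-1)). A self-contained version of the calculation above (primed names, to avoid clashing with the library's):

```haskell
-- |GL(n,q)|: row i (counting from 0) avoids the q^i vectors already spanned
orderGL' :: Integer -> Integer -> Integer
orderGL' n q = product [q^n - q^i | i <- [0..n-1]]

-- factor out the q-1 non-zero scalars
orderPGL' :: Integer -> Integer -> Integer
orderPGL' n q = orderGL' n q `div` (q - 1)
```

This gives orderPGL' 3 2 = 168, orderPGL' 3 3 = 5616 and orderPGL' 3 4 = 60480; doubling the last for the field automorphism of F4 gives the 120960 automorphisms found for PG(2,F4).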

With F4, we get twice as many automorphisms as we might have expected. If you've been paying attention, you'll remember that this is because F4 has a field automorphism (the Frobenius automorphism) of order 2. The extended group, PGL(n,Fq) plus field automorphisms, is called PGammaL(n,Fq) (or better - but I'm not sure this will work for everybody - PΓL(n,Fq)).

Now, this is kind of a trivial point, but I think it's significant: PG(n,Fq) has more symmetries than AG(n,Fq). Somehow, by adding the points at infinity, PG(n,Fq) has made AG(n,Fq) "whole", making it more symmetrical.

Anyway, that'll do for now. Next week, just a few more odds and ends about finite geometries.

Wednesday, 30 September 2009

Finite geometries, part 4: Lines in PG(n,Fq)

[Apologies about the formatting - blogger seems to have won the struggle this time.]

Last time we saw that the points in the projective geometry PG(n,Fq) are defined to be the lines (through the origin) in the vector space Fq^(n+1). What about lines in PG(n,Fq)? Well, it's simple - they are defined as the planes (through 0) in Fq^(n+1).

Let's look at an example, to try to clarify our intuitions. Here are the points of PG(2,F3) again.

> :load Math.Combinatorics.FiniteGeometry
> ptsPG 2 f3

(Recall that each point in PG(2,F3) represents a line through 0 in F3^3. We have chosen the representative which has 1 as its first non-zero coordinate. This has the consequence that the coordinates in the diagram are best read back-to-front - they're zyx instead of xyz.)

So the lines in PG(2,F3) correspond to the planes through 0 in F3^3. For example, the plane x = 0 (the left hand face of the cube) gives rise to a line in PG(2,F3) containing the points 010, 100, 110, 120. The plane x = z (a diagonal cut through the cube from front left to back right) gives rise to the line containing the points 010, 101, 111, 121. Just to be clear, we are only looking at the planes in F3^3 that contain 0. So for example, 012, 101, 111, 121 - the plane x+z=2 - is not a line in PG(2,F3). Also, remember that F3 wraps around - so for example 010, 102, 112, 122 is a line in PG(2,F3).

Okay, onwards. As we did with AG(n,Fq), we can represent a line in PG(n,Fq) by any pair of points on it. We would then like to be able to do the following:
  1. Given a line of PG(n,Fq), list the points of PG(n,Fq) which are on it
  2. Given a line of PG(n,Fq), find a canonical representation for it
  3. Find all lines of PG(n,Fq)
(These are basically just the same things that we did when we looked at AG(n,Fq) a couple of weeks back.)

Okay, so suppose that we are given two distinct points of PG(n,Fq), representing a line, and we are asked to list all the points of PG(n,Fq) that are on the line. Well, the two points are also points in Fq^(n+1), and as they are distinct in PG(n,Fq), they are linearly independent in Fq^(n+1) - hence they generate a plane through 0. We are being asked to find all lines through 0 that are contained in the plane. Well, that's easy:
  1. Generate all linear combinations of the two points - all points in the plane in Fq^(n+1).
  2. Each non-zero point in the plane represents a line through 0, and hence a point of PG(n,Fq)
  3. However, we can filter out all except those that are in the canonical form for pts of PG(n,Fq) - having a 1 as their first non-zero coordinate - in order to avoid double counting the lines.
So we need an auxiliary function to detect whether a point is in canonical form - what I call "projective normal form" or PNF:

ispnf (0:xs) = ispnf xs
ispnf (1:xs) = True
ispnf _ = False

Then, given two distinct points, to list all points on the line they generate:

linePG [p1,p2] = toListSet $ filter ispnf [(a *> p1) <+> (b *> p2) | a <- fq, b <- fq]
    where fq = eltsFq undefined

(Recall that c *> v is scalar multiplication of a vector, and u <+> v is addition of vectors. Note that in HaskellForMaths <= 0.1.9, you must use the more general "closurePG" function instead, which has the same behaviour.)

For example, here are the two lines of PG(2,F3) that we discussed earlier:

> linePG [[0,1,0],[1,0,0]] :: [[F3]]
> linePG [[0,1,0],[1,0,1]] :: [[F3]]

A line in PG(n,Fq) can be represented by any pair of distinct points on the line. Is there a canonical pair to pick? Yes, there is. Given any pair of points of PG(n,Fq), we can consider them as the rows of a matrix. Any matrix which can be obtained from this matrix by elementary row operations represents the same plane in Fq^(n+1). For our canonical form, we can take the reduced row echelon form for the matrix (see wikipedia). However, I'm not going to dwell on this - use the "reducedRowEchelonForm" function if you'd like to experiment.

What I'd like to consider instead is how we can list all the lines in PG(n,Fq). The answer is simply to list all the reduced row echelon forms. We can do this in two steps:
  1. List all the reduced row echelon "shapes"
  2. Fill in all possible combinations of values from Fq, to get all reduced row echelon forms
An example will make this clearer. Suppose we want to find the lines in PG(3,Fq). Thus we are looking for the 2-dimensional subspaces of Fq^4. The reduced row echelon shapes are the following:

[1 0 * *] [1 * 0 *] [1 * * 0]
[0 1 * *] [0 0 1 *] [0 0 0 1]

[0 1 0 *] [0 1 * 0] [0 0 1 0]
[0 0 1 *] [0 0 0 1] [0 0 0 1]

To get the reduced row echelon forms, we take each shape in turn, and fill in the stars with the values from Fq in all possible ways.

The Haskell code finds row echelon shapes for k-dimensional subspaces of Fq^n. (For lines, set k = 2):

data ZeroOneStar = Zero | One | Star deriving (Eq)

instance Show ZeroOneStar where
    show Zero = "0"
    show One  = "1"
    show Star = "*"

rrefs n k = map (rref 1 1) (combinationsOf k [1..n]) where
    rref r c (x:xs) =
        if c == x
        then zipWith (:) (oneColumn r) (rref (r+1) (c+1) xs)
        else zipWith (:) (starColumn r) (rref r (c+1) (x:xs))
    rref _ c [] = replicate k (replicate (n+1-c) Star)
    oneColumn r = replicate (r-1) Zero ++ One : replicate (k-r) Zero
    starColumn r = replicate (r-1) Star ++ replicate (k+1-r) Zero

Thus we can calculate the shapes we saw above as follows:

> mapM_ print $ rrefs 4 2

Next, to find all lines in PG(n,Fq), we need to substitute values from Fq for the stars. The following code does the trick, although I suspect that there might be a better way to write it:

flatsPG n fq k = concatMap substStars $ rrefs (n+1) (k+1) where
    substStars (r:rs) = [r':rs' | r' <- substStars' r, rs' <- substStars rs]
    substStars [] = [[]]
    substStars' (Star:xs) = [x':xs' | x' <- fq, xs' <- substStars' xs]
    substStars' (Zero:xs) = map (0:) $ substStars' xs
    substStars' (One:xs) = map (1:) $ substStars' xs
    substStars' [] = [[]]

linesPG n fq = flatsPG n fq 1

For example:

> mapM_ print $ linesPG 2 f3

It occurs to me that I should explain why PG(n,Fq) is worth looking at. Well, I mentioned before that the symmetry groups of PG(n,Fq) are some of the "atoms of symmetry", from which all symmetry groups are composed. In addition, projective geometry is in fact more fundamental than affine geometry (at least from the point of view of algebraic geometry, although I'm not sure how you would justify that statement in general). Well, maybe you'll just have to take my word for it that PG(n,Fq) is a beautiful thing, for the moment.

Next time, symmetries of PG(n,Fq).

Friday, 25 September 2009

Finite geometries, part 3: Points in PG(n,Fq)

[New release HaskellForMaths-0.1.8 - contains bugfix to Fractional instance for ExtensionField, plus documentation improvements.]

Over the last two weeks, we have looked at the finite affine geometries AG(n,Fq). Hopefully, this has all been fairly straightforward - affine geometry is just the familiar Euclidean geometry of points and lines, but without angles or distances, and all we have done is replace the reals R by the finite fields Fq.

This week we're going to look at finite projective geometries. Now, I know from my own experience that it can take a little while to "get" projective geometry. There are probably several reasons for this, but one of them is that mathematicians define projective geometry one way, but then most of the time think about it in a different way. So, I'll do my best to explain it, but don't worry if you don't get it at first, one day it will all click into place.

Okay, so let's start with the points. The points of PG(n,Fq) are defined to be the lines through the origin (ie the one-dimensional subspaces) in Fq^(n+1). That should sound a bit strange at first reading - how can the points (in PG(n,Fq)) be lines (in Fq^(n+1))? Bear with me - we'll see why it makes sense to think of them as points.

For example, the points of PG(2,F3) are the lines through 0 in F3^3. So the line {(0,0,0), (0,1,0), (0,2,0)} is a point of PG(2,F3), as is the line {(0,0,0), (1,0,2), (2,0,1)}. (Recall that arithmetic in F3 is modulo 3.) As a line in F3^3 is a one-dimensional subspace, it is generated by any non-zero point on the line, and consists of all scalar multiples of such a point. So we could represent the two lines we just mentioned as <(0,1,0)> and <(1,0,2)>, with angle brackets meaning "the line generated by". Any non-zero point on the line will do to represent the line, but it would be good to have a way of choosing a canonical representative. For the moment, let's say that we will choose the point whose last non-zero coordinate is 1. Here, then, are the points of PG(2,F3):
So the blue points are the points of the form (x,y,1). Every point of the form (x,y,1) represents a line through 0 in F3^3. However, this is not all the lines through 0, for it misses out those lines which are in the plane z = 0. The green points are the points of the form (x,1,0). Each point of the form (x,1,0) represents a line through 0 in the plane z = 0. However, this is still not all, for we are missing the line y = z = 0. This is represented by the red point (1,0,0). And that's it - these are all the "points" of PG(2,F3). (Confirm for yourself that every line through zero in F3^3 is represented by one of these "points".)

Since any scalar multiple of a point in F3^3 represents the same line through 0, and hence the same point of PG(2,F3), mathematicians often write the representatives as (x:y:z), with the colons indicating that it is only the ratios we are interested in. Thus (0:1:0) and (0:2:0) represent the same line in F3^3, and hence the same point of PG(2,F3). (Note that (0:0:0) is not a point of PG(2,F3).)

Okay, so I mentioned that mathematicians define PG(n,Fq) one way, but then think about it another way. So how do mathematicians think about it?

Well, the blue points in PG(2,F3) look just like a copy of AG(2,F3), don't they? Then the green points are called "the line at infinity". Finally, the red point is called "the point at infinity". So mathematicians think of PG(2,F3) as being like AG(2,F3), but with some additional points "at infinity". In particular, although PG(2,F3) was defined in terms of lines through 0 in F3^3, mathematicians actually think of it as consisting of points (not lines).

The idea that the additional points are "at infinity" comes from perspective drawing. Imagine an artist sitting in front of a canvas, looking along the z-axis. Their eye is the origin. The canvas is the plane {(x,y,1)}. To draw what they see, the artist needs to project a line from their eye, through the canvas, until it hits something. Given a large enough canvas, the artist can draw anything that is in front. However, things that are above or to the sides would have to be drawn "at infinity".

One thing I should emphasize is that, contrary to appearance, the line and point at infinity are no different from the other points. Indeed, I could have had my artist looking along the x-axis instead of the z-axis. In that case, the embedded affine plane would have been the points {(1,y,z)}, the line at infinity would have been {(0,1,z)}, and the point at infinity would have been (0,0,1). If we revert to thinking about lines in F3^3 for a moment, it should be clear that the points at infinity are not distinguished in any way from other lines in F3^3. Their appearance of being distinguished has arisen solely from our choice of coordinate system.

Okay, time for some code. In the above, we said that the canonical representative for a line through 0 would be the point with 1 as its last non-zero coordinate - corresponding to looking along the z-axis. That made things easier to explain. However, it turns out that it's usually more convenient to choose the point with 1 as its first non-zero coordinate (ie, looking along the x-axis). Here is the code to list the points of PG(n,Fq):

ptsPG 0 _ = [[1]]
ptsPG n fq = map (0:) (ptsPG (n-1) fq) ++ map (1:) (ptsAG n fq)

For example:

> ptsPG 2 f3
> map reverse it

If we reverse the coordinates, you can see that we have the point at infinity, the line at infinity, and the embedded copy of AG(2,F3), as before.
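If you want to play with this without the library to hand, here is a self-contained sketch of the same recursion, with the plain list [0,1,2] standing in for f3 (no field arithmetic is needed just to enumerate the points):

```haskell
-- Standalone sketch of ptsAG/ptsPG; [0,1,2] stands in for the field F3.
ptsAG :: Int -> [a] -> [[a]]
ptsAG n fq = sequence (replicate n fq)

-- Canonical representatives for PG(n,Fq): first non-zero coordinate is 1.
ptsPG :: Num a => Int -> [a] -> [[a]]
ptsPG 0 _  = [[1]]
ptsPG n fq = map (0:) (ptsPG (n-1) fq) ++ map (1:) (ptsAG n fq)

main :: IO ()
main = do
    mapM_ print (ptsPG 2 [0,1,2 :: Int])
    -- PG(n,Fq) has (q^(n+1) - 1) / (q - 1) points; here (27 - 1) / 2 = 13
    print (length (ptsPG 2 [0,1,2 :: Int]))
```

The count 13 falls out of the definition: the q^(n+1) - 1 non-zero vectors of Fq^(n+1) fall into lines of q - 1 scalar multiples each.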

That's probably enough to take in at one sitting. Next time, we'll look at lines in PG(n,Fq).

Friday, 18 September 2009

Finite geometries, part 2: Symmetries of AG(n,Fq)

Last time, we met the finite affine geometries AG(n,Fq), which are analogous to n-dimensional Euclidean geometry, but defined over the finite fields Fq instead of the reals R. This time I want to talk about the symmetries of AG(n,Fq).

Recall that when we were looking at graphs, we defined a symmetry of a graph as a permutation of the vertices which left the edges (collectively) in the same places. For the moment, we will think of a finite geometry as a configuration of points and lines, and define a symmetry as a permutation of the points which leaves the lines (collectively) in the same places.

Here's AG(2,F3) again:

There's an obvious four-fold rotational symmetry, which moves all the points except the middle point, moves all the lines, but leaves the lines collectively in the same places. Less obvious perhaps is that reflection in the middle line, or translating everything one point to the right (with wraparound), are also symmetries. Remember that the straightness or otherwise of the lines doesn't matter - the only thing we're interested in is which points they are incident with.

So how can we find all the symmetries? Well, when we were looking at graphs, we used depth-first search, trying to create a pairing of source and target vertices, and backtracking whenever the adjacency between the source vertices didn't match the adjacency between the target vertices. Unfortunately, this method doesn't easily generalize to finite geometries. Exercise: Why not?

Instead, we will change the problem into a problem about graphs. Given a collection of points and lines, we can create an "incidence graph". This is the bipartite graph constructed as follows. On the left, we have a vertex for every point. On the right, we have a vertex for every line. We have an edge joining a point on the left to a line on the right just in the case that the point is incident with the line in our geometry.

Here's the code to create the incidence graph of AG(n,Fq):
incidenceGraphAG n fq = G vs es where
    points = ptsAG n fq
    lines = linesAG n fq
    vs = L.sort $ map Left points ++ map Right lines
    es = L.sort [ [Left x, Right b] | b <- lines, x <- closureAG b]

(Recall that a line is represented by two distinct points on it. "closureAG" calculates all the points on the line.)
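To see the sizes involved, here is a hedged, self-contained model of the same construction for AG(2,F3), with the library's vector types replaced by plain Ints mod 3 (the names `closure` and `lines'` are ad hoc, not the library's):

```haskell
import qualified Data.List as L

-- F3 vectors as lists of Ints, with arithmetic done mod 3
type Pt = [Int]

points :: [Pt]
points = [ [x,y] | x <- [0..2], y <- [0..2] ]

-- all points on the line through two distinct points (the role of closureAG)
closure :: [Pt] -> [Pt]
closure [p1,p2] =
    L.sort [ [ (a + c*(b - a)) `mod` 3 | (a,b) <- zip p1 p2 ] | c <- [0,1,2] ]
closure ps = ps

-- a line is represented by the first two of its points
lines' :: [[Pt]]
lines' = [ [x,y] | (x:ys) <- L.tails points, y <- ys
                 , [x,y] == take 2 (closure [x,y]) ]

-- the bipartite incidence graph: points on the Left, lines on the Right
vertices :: [Either Pt [Pt]]
vertices = L.sort (map Left points ++ map Right lines')

edges :: [[Either Pt [Pt]]]
edges = L.sort [ [Left p, Right l] | l <- lines', p <- closure l ]

main :: IO ()
main = do
    print (length vertices)  -- 9 points + 12 lines = 21 vertices
    print (length edges)     -- 12 lines with 3 points each: 36 incidence edges
```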

Now, the trick is, that we can find the symmetries of the finite geometry by finding the symmetries of its incidence graph. For clearly, any symmetry of the finite geometry permutes the points among themselves, permutes the lines among themselves, and preserves incidence - hence it gives rise to a symmetry of the incidence graph. So in principle, we can find the symmetries of the finite geometry by doing the following:

> mapM_ print $ graphAuts $ incidenceGraphAG 2 f3
[[Left [0,0],Left [0,1]],[Left [1,0],Left [2,2],Left [1,1],Left [2,1],Left [1,2],Left [2,0]],[Right [[0,0],[1,0]],Right [[0,1],[1,0]],Right [[0,0],[1,1]],Right [[0,1],[1,1]],Right [[0,0],[1,2]],Right [[0,1],[1,2]]],[Right [[0,2],[1,0]],Right [[0,2],[1,2]],Right [[0,2],[1,1]]],[Right [[1,0],[1,1]],Right [[2,0],[2,1]]]]
[[Left [0,0],Left [0,2],Left [0,1]],[Left [2,0],Left [2,1],Left [2,2]],[Right [[0,0],[1,0]],Right [[0,2],[1,0]],Right [[0,1],[1,0]]],[Right [[0,0],[1,1]],Right [[0,2],[1,1]],Right [[0,1],[1,1]]],[Right [[0,0],[1,2]],Right [[0,2],[1,2]],Right [[0,1],[1,2]]]]

However, there are a couple of minor problems with this:
  1. We only really want to know what happens to the points. What happens to the lines follows from that. So we only want to know about the Left parts, not the Right parts. (Furthermore, if we were to show only the Left part, we then wouldn't need all those Lefts and Rights which are cluttering up the output.)
  2. It is just possible that the incidence graph might have some symmetries which interchange points and lines. This is a very interesting situation, which we'll discuss in due course. But it is a symmetry of the graph which does not correspond to a symmetry of the geometry.
For these reasons, HaskellForMaths provides an "incidenceAuts" function, which works just like graphAuts, except that it knows that the input is an incidence graph, so it avoids wasting time looking for point-line crossover symmetries, and outputs only the permutations of the points.

> mapM_ print $ incidenceAuts $ incidenceGraphAG 2 f3

This is a strong generating set for the symmetries of AG(2,F3). We could now go on to investigate further. We could ask what different types of symmetries there are, using the conjClassReps function, or how many symmetries there are, using the orderSGS function.

For example, among the elements of the SGS, some have obvious interpretations:

[[[1,0],[2,0]],[[1,1],[2,1]],[[1,2],[2,2]]] - remember that this means that it swaps [1,0] and [2,0], swaps [1,1] and [2,1], and swaps [1,2] and [2,2]. So it can be thought of as a reflection in the line x = 0, or as a 2* stretch of the x-coordinate. It is given by the matrix [[2,0],[0,1]]. (In F3, 2 = -1, which is why this can be thought of as either a reflection or a stretch.)

[[[1,0],[1,1],[1,2]],[[2,0],[2,2],[2,1]]] is the shear given by the matrix [[1,1],[0,1]].
[Later: You might question whether a shear is a symmetry. We have defined a symmetry informally as a change that leaves things looking the same. But, for example, a shear in R^2 doesn't leave the unit square looking the same. Is it really a symmetry? Well, when we're studying geometries within combinatorics, we're only interested in the incidence between points and lines, and not, for example, distances. From this point of view, a shear is indeed a symmetry, because it preserves incidence. We will later look at what happens when we require symmetries to preserve more than just incidence.]

[[[0,0],[1,0],[0,1]],[[0,2],[2,0],[2,2]],[[1,1],[2,1],[1,2]]] cannot be represented by a 2*2 matrix, since it moves the origin. However, it can be represented by a 3*3 matrix, as follows:
[x']   [2 2 1] [x]
[y'] = [1 0 0] [y]
[ 1]   [0 0 1] [1]
This is the same as saying:
[x'] = [2 2] [x] + [1]
[y']   [1 0] [y]   [0]
So we see that it is a composite of some sort of shear, followed by a translation.
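We can check this by hand. The matrix says the map is (x,y) -> (2x + 2y + 1, x) over F3; a quick standalone sketch confirms that it reproduces the three 3-cycles listed above:

```haskell
-- The affine map read off from the 3*3 matrix: (x,y) -> (2x + 2y + 1, x) mod 3
affMap :: (Int, Int) -> (Int, Int)
affMap (x, y) = ((2*x + 2*y + 1) `mod` 3, x `mod` 3)

main :: IO ()
main = do
    print (affMap (0,0), affMap (1,0), affMap (0,1))  -- the first 3-cycle
    print (affMap (0,2), affMap (2,0), affMap (2,2))  -- the second
    -- every point returns home after three applications, so it has order 3
    print (all (\p -> affMap (affMap (affMap p)) == p)
               [ (x,y) | x <- [0..2], y <- [0..2] ])
```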

Indeed, we could conjecture that all symmetries of AG(n,Fq) are of this form - a linear transformation followed by a translation. Let's see whether we can confirm this conjecture by counting symmetries.

We have seen that some symmetries can be represented by a 2*2 matrix. In fact, the matrix must be non-singular, otherwise we won't have a permutation. The group of 2*2 non-singular matrices over Fq is the general linear group GL(2,Fq). How many elements does it have? Well, for the first row, we can choose any non-zero vector in Fq^2, of which there are q^2-1. For the second row, we can choose any vector in Fq^2 which is not linearly dependent on the one we already chose, of which there are q^2-q. Generalising to GL(n,Fq), the following code calculates the number of elements of GL(n,Fq):
orderGL n q = product [q^n - q^i | i <- [0..n-1] ]

Then, in addition to these linear transformations, all of which leave the origin fixed, we can do a translation. The number of translations of Fq^n is of course q^n.

A transformation consisting of a translation and a linear transformation is called an affine transformation. The group of all such is called the affine group, Aff(n,Fq). It is straightforward to calculate the number of elements of this group:
orderAff n q = q^n * orderGL n q
(Note that by multiplying the number of translations by the number of linear transformations, I have made an assumption that they are "semi-independent" of one another. This assumption is valid, but at this stage I don't want to spell out exactly what it means.)
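Both counting functions run as-is; here they are together as a self-contained snippet, with a couple of spot checks:

```haskell
-- |GL(n,Fq)|: the i-th row of an invertible matrix can be any vector
-- outside the span of the previous i rows, giving q^n - q^i choices.
orderGL :: Integer -> Integer -> Integer
orderGL n q = product [ q^n - q^i | i <- [0..n-1] ]

-- |Aff(n,Fq)|: q^n translations, times a linear part from GL(n,Fq)
orderAff :: Integer -> Integer -> Integer
orderAff n q = q^n * orderGL n q

main :: IO ()
main = do
    print (orderGL 2 3)   -- (9-1)*(9-3) = 48
    print (orderAff 2 3)  -- 9 * 48 = 432
```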

Okay, so our conjecture is that all symmetries of AG(n,Fq) are affine transformations. Let's test our conjecture:
> orderSGS $ incidenceAuts $ incidenceGraphAG 2 f3
> orderAff 2 3
> orderSGS $ incidenceAuts $ incidenceGraphAG 2 f4
> orderAff 2 4
Uh-oh - what's going on here? We seem to have twice as many symmetries as we expected.

It took me quite a while to figure this one out actually. What's going on?

Well, remember that when we looked at extension fields, I mentioned that there can be field automorphisms. (For example, in the complex numbers, we have complex conjugation.) When we looked at the finite fields Fq, for q = p^n a prime power, I mentioned the Frobenius automorphism x -> x^p. Okay, well what's going on is that the field automorphisms of Fq give rise to further symmetries of AG(n,Fq). If q = p^n, then the number of such automorphisms is n, so the total number of symmetries of AG(n,Fq) will be n * orderAff n q.

And now that really does account for all symmetries of AG(n,Fq). Apart from the following exceptions:
> orderSGS $ incidenceAuts $ incidenceGraphAG 3 f2
> orderAff 3 2

What's going on here? Well, the problem is that AG(n,F2) is degenerate. A line in AG(n,F2) has just two points - and every pair of points forms a line. So in fact, AG(n,F2) is just the complete graph K(2^n), and hence has (2^n)! symmetries.
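The mismatch is easy to exhibit numerically (orderGL and orderAff are repeated from above so the snippet stands alone):

```haskell
-- AG(n,F2) is degenerate: every pair of points is a line, so its symmetry
-- group is the full symmetric group on the 2^n points, of order (2^n)!.
orderGL, orderAff :: Integer -> Integer -> Integer
orderGL n q  = product [ q^n - q^i | i <- [0..n-1] ]
orderAff n q = q^n * orderGL n q

factorial :: Integer -> Integer
factorial m = product [1..m]

main :: IO ()
main = do
    print (factorial (2^3))  -- (2^3)! = 40320 symmetries of AG(3,F2)
    print (orderAff 3 2)     -- versus only 1344 affine transformations
```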

That's it for now. Next time, finite projective geometries.

Sunday, 13 September 2009

Finite geometries, part 1: AG(n,Fq)

Time to recap a little. Through June and July, we looked at graphs and their symmetries. Then through August, we looked at finite fields. Now I want to look at finite geometries. Why?

Well, it's a continuation of my earlier investigation of symmetry. I chose graphs as the starting point because they're easy to understand. However, graphs are not the only combinatorial structure having symmetry - far from it. Finite geometries are important for several reasons:
1. We'll see that all symmetry groups turn out to be composed out of "atoms of symmetry", called "simple groups". There are several infinite families of finite simple groups - and a few "sporadic" ones that don't fit into any family. One of the infinite families consists of the symmetries of finite projective geometries, which we will look at in due course. (I got the term "atoms of symmetry" from Ronan, Symmetry and the Monster, which is a nice easy read on this stuff.)
2. The symmetry groups of infinite geometries are very important in physics (especially particle physics). So if you like, you can pretend that what we're looking at here will help you understand physics.

This time, I want to look at affine geometries, but what we're aiming towards is projective geometries.

Within mathematics, there are probably many different definitions of geometry. Within combinatorics, we're just interested in the combinatorial structure, so roughly speaking, a geometry is just a collection of points, lines, planes and so on, together with a relation, called "incidence", saying which points are on which lines, and so on. Most of the time it's natural to think of a line as just being a set of points, in which case we can think of the incidence relation as just being the membership relation. However, this isn't always the best way to represent things in the code (or in the maths), as we shall see.

The most familiar geometry is the Euclidean plane, R^2. In this geometry, every pair of lines meets in at most one point. Then of course, there are Euclidean spaces of three or more dimensions. There are also non-Euclidean geometries. For example, spherical geometry is geometry on the surface of a sphere, with the lines being great circles. In this case, every pair of lines meets in exactly two points.

If you know about vector spaces, then you might think that Euclidean geometry is basically about the vector space R^n, with points, lines, planes being the zero-, one-, two-dimensional subspaces, and so on. However, this isn't quite right, because a vector space has a distinguished point, the origin, and every subspace of a vector space contains the origin. In Euclidean geometry there is no such distinguished point - we can have points, lines, planes which don't contain the origin. So we need to take the vector space R^n, and then "forget" the origin. If we do this, we get what is called an affine space. The subspaces of an affine space are the subspaces of the underlying vector space and their translates. The collection of points, lines, planes, and so on in an affine space, together with the incidence relation, is called an affine geometry.

We can construct finite affine geometries simply by replacing R by a finite field Fq in the above. In this way, we obtain the affine geometries AG(n,Fq). So in AG(n,Fq), we will have points, lines, planes and so on, but there will be only a finite number of each. Let's start with the points. Well, that's easy. The points of AG(n,Fq) are just the points Fq^n. The HaskellForMaths code goes like this:

ptsAG 0 fq = [[]]
ptsAG n fq = [x:xs | x <- fq, xs <- ptsAG (n-1) fq]

Actually, I think the stylish way to do this in Haskell would be:

ptsAG n fq = sequence $ replicate n fq

In any case, here's an example:

> :load Math.Combinatorics.FiniteGeometry
> ptsAG 2 f3

In other words, the points of AG(2,F3) form a 3*3 grid.
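For a quick check without the library, the one-liner version of ptsAG runs against a plain list standing in for f3:

```haskell
-- sequence on a list of lists computes the cartesian product,
-- i.e. all n-tuples of field elements
ptsAG :: Int -> [a] -> [[a]]
ptsAG n fq = sequence (replicate n fq)

main :: IO ()
main = mapM_ print (ptsAG 2 [0,1,2 :: Int])  -- the nine points of the 3*3 grid
```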

What about the lines? Well, the points of AG(2,F3) are the vector space F3^2. The lines are just the one-dimensional subspaces of this vector space, and their translates. Here's a picture:

Don't worry about the fact that the lines aren't straight. All we're concerned about is which points are incident with which lines. You might just want to convince yourself that every one-dimensional subspace of F3^2 is shown, as well as every translate.

Hopefully this picture makes the analogy between finite geometries and graphs pretty obvious. Points are analogous to vertices; lines are analogous to edges. The main difference is that unlike edges, lines can have more than two points on them. Indeed, if we consider just the points and lines (and not planes, hyperplanes, etc), then a finite geometry gives rise to a "hypergraph" - a graph, but where the "hyperedges" are allowed to contain more than two points. However, finite geometries have some additional structure, since they can also contain planes, hyperplanes etc.

Anyway, we want to calculate the symmetries of finite geometries, and in order to do that, we're going to need to work out what the lines are. First, let's think about how to represent the lines in code. There are several ways we could do it, including at least the following:
- We could represent a line as a list/set of the points on it. We will occasionally want to list all the points on a line, but as a way to represent a line, it's not very efficient.
- A line in the plane has an equation ax+by=c. We could represent a line by the triple (a,b,c). However, that would only work in two dimensions.
- We could represent a line by any pair of distinct points on the line. This will work in any number of dimensions, and will generalise to planes (represented by three points), and so on.

This last approach is the one we will take. So we would like two functions:
- given a pair of points in AG(n,Fq), return all points on the line that they generate
- given n, Fq, return all lines in AG(n,Fq) (represented as a pair of points)

The first is straightforward:

lineAG [p1,p2] = L.sort [ p1 <+> (c *> (p2 <-> p1)) | c <- fq ] where
    fq = eltsFq undefined

There are a couple of unfamiliar things here. <+>, <->, *> are the HaskellForMaths functions for adding two vectors, subtracting, and multiplying by a scalar. The second line is just a little bit of phantom type trickery, using type inference to magic up the elements of Fq. Note that the points are returned sorted.

Now to find all the lines. The naive way to do this is as follows:
- list all pairs of points in AG(n,Fq)
- but any pair of points on a line generates the same line, so each line will be listed many times
- so keep only those pairs of points which are the first two points on the line

Here's the code:

linesAG1 n fq = [ [x,y] | [x,y] <- combinationsOf 2 (ptsAG n fq),
                          [x,y] == take 2 (lineAG [x,y]) ]
(This is not a very efficient way to find the lines, but we don't yet have the background to do it more efficiently.)
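Here's a throwaway standalone version for AG(2,F3), with the phantom-type trickery replaced by explicit arithmetic mod 3 (the primed names are just to avoid clashing with the library's):

```haskell
import qualified Data.List as L

-- lineAG, specialised to F3: p1 + c*(p2 - p1) for c = 0, 1, 2, coordinatewise
lineAG' :: [[Int]] -> [[Int]]
lineAG' [p1,p2] =
    L.sort [ [ (a + c*(b - a)) `mod` 3 | (a,b) <- zip p1 p2 ] | c <- [0,1,2] ]
lineAG' ps = ps

-- keep a pair of points only if it is the first two points of its line
linesAG1' :: [[[Int]]]
linesAG1' = [ [x,y] | (x:ys) <- L.tails pts, y <- ys
                    , [x,y] == take 2 (lineAG' [x,y]) ]
  where pts = [ [u,v] | u <- [0..2], v <- [0..2] ]

main :: IO ()
main = do
    mapM_ (print . lineAG') linesAG1'
    print (length linesAG1')  -- AG(2,F3) has 12 lines of 3 points each
```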

For example:

> mapM_ print $ linesAG1 2 f3

Or we can list all the points on all the lines:

> mapM_ (print . lineAG) $ linesAG1 2 f3

You can verify for yourself that these are the lines shown in the picture above.

That'll do for now. Next time we'll look at how to work out the symmetries of AG(n,Fq).

Wednesday, 2 September 2009

Finite fields, part 2

Okay, so last time, we saw how to extend Q, the field of rational numbers, by adjoining a new element which is the zero of a polynomial in Q[X]. For example, we can adjoin sqrt 2, a zero of X^2-2, to obtain the field Q(sqrt2).

The time before that, we saw that for each prime p, there is a finite field Fp, consisting of the set {0,1,...,p-1}, with the arithmetic operations carried out modulo p.

This week we're going to bring these two ideas together, and look at algebraic extensions of Fp.

Algebraic extensions of Fp work just the same way as algebraic extensions of Q. First, find an irreducible polynomial over the base field K - that means, a polynomial f(x) in K[x], which can't be expressed as f(x) = g(x) * h(x), for g(x), h(x) in K[x] of lower degree. For example, whenever d is an element of K without a square root in K, then x^2-d is irreducible. Then, form the quotient ring K[x]/(f) - that means, work in K[x], but use the division with remainder algorithm to set f=0, which has the effect of making x act like a zero of f. For f(x)=x^2-d, this means x ends up acting like sqrt d.

For example, in F3, and also in F5, there is no square root of 2. So we can form the fields F3(sqrt2) or F5(sqrt2). We need to be a bit careful about terminology. 2 in F3, 2 in F5, and 2 in Q, are all different things - consequently sqrt2 in F3, in F5, and in Q are all different things. We mustn't make the mistake of thinking they're the same thing.

We saw last time that Q(sqrt2) consists of { a + b sqrt2 | a, b in Q }. Similarly, F5(sqrt2) consists of { a + b sqrt2 | a, b in F5 }. In particular, it follows that F5(sqrt2) has 25 elements. In general, a simple algebraic extension of Fp will have p^n elements, where n is the degree of the irreducible polynomial.

Over Q, we were able to form the fields Q(sqrt2), Q(sqrt3), Q(i), and so on. Somewhat surprisingly, over Fp, it turns out that any algebraic extension of degree n is isomorphic to any other. For example, F5(sqrt2) is isomorphic to F5(sqrt3). For this reason, we typically just talk about F25 (meaning, the field with 25 elements) - or in general, Fq, where q = p^n is a prime power.

However, in order to do arithmetic in F25 or in Fq, we clearly need to have some particular irreducible polynomial in mind. Otherwise, we won't know whether to reduce x^2 to 2, or to 3, for example. So for each p, n, we need to agree on an irreducible polynomial of degree n over Fp. There doesn't appear to be any natural way to choose, so an artificial way to choose has been devised, called the Conway polynomials.

So, using a little phantom type trickery as before, here are the first few finite fields of prime power order, as defined in the HaskellForMaths library:

data ConwayF4
instance PolynomialAsType F2 ConwayF4 where pvalue _ = convert $ x^2+x+1
type F4 = ExtensionField F2 ConwayF4
f4 = map Ext (polys 2 f2) :: [F4]
a4 = embed x :: F4

data ConwayF8
instance PolynomialAsType F2 ConwayF8 where pvalue _ = convert $ x^3+x+1
type F8 = ExtensionField F2 ConwayF8
f8 = map Ext (polys 3 f2) :: [F8]
a8 = embed x :: F8

data ConwayF9
instance PolynomialAsType F3 ConwayF9 where pvalue _ = convert $ x^2+2*x+2
type F9 = ExtensionField F3 ConwayF9
f9 = map Ext (polys 2 f3) :: [F9]
a9 = embed x :: F9

data ConwayF16
instance PolynomialAsType F2 ConwayF16 where pvalue _ = convert $ x^4+x+1
type F16 = ExtensionField F2 ConwayF16
f16 = map Ext (polys 4 f2) :: [F16]
a16 = embed x :: F16

data ConwayF25
instance PolynomialAsType F5 ConwayF25 where pvalue _ = convert $ x^2+4*x+2
type F25 = ExtensionField F5 ConwayF25
f25 = map Ext (polys 2 f5) :: [F25]
a25 = embed x :: F25

As we did with the fields Fp, we provide the functions f4, f8, f9, etc, which just return a list of the elements of the corresponding fields. a4, a8, a9 etc return the element that has been adjoined to the underlying field to make the extension field.

For example:

> f8
[0,a^2,a,a+a^2,1,1+a^2,1+a,1+a+a^2] :: [F8]
> a8 ^ 3
1+a :: F8

We can partially verify the claim that F25 = F5(sqrt2) = F5(sqrt3), by exhibiting square roots of 2 and 3 in F25:

> (2+a25)^2
2 :: F25
> (1+3*a25)^2
3 :: F25
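The same check can be reproduced by hand. Working modulo the Conway polynomial x^2+4x+2, we have a25^2 = a25 + 3 over F5, which gives a tiny standalone model of F25 (the pair (a,b) stands for a + b*a25):

```haskell
-- F25 modelled by hand: (a,b) stands for a + b*a25, where a25^2 = a25 + 3
-- (from the Conway polynomial x^2+4x+2 over F5), everything mod 5
type F25' = (Int, Int)

mul25 :: F25' -> F25' -> F25'
mul25 (a,b) (c,d) = ( (a*c + 3*b*d)     `mod` 5
                    , (a*d + b*c + b*d) `mod` 5 )

main :: IO ()
main = do
    print (mul25 (2,1) (2,1))  -- (2 + a25)^2 = 2, i.e. (2,0)
    print (mul25 (1,3) (1,3))  -- (1 + 3*a25)^2 = 3, i.e. (3,0)
```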

Last time, I mentioned that extension fields can have automorphisms, without really spelling out what that means. Well, if L is an extension of K, then f is an automorphism of the extension if for a,b in L, c in K, f(a+b) = f(a)+f(b), f(a*b) = f(a)*f(b), etc, and f(c*a) = c*f(a). In other words, f(L) looks the same as L, viewed from K.

The finite fields Fq (q = p^n) have an automorphism called the Frobenius automorphism, defined by f(x)=x^p. For example:

> f9
> map frobeniusAut f9
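Using the same hand-rolled trick for F9 (modulo the Conway polynomial x^2+2x+2 over F3, so a9^2 = a9 + 1), we can verify directly that x -> x^3 permutes F9, fixes the base field F3, and applied twice gives the identity:

```haskell
import qualified Data.List as L

-- F9 modelled by hand: (a,b) stands for a + b*a9, where a9^2 = a9 + 1
-- (from the Conway polynomial x^2+2x+2 over F3), everything mod 3
type F9' = (Int, Int)

mul9 :: F9' -> F9' -> F9'
mul9 (a,b) (c,d) = ( (a*c + b*d)       `mod` 3
                   , (a*d + b*c + b*d) `mod` 3 )

frob :: F9' -> F9'
frob z = z `mul9` z `mul9` z  -- x -> x^p with p = 3

f9' :: [F9']
f9' = [ (a,b) | a <- [0..2], b <- [0..2] ]

main :: IO ()
main = do
    print (L.sort (map frob f9') == f9')  -- frob permutes F9
    print (map frob [(0,0),(1,0),(2,0)])  -- and fixes the base field F3
    print (map (frob . frob) f9' == f9')  -- frob^2 = id, since 9 = 3^2
```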

Finally, a word about efficiency. The HaskellForMaths implementation of fields Fq is very natural from a mathematical point of view, but it isn't very efficient.

Firstly, we are working with polynomials with coefficients in Fp. To multiply two degree n polynomials together, we have to do O(n^2) additions and multiplications of coefficients. If the coefficients are in Fp, each one of these involves a (`mod` p) operation. It would be better to do the polynomial arithmetic over the integers, and only do the `mod` p at the end. We would then only have to do O(n) (`mod` p) operations.

Secondly, we could consider implementing the fields Fq using Zech logarithms instead.

Anyway, it doesn't matter too much for the moment, as I only want to use the finite fields to construct combinatorial structures. But sometime I might try to write a more efficient implementation.

Next time: finite geometries.

Thursday, 27 August 2009

Extension fields

[New version HaskellForMaths 0.1.7 uploaded here.]

Last week, we looked at the finite fields of prime order, Fp. A field, remember, is a set in which you can do addition, subtraction, multiplication and division. For a given prime p, the field Fp consists of the set {0, 1, ..., p-1}, with arithmetic operations done modulo p. Why does p have to be a prime? We can do addition, subtraction, and multiplication modulo n for any n, prime or not. However, for division, we require p to be prime. (Why?)

You might think, therefore, that the fields Fp of prime order are the only finite fields. However, you would be wrong. We shall see that there are in fact fields Fq of order q (ie, with q elements) for every prime power q = p^n. However, before we can do that we need to understand about algebraic extensions of a field.

It can happen that a field is contained within another field. For example, we have Q ⊂ R ⊂ C. Given a field such as Q, and an element a of some larger field, we can form Q(a), the smallest field containing both Q and a. We can think of Q(a) as being built up in steps:
  • At step 0, start with Q
  • At step 1, add a
  • At step n, add all elements that can be obtained by combining two elements from step n-1 using one of the arithmetic operators.
So for example, at step 2, we will have 1+a, 2*a, a*a, 1/a, etc. Anything which appears in the list at step n for some n is in Q(a).

The field Q(a), obtained by adjoining a single element, is called a simple extension of Q. (We could then go on to adjoin another element b, to get Q(a,b), and so on.) Among simple extensions, there are two ways it can go.

Suppose that a is a zero of some polynomial with coefficients in Q - for example x^2-2. In that case, we will construct the whole of Q(a) after only a finite number of steps. For example, in the case where a = sqrt 2, every element of Q(sqrt 2) can be expressed as c + d sqrt 2, for some c, d in Q. For example:
(sqrt 2)^2 = 2
1/(sqrt 2) = (sqrt 2) / 2
1/(1 + sqrt 2) = (1 - sqrt 2) / (1 - 2) = -1 + sqrt 2
This is called an algebraic extension.

Alternatively, if a is not a zero of any polynomial over Q (for example: e, pi), then the construction of Q(a) in steps will never finish. This is called a transcendental extension.

Algebraic extensions of Q are fascinating things. Initially, the main motivation for studying them came from number theory. Some other time, I'd like to take a closer look at them. However, for the moment, I just want to show how to construct them in Haskell, as what we're really aiming for is algebraic extensions of the finite fields Fp.

Let's suppose that we're trying to construct Q(sqrt2). The polynomial we're interested in is x^2-2. Then the basic idea is:
  1. Form the polynomial ring Q[x], consisting of polynomials in x with coefficients in Q
  2. Represent elements of Q(a) as polynomials in Q[x]
  3. Do addition, subtraction, multiplication in Q(a) using the underlying operations in Q[x]
  4. After any arithmetic operation, replace the result by its remainder on division by x^2-2.
Step 4 is the key step. Suppose that we end up with a polynomial f. We use division with remainder to write f = q(x^2-2)+r, and we replace f by r. In effect, this means that we're setting x^2-2 = 0, which is the same as setting x = sqrt 2. If we do this consistently, then the x ends up acting as sqrt2, and we end up with Q(sqrt2).

To construct Q(sqrt3), Q(i), etc, just replace x^2-2 by the appropriate polynomial.
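Before the general code, the recipe can be sanity-checked by specialising it by hand to x^2-2: step 4 then just means "replace x^2 by 2 whenever it appears", so every element reduces to the form c + d*sqrt2, represented here as a pair:

```haskell
-- Q(sqrt2) by hand: (c,d) stands for c + d*sqrt2; multiplying out and
-- replacing x^2 by 2 is exactly step 4's "keep the remainder mod x^2-2"
type QSqrt2' = (Rational, Rational)

mulQ2 :: QSqrt2' -> QSqrt2' -> QSqrt2'
mulQ2 (a,b) (c,d) = (a*c + 2*b*d, a*d + b*c)

main :: IO ()
main = print (mulQ2 (1,1) (1,1))  -- (1 + sqrt2)^2 = 3 + 2*sqrt2
```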

Okay, rather belatedly, time for some code. First, we need to define a type for (univariate) polynomials:

newtype UPoly a = UP [a] deriving (Eq,Ord)

UP [c0,c1,...,cn] is to be interpreted as the polynomial c0 + c1 x + ... + cn x^n.

Exercise: Define a Num instance for UPoly a.

If we then define

x = UP [0,1] :: UPoly Integer

together with a suitable Show instance, then we can do things like the following:

> (1+x)^3

Exercise: Write quotRemUP, to perform division with remainder in UPoly k, on the assumption that k is a field (that is, a Fractional instance).

So we now have a type, UPoly Q, representing Q[x]. Next, we want to wrap this in another type to represent extension fields Q(a). Rather than have to do this over again each time for Q(sqrt2), Q(sqrt3), Q(i), and so on, we use a little bit of phantom type trickery again. Last time, we used phantom types to represent integers; this time, we're going to use phantom types to represent the polynomials that define the fields (x^2-2, x^2-3, x^2+1, etc).

class PolynomialAsType k poly where
    pvalue :: (k,poly) -> UPoly k

data ExtensionField k poly = Ext (UPoly k) deriving (Eq,Ord)

Here, k represents the field we are extending - Q, to begin with - and if a is the element that we want to adjoin, then poly represents the polynomial over Q of which a is a zero.

From here, it's a short step to define some extension fields:

data Sqrt a = Sqrt a

-- n should be square-free
instance IntegerAsType n => PolynomialAsType Q (Sqrt n) where
    pvalue _ = convert $ x^2 - fromInteger (value (undefined :: n))

type QSqrt2 = ExtensionField Q (Sqrt T2)
sqrt2 = embed x :: QSqrt2

type QSqrt3 = ExtensionField Q (Sqrt T3)
sqrt3 = embed x :: QSqrt3

And now we can do arithmetic in Q(sqrt2), Q(sqrt3), etc. For example:

> :set +t
> (1+sqrt2)^2
3+2a :: QSqrt2
> (1+sqrt3)^2
4+2a :: QSqrt3

As you see, the show function of ExtensionField k poly shows the adjoined element as "a", regardless of which field we're working in. With a little more type hackery, we could have made that a parameter to the type too, so that it would show "sqrt2", "sqrt3", "i", depending which field we were in. However, that would have obscured the code even more. In practice, when we use this code we're only going to be working over one field at a time, so it's fine as it is.

Another limitation of this code is that it's going to be a bit unwieldy to construct Q(a,b) as the type ExtensionField (ExtensionField k poly1) poly2. Luckily, this doesn't matter, as any algebraic extension Q(a,b) is equal to Q(c) for some c.

Exercise: Show that Q(sqrt2, sqrt3) = Q(sqrt2 + sqrt3), and find the polynomial of which sqrt2 + sqrt3 is a zero.

I apologise that I've only really explained the bare bones of how this works. Hopefully you can fill in the gaps. (They're all in the HaskellForMaths source - see link at top of page.) This post has already taken me far too long to write, but there is just one other thing I ought to mention.

Field extensions can have automorphisms (symmetries). Recall that a symmetry is a change that leaves something looking the same. In the case of Q(sqrt2), there is a non-trivial symmetry that sends c + d sqrt 2 to c - d sqrt2 (conjugation). Field automorphisms have a very important place in the history of mathematics: Galois invented group theory in order to study these symmetries, during his investigations into the unsolvability (in general) of the quintic.

Monday, 17 August 2009

Finite fields, part 1

Okay, so you could think of this as the beginning of chapter two of the imaginary book that I'm writing in this blog. Chapter one was about graphs, groups and symmetry. In chapter two I want to talk about some other highly symmetric combinatorial objects: finite geometries and (probably) designs. But first, I need to talk about finite fields.

A field is roughly a set where you can add, subtract, multiply and divide. You're probably familiar with the following fields:
  • Q, the rational numbers
  • R, the real numbers
  • C, the complex numbers
And also with the following non-fields:
  • N, the natural numbers. Not a field, because it doesn't contain additive inverses (the negative numbers), so you can't always subtract.
  • Z, the integers. Not a field, because it doesn't contain multiplicative inverses, so you can't always divide.
I can't resist mentioning a little curiosity. To a number theorist / discrete mathematician like myself, the most interesting of the above are Q and Z. I like to think that this is why they were given the most interesting letters. Indeed, the interestingness of the mathematical objects appears to be correlated with the Scrabble value of the letters: in Scrabble, Q and Z are worth 10, indicating highly interesting; R and N are worth 1, indicating a bit boring; C is worth 3 - moderately interesting. Going further, H, the quaternions, is worth 4 - slightly more interesting than C. However, O, the octonions, is only worth 1, which looks like an anomaly.

Anyway, back to fields. Before we go any further, here's the HaskellForMaths version of Q, the rationals:
import Data.Ratio

newtype Q = Q Rational deriving (Eq,Ord,Num,Fractional)

instance Show Q where
    show (Q x) | b == 1    = show a
               | otherwise = show a ++ "/" ++ show b
        where a = numerator x
              b = denominator x
Silly, I know, but I just got a bit annoyed with those percentage signs all over the place when using Haskell's built-in rationals.

Anyway, in addition to the fields Q, R, and C, there are many other types of field. The ones I want to look at here are the finite fields of prime order. These are the fields you get if you do arithmetic modulo p, where p is a prime. For each different prime p, we have a field Fp, consisting of the set {0,1, ... p-1}, with arithmetic performed modulo p. For example, in the field F5, we have 3*3 = 4; while in the field F7, we have 3*3 = 2.
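Before reaching for types, the arithmetic itself is easy to sketch on plain Integers (this is independent of the library; the names addp, mulp, invp are just for this sketch). Division is the interesting one: it works precisely because p is prime, since Fermat's little theorem gives a^(p-1) = 1 (mod p) for a /= 0, so a^(p-2) is the multiplicative inverse of a:

```haskell
-- Arithmetic mod p on plain Integers.
addp, mulp :: Integer -> Integer -> Integer -> Integer
addp p x y = (x + y) `mod` p
mulp p x y = (x * y) `mod` p

-- Inverse via Fermat's little theorem: a^(p-2) `mod` p.
-- Fine for small p; for large p you would reduce mod p while exponentiating.
invp :: Integer -> Integer -> Integer
invp p a = a ^ (p - 2) `mod` p
```

For example, mulp 5 3 3 gives 4 and mulp 7 3 3 gives 2, matching the examples above, and mulp 5 3 (invp 5 3) gives 1.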

Okay, so how to represent these fields in Haskell?

Well, in an earlier version of the HaskellForMaths code, I tried this:

data Fp = F Integer Integer

The idea is that the first integer is p, the prime we are working over, and the second is the value. For example, F 5 3 would represent the value 3 in the field F5.

Next we either derive or define instances of Eq, Ord, Show, Num, Fractional. The Num instance looks something like this:

instance Num Fp where
    F p x + F q y | p == q = F p $ (x+y) `mod` p
    F p x * F q y | p == q = F p $ (x*y) `mod` p
    fromInteger _ = error "Fp.fromInteger: not well defined"

The drawbacks of this approach are probably pretty obvious:
  • Whenever I have two Fp values, F p x and F q y, I first have to check that p = q. This is a run-time check of something that would be better handled at compile time by the type system.
  • When defining numerical types in Haskell, the fromInteger function gives us implicit type conversion from integers to our type, so that we can just write code like 3 * 3, and have the type system convert them into the correct type. However, in this case, we can't give a sensible definition of fromInteger, because we don't know which prime p the user wants to work over. This means that everything is going to have to be written longhand as F 5 3 * F 5 3, and so on.
  • It feels like a waste of memory to be carrying that first Integer p around all the time, when if I'm using the code correctly, all the ps in a given calculation will be the same.
For these reasons, I decided to try another approach. The aim is to make F2, F3, F5, etc. different types, so that adding 3 :: F5 to 3 :: F7 is a type error, which can be spotted at compile time, rather than a runtime error.

It's fairly simple really.

First, define a whole bunch of phantom types representing the primes:

data T2
data T3
data T5

Now define a mechanism for retrieving the values from the types:
class IntegerAsType a where
    value :: a -> Integer

instance IntegerAsType T2 where value _ = 2
instance IntegerAsType T3 where value _ = 3
instance IntegerAsType T5 where value _ = 5

This enables us to do the following:

> value (undefined :: T2)
2
> value (undefined :: T3)
3

Now, we use the phantom types to parameterise a finite field data type:

newtype Fp n = Fp Integer

Here, Fp (the one on the left of the equals sign) is a type constructor which takes a type parameter n. This type parameter will be one of the phantom types we just defined. For example:

type F2 = Fp T2
type F3 = Fp T3
type F5 = Fp T5

As before, we need to derive or define Eq, Ord, Show, Num, and Fractional instances. The Num instance looks like this:

instance IntegerAsType n => Num (Fp n) where
    Fp x + Fp y = Fp $ (x+y) `mod` p where p = value (undefined :: n)
    Fp x * Fp y = Fp $ (x*y) `mod` p where p = value (undefined :: n)
    fromInteger m = Fp $ m `mod` p where p = value (undefined :: n)

(One wrinkle: for `undefined :: n` in the method bodies to refer to the n from the instance head, rather than a fresh type variable, GHC needs the ScopedTypeVariables extension.)

This is all fairly standard phantom type trickery, but it now enables us to do things like the following:

> :load Math.Algebra.Field.Base
> 3*3 :: F5
4
> 3*3 :: F7
2
> (3 :: F5) * (3 :: F7)
#type error#

We still have to tell Haskell which prime p we're working over, but now we do it via a type annotation, rather than an argument to the constructor. This means that Haskell can catch errors where we mix values from different fields at compile time, rather than run time.

The module Math.Algebra.Field.Base provides types for the finite fields F2, F3, F5, ..., F97. The module also provides functions to list the elements of each field:

f2 = map fromInteger [0..1] :: [F2]
f3 = map fromInteger [0..2] :: [F3]
f5 = map fromInteger [0..4] :: [F5]

And that's about it.
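Putting the pieces together, here is a minimal, self-contained version of the construction (a sketch: the real module covers many more primes and also provides Fractional, Show and other instances, and fills in the remaining Num methods more carefully):

```haskell
{-# LANGUAGE EmptyDataDecls, ScopedTypeVariables #-}

-- Phantom types standing for the primes 2 and 5
data T2
data T5

class IntegerAsType a where
    value :: a -> Integer

instance IntegerAsType T2 where value _ = 2
instance IntegerAsType T5 where value _ = 5

-- A value in the field F_p, tagged at the type level with its prime
newtype Fp n = Fp Integer deriving (Eq, Ord, Show)

type F2 = Fp T2
type F5 = Fp T5

instance IntegerAsType n => Num (Fp n) where
    Fp x + Fp y   = Fp $ (x + y) `mod` value (undefined :: n)
    Fp x * Fp y   = Fp $ (x * y) `mod` value (undefined :: n)
    negate (Fp x) = Fp $ negate x `mod` value (undefined :: n)
    fromInteger m = Fp $ m `mod` value (undefined :: n)
    abs    = id        -- no sensible notion of absolute value mod p;
    signum = const 1   -- these two are just placeholders to satisfy Num
```

With this in scope, 3 * 3 :: F5 evaluates to Fp 4, and negate 3 :: F5 to Fp 2, while mixing F2 and F5 values is rejected by the type checker.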

However, we must recognise, in all honesty, that there are also drawbacks to this implementation of the fields Fp.

First, whenever we want to work over a new prime p, we have to define a new type. For example, if we want to work over F101, we are going to have to define a new type for it.

(For the avoidance of doubt: No, type level Peano arithmetic is not what I'm after.)

Second, I can't decide at run time what field to work over. For certain applications it would be very useful to choose my p at run time. I could do that with the first implementation, but I can't do it with the second.

What I really want then, is for Fp and Fq to be different types when p /= q, but to be able to decide at run-time which p to work over. For example, I might have calculated the p I want to work over as the result of a previous calculation. This is impossible in Haskell.

However, there is a technology called "dependent types", in which what I want is possible. Dependent types means roughly that types can depend upon values. So in a Haskell-like language that supported dependent types, I might be able to write code like the following (not valid Haskell):

> 3 * 3 :: F 5

Here, the value 5 (not the type T5) is being passed as a type parameter to the type constructor F.

For a while, I thought this was the holy grail. I even considered defecting from Haskell. I did look into languages that support dependent types, such as Coq. However, I concluded that, sadly, this is not the answer. The problem is that - I think - if you have types dependent on values, then you lose the distinction between compile time and run time, which means - I think - that you can't compile the language any more.

Or does someone else know better?

Wednesday, 5 August 2009

Where we've been, and where we're going

It feels like about time to take a step back and survey where we've been and where we're going in this blog.

What I'm trying to do in this blog is a kind of guided tour of my HaskellForMaths library. In turn, the library itself, although it contains efficient implementations of fundamental algorithms in computer algebra, is really an educational tool. I wrote it to help me understand maths I wanted to learn about, and especially, to help develop my intuitions about the objects of study. My hope is that by retracing my steps in this blog, I can help others to do the same.

Up to now, we've been looking at graphs and their symmetries, considered as permutation groups. Group theory is sometimes described as the study of symmetry, and I hope that by introducing groups through symmetries of graphs, I've made it clear why.

We've really only looked at quite a small amount of code. The main functions we've seen (forgetting a few stepping stones along the way) are:
  • graph (vs,es) - for constructing a graph with given vertices and edges
  • graphAuts g - returns a strong generating set for the group of symmetries / automorphisms of the graph
  • orderSGS sgs - given a strong generating set, return the order of the group - for example, the total number of symmetries of the graph
  • conjClassReps - given a generating set for a group, return representatives and sizes for the conjugacy classes - the classes of elements which are "the same, just viewed from a different angle"
  • p [[1,2,3],[4,5]] - given a list of cycles, return the corresponding permutation
  • g*h, 1, g^-1 - multiplication, identity and inverses for group elements
  • v .^ g - the action of a permutation on a vertex
  • e -^ g - the action of a permutation on an edge
  • sgs gs - return a strong generating set, given a set of generators
This is the core of the HaskellForMaths code for graphs, graph automorphisms, and permutation groups. Incidentally, I think this code really shows off Haskell to advantage: code to do the same things would be much more complicated in C++, say. I'm especially pleased with the ability to define graphs and groups over arbitrary types, which enables very natural constructions of various classes of graphs and groups. (In this respect, though not in others, I think the library probably beats even dedicated packages such as GAP and MAGMA.)

However, the graph and permutation group code represents, I would say, only about one sixth of what's in the HaskellForMaths library. (Not to mention that I have quite a lot more code in various stages of development back in the lab.) So what else is there still to look at?

Well, what I want to do next is stick with permutation groups a little longer, but widen the set of combinatorial objects that we consider, to include finite geometries, designs, and perhaps other incidence structures.

One of my aims is to take a look at a few of the sporadic finite simple groups. Finite simple groups are the "atoms of symmetry". Most of them fall in one of several infinite families (the cyclic groups of prime order, the alternating groups, and the finite groups of Lie type, aka Chevalley groups). However, it turns out that there are also 26 "sporadic" finite simple groups, which don't belong to any of these families. The HaskellForMaths library contains code for investigating both the infinite families, and some of the smaller sporadic groups.

Then there are root systems and Coxeter groups, including string rewriting and the Knuth-Bendix algorithm. Root systems have some connection to Lie algebras, and are therefore important in physics, though I don't claim to fully understand all that stuff.

Another major part of the library, which I'll get round to eventually, is commutative algebra and Groebner bases. Groebner bases provide a way of calculating with ideals in polynomial rings, which is equivalent to doing algebraic geometry.

Finally, there's some code for working in non-commutative algebra. As an example, there's some code which uses Temperley-Lieb and Hecke algebras to calculate polynomial invariants of knots (the Jones polynomial and the HOMFLY polynomial). In due course, I'm hoping to expand this part of the library to do group algebras, representation theory, and other stuff.

In case I've managed to interest anyone, I should probably mention a few of the books that I used for inspiration:
  • Ronan, Symmetry and the Monster
  • Godsil and Royle, Algebraic Graph Theory
  • Cameron, Combinatorics
  • Seress, Permutation Group Algorithms
  • Holt, Handbook of Computational Group Theory
Next time - back to the maths.

Saturday, 1 August 2009

How to count the number of positions of Rubik's cube

Previously on this blog, we have been looking at the symmetries of graphs, which we defined as permutations of the vertices which leave the edges (collectively) in the same places. We think of permutations as active rearrangements of the vertices, rather than as static sequences. For example, we think of the static permutation [2,1,4,3] as the active rearrangement [[1,2],[3,4]], which swaps the 1 and 2 positions, and swaps the 3 and 4 positions. Given two permutations g and h, the active viewpoint enables us to ask what happens when you do g then h, which we think of as multiplication g*h; and how to undo g, which we think of as an inverse g^-1.
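The passage from the static to the active viewpoint is easy to mechanise. Here is a sketch, independent of the library (whose own toCycles works on its permutation type instead): given the one-line form, where position i holds the image of i, recover the cycles by following each point's orbit:

```haskell
import Data.List ((\\))

-- [2,1,4,3] means 1 |-> 2, 2 |-> 1, 3 |-> 4, 4 |-> 3
toCycles' :: [Int] -> [[Int]]
toCycles' xs = go [1 .. length xs]
  where
    img i = xs !! (i - 1)
    go [] = []
    go (i:is)
      | img i == i = go is                 -- drop fixed points
      | otherwise  = orbit : go (is \\ orbit)
      where orbit = i : takeWhile (/= i) (iterate img (img i))
```

For example, toCycles' [2,1,4,3] recovers the active form [[1,2],[3,4]] described above.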

The symmetries of a graph form a group. This means, that if g and h are symmetries, then so are g*h, and g^-1. So a group is a set of permutations which is closed under multiplication and inverses.

We can use permutation groups to study all sorts of things besides graph symmetries. As an example, this week I'm going to look at Rubik's cube.

First, we need to work out a way to represent cube moves as permutations. That's easy - we just number the little squares, face by face: 1-9 on the front face, 11-19 on the top, 21-29 on the left, 31-39 on the bottom, 41-49 on the right, and 51-59 on the back. [The original post has a diagram of the numbering here.]

Then a clockwise rotation of the front face (the blue face, with an F in the centre) can be represented as the following permutation:

f = p [[1,3,9,7],[2,6,8,4],[17,41,33,29],[18,44,32,26],[19,47,31,23]]

This says that f sends the 1 square to the 3 position, 3 to 9, 9 to 7, 7 to 1 (the corners of the blue face); 2 to 6, 6 to 8, 8 to 4, 4 to 2 (the edges of the blue face); together with three other cycles (the corners and edges adjacent to the blue face).

It's then an easy matter to write down permutations corresponding to the clockwise rotations of the other faces:

b = p [[51,53,59,57],[52,56,58,54],[11,27,39,43],[12,24,38,46],[13,21,37,49]]
l = p [[21,23,29,27],[22,26,28,24],[ 1,31,59,11],[ 4,34,56,14],[ 7,37,53,17]]
r = p [[41,43,49,47],[42,46,48,44],[ 3,13,57,33],[ 6,16,54,36],[ 9,19,51,39]]
u = p [[11,13,19,17],[12,16,18,14],[ 1,21,51,41],[ 2,22,52,42],[ 3,23,53,43]]
d = p [[31,33,39,37],[32,36,38,34],[ 7,47,57,27],[ 8,48,58,28],[ 9,49,59,29]]

Note that I didn't bother to number the centre square in each face, or write down any permutations that move a middle slice. For example, we could have written down:

x = p [[4,44,54,24],[5,45,55,25],[6,46,56,26]]

However, doing x is the same as doing u^-1 * d^-1 and then turning the whole cube round. What we're interested in is the possible positions of the 54 squares relative to one another, so we don't really want to count rotations of the whole cube. For this reason, we leave out rotations of the middle slices.

Okay, so we have written down six permutations, corresponding to clockwise rotations of the six faces. Hopefully it's clear that these six elements generate the Rubik's cube group. So we are now in a position to perform calculations within the Rubik's cube group.

Within a group, the order of an element g is defined as the least n>0 such that g^n = 1. In other words, the order of g is the number of times you have to do g to get back where you started. (Recall that the order of a group is the number of elements it contains. The order of an element is equal to the order of the group generated by the element, since this is {g, g^2, ..., g^(n-1), g^n = 1}, which has n elements.)

For permutations written in cycle notation, it is easy to see the order by inspection. For example:
  • The order of g = [[1,2]] is 2 - you have to do g twice to get back where you started.
  • The order of h = [[1,2,3]] is 3 - you have to do h three times to get back where you started.
  • The order of k = [[1,2,3],[4,5]] is 6. At k^2, we have [[1,3,2]] - the 4 and 5 are in the correct positions, but the 1, 2, 3 are not. At k^3 we have [[4,5]] - now the 1, 2, 3 are in the correct positions, but the 4 and 5 are not. Only at k^6 does everything get back to the starting position.
Hopefully it's clear that the order of an element will always be the least common multiple of the lengths of the cycles:

orderElt g = foldl lcm 1 $ map length $ toCycles g

For example, in the Rubik's cube group we have:

> f*l
> orderElt it
105

That is, if you do a front turn followed by a left turn, you will have to repeat 105 times before you get back to the starting position.
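The whole calculation is small enough to redo from scratch. Here is a sketch, independent of the library (the names fromCycles, mult, orderElt' are just for this sketch): permutations as association lists from each point to its image, built from cycles, with the order computed as the lcm of the cycle lengths:

```haskell
import Data.List (foldl', nub, (\\))
import Data.Maybe (fromMaybe)

type Perm = [(Int, Int)]

-- build the point |-> image map from a list of cycles
fromCycles :: [[Int]] -> Perm
fromCycles cs = [ (x, y) | c <- cs, (x, y) <- zip c (tail c ++ [head c]) ]

apply :: Perm -> Int -> Int
apply g x = fromMaybe x (lookup x g)

-- g `mult` h means "do g, then h", matching the blog's g*h convention
mult :: Perm -> Perm -> Perm
mult g h = [ (x, apply h (apply g x)) | x <- nub (map fst g ++ map fst h) ]

-- order = lcm of the cycle lengths
orderElt' :: Perm -> Integer
orderElt' g = foldl' lcm 1 (map (fromIntegral . length) (cycles (map fst g)))
  where
    cycles [] = []
    cycles (x:xs) = orbit : cycles (xs \\ orbit)
      where orbit = x : takeWhile (/= x) (iterate (apply g) (apply g x))

-- the front and left turns from the post
f', l' :: Perm
f' = fromCycles [[1,3,9,7],[2,6,8,4],[17,41,33,29],[18,44,32,26],[19,47,31,23]]
l' = fromCycles [[21,23,29,27],[22,26,28,24],[1,31,59,11],[4,34,56,14],[7,37,53,17]]
```

Each face turn consists entirely of 4-cycles, so orderElt' f' is 4, while orderElt' (f' `mult` l') is indeed 105 (the composite has cycles of lengths 15, 7, 7 and 3, and lcm 15 7 = 105).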

When we were looking at graph symmetries, we found that it was particularly useful to have our generators in the form of a strong generating set, since it was then straightforward, for example, to calculate the order of the group.

Rubik's cube isn't a graph. Luckily, the HaskellForMaths library has a function to calculate a strong generating set from any set of generators:

> mapM_ print $ sgs [f,b,l,r,u,d]

Okay, so strong generating sets aren't all that interesting in themselves - although the last few entries in this one are a little bit interesting, as they show that there are moves which just rotate a pair of corners, or a pair of edges.

Remember, though, that an SGS allows us to easily calculate the order of a group. In this case:

> orderSGS $ sgs [f,b,l,r,u,d]
43252003274489856000

That's about 4 * 10^19 different positions.
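That figure agrees with the standard counting argument for the cube group (a sketch, independent of the library): 8! ways to arrange the corner pieces, 3^7 corner twists (the eighth twist is forced by the others), 12! ways to arrange the edge pieces, 2^11 edge flips (the twelfth is forced), all divided by 2 because the corner and edge permutations must have the same parity:

```haskell
-- Order of the Rubik's cube group by the standard counting argument
cubeOrder :: Integer
cubeOrder = (factorial 8 * 3^7 * factorial 12 * 2^11) `div` 2
  where factorial n = product [1..n]
```

cubeOrder evaluates to 43252003274489856000, matching orderSGS.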

The sgs function, to calculate a strong generating set, uses the Schreier-Sims algorithm. I'll probably explain how it works some time, but not just yet.