Given a graph, the distance between vertices x and y is defined as the length of the shortest path between them - that is, the number of edges on the path. For example, in the cube shown below, the distance between the 0 and 7 vertices is 3. There are several routes, but each route involves passing along at least 3 edges. In HaskellForMaths, we can use the "distance" function to find this out:

> :load Math.Combinatorics.GraphAuts

> distance (q 3) 0 7

3

In the picture, we see that the vertices fall into four levels, depending on whether their distance from 0 is 0, 1, 2, or 3. Thus, distance from a given vertex can be used to partition the vertices in a graph. This is called the distance partition:

> distancePartition (q 3) 0

[[0],[1,2,4],[3,5,6],[7]]

(The implementation of distancePartition is a variant on the closure algorithm that we have seen a couple of times before, and is left as an exercise.)
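For reference, here's one way it might go - a standalone sketch, not the library's actual code, representing the graph as a plain (vertices, edges) pair rather than the G type:

```haskell
import Data.List (partition, (\\))

-- A sketch of distancePartition on a plain (vertices, edges) representation.
-- Starting from v, we repeatedly take the unvisited vertices adjacent to the
-- current level - a breadth-first "closure" computed level by level.
-- (Vertices unreachable from v are simply dropped in this sketch.)
distancePartitionSketch :: Eq a => ([a], [[a]]) -> a -> [[a]]
distancePartitionSketch (vs, es) v = go [v] (vs \\ [v])
  where
    adjacent x y = [x,y] `elem` es || [y,x] `elem` es
    go level unvisited
        | null level = []
        | otherwise  =
            let (next, rest) = partition (\u -> any (adjacent u) level) unvisited
            in level : go next rest

-- The 3-cube, with edges between vertices differing in one bit.
cube :: ([Int], [[Int]])
cube = ([0..7], [[0,1],[0,2],[0,4],[1,3],[1,5],[2,3],[2,6],[3,7],[4,5],[4,6],[5,7],[6,7]])

main :: IO ()
main = print (distancePartitionSketch cube 0)  -- prints [[0],[1,2,4],[3,5,6],[7]]
```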

Okay, so how do distance partitions help us with finding graph symmetries?

Well, recall how graphAuts2 works: using depth-first search, it tries to find a symmetry that sends 0 to 1, another that sends 0 to 2, another that sends 0 to 3, and so on; then, looking only at those that fix 0, one that sends 1 to 2, another that sends 1 to 3, and so on; then, looking only at those that fix 0 and 1, one that sends 2 to 3, another that sends 2 to 4, and so on; and so on.

Well, the first step is to realise that graph symmetries must preserve distance. That is, if g is a symmetry, then we must have: distance x y = distance (x .^ g) (y .^ g). So, suppose that we are looking for a graph symmetry that sends 0 to 1, and we're wondering where to send x. Well, we need only consider those y such that distance 0 x = distance 1 y.

In terms of distance partitions, this means that, once we have decided to map 0 to 1, then we must map each cell in the distance partition of 0 to the corresponding cell in the distance partition of 1.

> distancePartition (q 3) 0

[[0],[1,2,4],[3,5,6],[7]]

> distancePartition (q 3) 1

[[1],[0,3,5],[2,4,7],[6]]

Specifically, we must map {1,2,4} to {0,3,5} (though not necessarily in that order), and {3,5,6} to {2,4,7}, and 7 to 6.
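The cell correspondence can be read straight off by zipping the two distance partitions:

```haskell
-- Pair up corresponding cells of the two distance partitions shown above:
-- each source cell must map onto its partner, in some order.
main :: IO ()
main = mapM_ print (zip [[0],[1,2,4],[3,5,6],[7 :: Int]] [[1],[0,3,5],[2,4,7],[6]])
```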

This is already going to cut down our search space considerably, especially on graphs with many vertices. However, we can go further.

Suppose now that we are looking for symmetries that send 0 to 0 and 1 to 1. Because they send 0 to 0, they must preserve the cells in the distance partition from 0. So they must send {1,2,4} to {1,2,4} (but not necessarily in that order), {3,5,6} to {3,5,6}, and 7 to 7. Also, because they send 1 to 1, they must preserve the cells in the distance partition from 1. So they must send {0,3,5} to {0,3,5}, {2,4,7} to {2,4,7}, and 6 to 6.

I think of this as being a bit like triangulation. Suppose we're wondering where to send 2. Looking at the distance partition from 0, we see that it must go to one of {1,2,4}. On the other hand, looking at the distance partition from 1, we see that it must go to one of {2,4,7}. So that narrows it down to {2,4}. (Yes, in this case we could already see that, but you get the idea.) If we take another fix from a third point, that will narrow it down even further, and so on.
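In code, the triangulation step is just a cell intersection - for instance, intersecting the two cells that 2 could land in:

```haskell
import Data.List (intersect)

-- Candidates for the image of vertex 2: its cell in the distance partition
-- from 0, intersected with its cell in the distance partition from 1.
main :: IO ()
main = print ([1,2,4] `intersect` [2,4,7])  -- prints [2,4]
```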

So the idea of our new graphAuts3 function will be as follows.

- We will still start by trying to send 0 to each of 1, 2, 3, 4, 5, 6, 7 in turn, and finally to 0 itself.
- If we're trying to send 0 to x, then when we come to wonder where to send 1, we'll consider only those y with distance x y = distance 0 1. In terms of the distance partition, this means that if 1 falls into cell d of the distance partition from 0, then y must fall into cell d of the distance partition from x.
- As we successively decide where to send 1, 2, etc, we will "refine" the cells of the partition by triangulation with the new point.

refine p1 p2 = concat [ [c1 `intersect` c2 | c2 <- p2] | c1 <- p1]

For example:

> distancePartition (q 3) 0 `refine` distancePartition (q 3) 1

[[],[0],[],[],[1],[],[2,4],[],[],[3,5],[],[6],[],[],[7],[]]

We get quite a few empty lists in the refinement - they're a necessary evil, but we will remove them as we go along.

Okay, so here's our graphAuts3 function:

graphAuts3 g@(G vs es) = graphAuts' [] [vs] where
    graphAuts' us ((x:ys):pt) =
        let px = refine (ys : pt) (dps M.! x)
            p y = refine ((x : L.delete y ys) : pt) (dps M.! y)
            uus = zip us us
            p' = L.sort $ filter (not . null) $ px
        in concat [take 1 $ dfs ((x,y):uus) px (p y) | y <- ys]
           ++ graphAuts' (x:us) p'
    graphAuts' us ([]:pt) = graphAuts' us pt
    graphAuts' _ [] = []
    dfs xys p1 p2
        | map length p1 /= map length p2 = []
        | otherwise =
            let p1' = filter (not . null) p1
                p2' = filter (not . null) p2
            in if all isSingleton p1'
               then let xys' = xys ++ zip (concat p1') (concat p2')
                    in if isCompatible xys' then [fromPairs' xys'] else []
               else let (x:xs):p1'' = p1'
                        ys:p2'' = p2'
                    in concat [ dfs ((x,y):xys)
                                    (refine (xs : p1'') (dps M.! x))
                                    (refine ((L.delete y ys) : p2'') (dps M.! y))
                              | y <- ys ]
    isCompatible xys = and [ ([x,x'] `S.member` es') == (L.sort [y,y'] `S.member` es')
                           | (x,y) <- xys, (x',y') <- xys, x < x' ]
    dps = M.fromList [(v, distancePartition g v) | v <- vs]
    es' = S.fromList es

It looks complicated (and perhaps I could tidy it up a little), but what it's doing is really fairly straightforward. Basically, we're still doing depth-first search, in levels, as before. However, we're now maintaining two partitions as we go: the source partition p1 and the target partition p2, and we're constrained to map cells in the source partition to the corresponding cells in the target partition. If we ever find that the "shapes" (map length) of the source and target partitions differ, then we know we have gone wrong and need to backtrack. Having satisfied ourselves that the shapes are the same, we can remove those pesky empty lists. Finally, as soon as we find that every cell is a singleton, we can shortcut any further search - although we still need to check that the implied mapping is a valid symmetry, to avoid false positives.
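By the way, the code leans on a couple of small helpers: isSingleton is just what it sounds like, and fromPairs' (from the library's permutation group module) builds a permutation from a list of (point, image) pairs. Plausible sketches - the library's own definitions may differ - would be:

```haskell
import qualified Data.Map as M

-- True iff a list has exactly one element.
isSingleton :: [a] -> Bool
isSingleton [_] = True
isSingleton _   = False

-- fromPairs' builds a permutation from (point, image) pairs, skipping the
-- validity checks of fromPairs. Here we just model the idea as a lookup
-- table, purely for illustration.
fromPairsSketch :: Ord a => [(a, a)] -> M.Map a a
fromPairsSketch = M.fromList

main :: IO ()
main = print (isSingleton [7 :: Int], M.lookup 2 (fromPairsSketch [(2,4),(4,2 :: Int)]))
```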

Okay, so how about a brief demonstration of its power? Time for a confession: on most of the smallish graphs that we've considered so far, graphAuts3 is actually slightly slower than graphAuts2. However, it's not too difficult to find larger graphs where it wins out.

The Kneser graphs are defined as follows:

kneser n k | 2*k <= n = graph (vs,es) where
    vs = combinationsOf k [1..n]
    es = [ [v1,v2] | [v1,v2] <- combinationsOf 2 vs, disjoint v1 v2 ]

So the Kneser graph has as vertices the k-subsets of [1..n], with edges joining subsets which are disjoint.
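As a quick sanity check, here's a standalone version of the same construction (with hand-rolled combinationsOf and disjoint, since we're not assuming the library's versions), counting vertices and edges:

```haskell
import Data.List (intersect)

-- k-element subsets of a list, in lexicographic order.
combinationsOf :: Int -> [a] -> [[a]]
combinationsOf 0 _      = [[]]
combinationsOf _ []     = []
combinationsOf k (x:xs) = map (x:) (combinationsOf (k-1) xs) ++ combinationsOf k xs

disjoint :: Eq a => [a] -> [a] -> Bool
disjoint xs ys = null (xs `intersect` ys)

-- Vertices and edges of the Kneser graph, as plain lists.
kneserVE :: Int -> Int -> ([[Int]], [[[Int]]])
kneserVE n k = (vs, es)
  where
    vs = combinationsOf k [1..n]
    es = [ [v1,v2] | [v1,v2] <- combinationsOf 2 vs, disjoint v1 v2 ]

main :: IO ()
main = do
    let count (vs, es) = (length vs, length es)
    print (count (kneserVE 5 2))  -- the Petersen graph: (10,15)
    print (count (kneserVE 7 3))  -- (35,70)
```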

We've already met one of them - the Petersen graph is kneser 5 2. Kneser 7 3 is shown below - it has 35 vertices and 70 edges:

So here's an experiment you can try at home. Compare the running times of graphAuts2 and graphAuts3 on kneser 7 3. On my laptop, graphAuts3 manages to find a transversal generating set of 49 symmetries in less than a second. On the other hand, graphAuts2 had only managed to find 1 symmetry after 10 minutes, at which point I gave up.

Incidentally:

> orderTGS $ graphAuts3 $ kneser 7 3

5040

... which is 7 factorial. That's because the action of S 7 on [1..7] induces an action on kneser 7 3, and it turns out that every symmetry of kneser 7 3 arises from an underlying permutation of [1..7].

Anyway, with graphAuts3, and the orderTGS function from last time (for calculating the number of symmetries, given a transversal generating sequence), we are beginning to have the tools to investigate some larger graphs. However, there is another improvement we can make, which will lead us to strong generating sets and the Schreier-Sims algorithm - next time.
