Friday, 28 May 2010

String rewriting and Knuth-Bendix completion

Previously in this blog we have been looking at symmetry groups of combinatorial structures. We have represented these symmetries concretely as permutations - for example, symmetries of graphs as permutations of their vertices. However, mathematicians tend to think about groups more abstractly.

Consider the symmetry group of the square (the cyclic graph C4) - this is the dihedral group of order 8. It can be generated by two permutations, a rotation and a reflection:

> :load Math.Algebra.Group.PermutationGroup
> let a = p [[1,2,3,4]]
> let b = p [[1,2],[3,4]]

We can list various relations that are satisfied by these generators:
a^4 = 1
b^2 = 1
a^3 b = b a
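
We can check these relations in GHCi (assuming the library's Num instance for permutations, with * as composition and 1 as the identity):

> a*a*a*a == 1
True
> b*b == 1
True
> a*a*a*b == b*a
True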

Of course, there are other relations that hold between the generators. However, the relations above are in fact sufficient to uniquely identify the group (up to isomorphism).

Since a and b generate the group, any element of the group can be expressed as a product of a's and b's. (In principle we might also need their inverses, but we'll ignore that for now - in a finite group, every inverse is itself a positive power.) There are infinitely many such expressions but only finitely many group elements, so many of these expressions must represent the same element. For example, since b^2 = 1, "abba" represents the same element as "aa".
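
Again, we can verify this with the permutations from earlier:

> a*b*b*a == a*a
True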

Given two expressions, it would obviously be helpful to have a method for telling whether they represent the same group element. What we need is a string rewriting system. We can think of expressions in the generators as words in the symbols 'a' and 'b'. And we can reinterpret the relations above as rewrite rules:
"aaaa" -> ""
"bb" -> ""
"aaab" -> "ba"

Each of these rules consists of a left hand side and a right hand side. Given any word in the generator symbols, if we find the left hand side anywhere in the word, we can replace it by the right hand side. For example, in the word "abba", we can apply the rule "bb" -> "", giving "aa".

So, the idea is that given any word in the generator symbols, we repeatedly apply the rewrite rules until we can go no further. The hope is that if two words represent the same group element, then we will end up with the same word after rewriting. We'll see later that there's a bit more to do before that will work, but for the moment, let's at least write some Haskell code to do the string rewriting.

So the first thing we need to do is find the left hand side of a rule as a subword within a word. Actually, we want a bit more than that - if X is our word, and Y the subword, then we want to find A and B such that X = AYB.

import qualified Data.List as L

-- given a word xs and a subword ys, find if possible (a,b) such that xs == a ++ ys ++ b
splitSubstring xs ys = splitSubstring' [] xs where
    splitSubstring' ls [] = Nothing
    splitSubstring' ls (r:rs) =
        if ys `L.isPrefixOf` (r:rs)
        then Just (reverse ls, drop (length ys) (r:rs))
        else splitSubstring' (r:ls) rs

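For example:

> splitSubstring "abba" "bb"
Just ("a","a")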

If our rewrite rule is L -> R, then a single application of the rule consists in replacing L by R within the word:

-- apply the rule l -> r once, replacing the first occurrence (if any) of l within xs
rewrite1 (l,r) xs =
    case xs `splitSubstring` l of
    Nothing -> Nothing
    Just (a,b) -> Just (a++r++b)

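For example, applying "bb" -> "" to "abba":

> rewrite1 ("bb","") "abba"
Just "aa"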

Okay, so suppose we have a rewrite system (that is, a collection of rewrite rules), and a word. Then we want to repeatedly apply the rules until we find that no rule applies:

-- repeatedly apply the rules until no rule applies
-- (after each successful rewrite, we start again from the first rule)
rewrite rules word = rewrite' rules word where
    rewrite' (r:rs) xs =
        case rewrite1 r xs of
        Nothing -> rewrite' rs xs
        Just ys -> rewrite' rules ys
    rewrite' [] xs = xs


For example:

> :load Math.Algebra.Group.StringRewriting
> rewrite [("aaaa",""),("bb",""),("aaab","ba")] "abba"
"aa"


So far, so good. However, there are some problems with the rewrite system that we constructed above. Suppose that the word we wanted to reduce was "aaabb".
If we apply the rule "aaab" -> "ba", then we have "aaabb" -> "bab".
However, if we apply the rule "bb" -> "", then we have "aaabb" -> "aaa".
Neither "bab" nor "aaa" reduces any further. So we have two problems:
- The same starting word can end up at different end words, depending on the order in which we apply the rules
- We can see from the example that the words "bab" and "aaa" actually represent the same element in our group, but our rewrite system can't rewrite either of them

What can we do about this? Well here's an idea. Let's just add "bab" -> "aaa" as a new rewrite rule to our system. We know that they are equal as elements of the group, so this is a valid thing to do.

That's good, but we still have problems. What about the word "aaaab"?
If we apply the rule "aaaa" -> "", then "aaaab" -> "b"
On the other hand, if we apply the rule "aaab" -> "ba", then "aaaab" -> "aba"

So let's do the same again, and add a new rule "aba" -> "b".
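
Both of the new rules are genuine consequences of the group relations - we can check them against the original permutations:

> b*a*b == a*a*a
True
> a*b*a == b
True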

What we're doing here is called the Knuth-Bendix algorithm. Let's take a step back. In each case, I found a word that could be reduced in two different ways. How did I do that? What I was looking for was two rules with overlapping left hand sides. That is, I was looking for rules L1 -> R1, L2 -> R2, with
L1 = AB
L2 = BC
for some non-empty B. A pair of rules like this is called a critical pair. If we can find a critical pair, then by looking at the word ABC, we see that
ABC = (AB)C = L1 C -> R1 C
ABC = A(BC) = A L2 -> A R2
So we are justified in adding a new rule equating R1 C and A R2, oriented so that the greater rewrites to the lesser (more on the ordering below). In the first example above, A = "aaa", B = "b", C = "b", and the two reductions of ABC = "aaabb" were exactly "bab" and "aaa".

So the Knuth-Bendix algorithm basically says, for each critical pair, introduce a new rule, until there are no more critical pairs. There's a little bit more to it than that:
- We want the rewrite system to reduce the word. That means that we want an ordering on words, and given a pair, we want to make them into a rule that takes the greater to the lesser, rather than vice versa. The most obvious ordering to use is called shortlex: take longer words to shorter words, and if the lengths are equal, use alphabetical ordering.
- Whenever we introduce a new rule, it might be that the left hand side of some existing rule becomes reducible. In that case, the existing rule becomes redundant, since any word that it would reduce can now be reduced by the new rule.

Here's the code:

-- given two strings x,y, find if possible a,b,c with x = ab, y = bc (b non-empty)
findOverlap xs ys = findOverlap' [] xs ys where
    findOverlap' as [] cs = Nothing
    findOverlap' as (b:bs) cs =
        if (b:bs) `L.isPrefixOf` cs
        then Just (reverse as, b:bs, drop (length (b:bs)) cs)
        else findOverlap' (b:as) bs cs

-- order words first by length, then alphabetically
shortlex x y = compare (length x, x) (length y, y)

-- return Just (greater, lesser), or Nothing if the two words are equal
ordpair x y =
    case shortlex x y of
    LT -> Just (y,x)
    EQ -> Nothing
    GT -> Just (x,y)

knuthBendix1 rules = knuthBendix' rules pairs where
    pairs = [(lri,lrj) | lri <- rules, lrj <- rules, lri /= lrj]
    knuthBendix' rules [] = rules
    knuthBendix' rules ( ((li,ri),(lj,rj)) : ps) =
        case findOverlap li lj of
        Nothing -> knuthBendix' rules ps
        Just (a,b,c) -> case ordpair (rewrite rules (ri++c)) (rewrite rules (a++rj)) of
                        Nothing -> knuthBendix' rules ps -- they both reduce to the same thing
                        Just rule' -> let rules' = reduce rule' rules
                                          ps' = ps ++
                                                [(rule',rule) | rule <- rules'] ++
                                                [(rule,rule') | rule <- rules']
                                      in knuthBendix' (rule':rules') ps'
    -- discard any rule whose left hand side is reducible by the new rule
    reduce (l,_) rules = filter (\(l',_) -> not (l `L.isInfixOf` l')) rules
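
Before running the full algorithm, we can sanity-check the helpers on the worked example from earlier - the overlap of "aaab" and "bb", and the orientation of the resulting rule:

> findOverlap "aaab" "bb"
Just ("aaa","b","b")
> ordpair "bab" "aaa"
Just ("bab","aaa")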


For example:


> knuthBendix1 [("aaaa",""), ("bb",""), ("aaab","ba")]
[("baa","aab"),("bab","aaa"),("aba","b"),("aaaa",""),("bb",""),("aaab","ba")]


A few words about the Knuth-Bendix algorithm:
- It is not guaranteed to terminate. Every time we introduce a new rule, we have the potential to create new critical pairs, and there are pathological examples where this goes on forever
- The algorithm can be made slightly more efficient, by doing things like choosing to process shorter critical pairs first. In the HaskellForMaths library, a more efficient version is given, called simply "knuthBendix"

Back to the example. So Knuth-Bendix has found three new rules. The full system, with these new rules added, has no more critical pairs. As a consequence, it is a confluent rewrite system - meaning that if you start at some given word, and reduce it using the rules, then it doesn't matter in what order you apply the rules, you will always end up at the same word. This word that you end up with can therefore be used as a normal form.
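
For instance, the troublesome word "aaabb" from earlier now has a single normal form, however we choose to reduce it:

> let rules = knuthBendix [("aaaa",""), ("bb",""), ("aaab","ba")]
> rewrite rules "aaabb"
"aaa"
> rewrite rules "bab"
"aaa"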

This allows us to "solve the word problem" for this group. That is, given any two words in the generator symbols, we can find out whether they represent the same group element by rewriting them both, and seeing if they end up at the same normal form. For example:


> let rules = knuthBendix [("aaaa",""), ("bb",""), ("aaab","ba")]
> rewrite rules "aaaba"
"aab"
> rewrite rules "baabb"
"aab"
> rewrite rules "babab"
"b"


So we see that "aaaba" and "baabb" represent the same group element, whereas "babab" represents a different one. (If you want, you could go back and check this using the original permutations.)

We can even list (the normal forms of) all elements of the group. We start with the empty word (which represents the identity element of the group), and then incrementally build longer and longer words. At each stage, we look at all combinations that can be formed by prepending a generator symbol to a word from the preceding stage. However, if we ever come across a word which can be reduced, then we know that it - and any word that could be formed from it at a later stage - is not a normal form, and so can be discarded. Here's the code:

nfs (gs,rs) = nfs' [[]] where
    nfs' [] = [] -- we have run out of words
    nfs' ws = let ws' = [g:w | g <- gs, w <- ws, (not . isNewlyReducible) (g:w)]
              in ws ++ nfs' ws'
    -- the tail of w is already a normal form, so any occurrence of a left hand side must be at the front
    isNewlyReducible w = any (`L.isPrefixOf` w) (map fst rs)

elts (gs,rs) = nfs (gs, knuthBendix rs)


For example:


> elts (['a','b'], [("aaaa",""), ("bb",""), ("aaab","ba")])
["","a","b","aa","ab","ba","aaa","aab"]


As expected, we have eight elements.
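
As a cross-check, we can count the elements of the permutation group directly (this assumes the elts function exported by Math.Algebra.Group.PermutationGroup, which lists the elements of the group generated by the given permutations):

> length (elts [a,b])
8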

That's enough for now. Next time (hopefully) I'll look at some more examples.

5 comments:

  1. Great post. I had to concentrate but I think I got all of it :). And thanks for writing the Haskell for maths library.

  2. Isn't it strange for

    knuthBendix1 [("abc","x"),("b","y")]

    to give the same rules

    [("abc","x"),("b","y")]

    back?

  3. Regarding knuthBendix1 [("abc","x"),("b","y")]:

    So in the post, I took a few shortcuts in order to avoid cluttering the presentation with details. You can think of knuthBendix1 as implementing the "inner" Knuth-Bendix algorithm. For a full implementation, you should pre-process the inputs in two ways:
    1. Make sure that each rule is the right way round - i.e. going from greater to lesser in shortlex order
    2. Make sure that all the rules are reduced with respect to one another

    In your example, the problem is that the b -> y rule is the wrong way round - it should be y -> b.
    If we look at the similar example xyz -> a, y -> b, then this has the problem that the first rule is not reduced with respect to the second - it should be xbz -> a.

    Instead of knuthBendix1, you are recommended to use knuthBendix, because
    i. It is faster
    ii. It solves problem 2 for you
    iii. In the next release (0.2.2), it will also solve problem 1 for you.

  4. Question: why do we remove rules with reducible left sides? What if they have critical pairs with other rules?

  5. When we remove a rule with a reducible left side, the pairs involving that rule (which may or may not be critical) have already been added to the queue of pairs that we're going to process (in knuthBendix1). Suppose that some pair was critical. When we come to consider it, we first reduce it using the rules we now have. It may or may not still be critical. I think the point is just that we do still consider it; we haven't lost it.
