Monday 10 January 2011

The free vector space on a type, part 2

Last time, I defined the free k-vector space over a type b:

data Vect k b = V [(b,k)]
Elements of Vect k b represent formal sums of scalar multiples of elements of b, where the scalars are taken from the field k. (For example, V [(E1,5),(E3,2)] represents the formal sum 5E1+2E3.) Thus the elements of b form a basis for the vector space.
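(As a quick recap, here's a sketch of the kind of basis type we used last time - treat the exact names as illustrative, since the real definitions are in the previous post:)

data EBasis = E Int deriving (Eq, Ord, Show)

e1, e2, e3 :: Vect Rational EBasis
e1 = V [(E 1, 1)]
e2 = V [(E 2, 1)]
e3 = V [(E 3, 1)]

-- the formal sum 5E1+2E3
v :: Vect Rational EBasis
v = V [(E 1, 5), (E 3, 2)]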

We saw that there is a functor (in the mathematical sense) from the category Hask (of Haskell types and functions) to the category k-Vect (of k-vector spaces and k-linear maps). In maths, we would usually represent a functor by a capital letter, eg F, and apply it to objects and arrows by prefixing. For example, if a, b are objects in the source category, then the image objects in the target category would be called F a and F b. If f :: a -> b is an arrow in the source category, then the image arrow in the target category would be called F f.

Haskell allows us to declare a type constructor as a Functor instance, and give it an implementation of fmap. This corresponds to describing a functor (in the mathematical sense), but with a different naming convention. In our case, we declared the type constructor (Vect k) as a Functor instance. So the functor's action on objects is called (Vect k) - given any object b in Hask (ie a Haskell type), we can apply (Vect k) to get an object Vect k b in k-Vect (ie a k-vector space). However, the functor's action on arrows is called fmap - given any arrow f :: a -> b in Hask (ie a Haskell function), we can apply fmap to get an arrow fmap f :: Vect k a -> Vect k b in k-Vect (ie a k-linear map).
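(As a reminder, the instance looks something like this - a sketch of last time's definition, which applies f to each basis element and leaves the scalars alone:)

instance Functor (Vect k) where
    fmap f (V ts) = V [(f b, x) | (b, x) <- ts]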

Haskell allows us to declare only type constructors as functors. In maths, there are many functors which are not of this form. For example, the simplest is the forgetful functor. Given any algebraic category A, we have the forgetful functor A -> Set, which simply forgets the algebraic structure. The forgetful functor takes the objects of A to their underlying sets, and the arrows of A to the underlying functions.

For example, in our case, the forgetful functor k-Vect -> Hask consists in forgetting that the objects Vect k b are vector spaces (with addition, scalar multiplication etc defined), and considering them just as Haskell types; and forgetting that the arrows Vect k a -> Vect k b are linear maps, and considering them just as Haskell functions.

(Notice that when working in Haskell, the category Hask acts as a kind of stand-in for the category Set. If we identify a type with its inhabitants (which form a set), then Hask is something like the computable subcategory of Set.)


So we actually have functors going both ways. We have the free functor F: Hask -> k-Vect - which takes types and functions to free k-vector spaces and linear maps. And we have the forgetful functor G: k-Vect -> Hask - which takes free k-vector spaces and linear maps, and just forgets their algebraic structure, so that they're just types and functions again.

Note that these two functors are not mutual inverses. (G . F) is not the identity on Hask - indeed it takes b to Vect k b, both considered as objects in Hask (and similarly with arrows). The two functors are however adjoint. (I'm not going to explain what this means, but see Wikipedia, or most books on category theory.)

Whenever we have an adjunction, then in fact we also have a monad. Here's the definition for our case:

instance Num k => Monad (Vect k) where
    return a = V [(a,1)]    -- the basis element a, with coefficient 1
    V ts >>= f = V $ concat [ [(b,y*x) | let V us = f a, (b,y) <- us] | (a,x) <- ts]
    -- ie apply f to each basis element a, scaling the result by a's coefficient x

This monad is most easily understood using the monad as container analogy. A free k-vector space over b is just a container of elements of b. Okay, so it's a slightly funny sort of container. It most resembles a multiset or bag - an unordered container in which you can have more than one of each element. However, free k-vector spaces go further. A free Q-vector space is a container in which we're allowed to have fractional or negative amounts of each basis element, such as 1/2 e1 - 3 e2. In a free C-vector space, we're even allowed imaginary amounts of each element, such as i e3.
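For instance, over Q (using Haskell's Rational as a stand-in for k, and the illustrative EBasis type sketched above), the vector 1/2 e1 - 3 e2 would be represented as:

halfE1minus3E2 :: Vect Rational EBasis
halfE1minus3E2 = V [(E 1, 1/2), (E 2, -3)]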

These oddities aside though, free k-vector spaces are monads in much the same way as any other container. For example, let's compare them to the list monad.
- For a container monad, return means "put into the container". For List, return a is [a]. For k-Vect, return a is 1 a (where 1 is the scalar 1 in our field k).
- For container monads, it's most natural to look next at the join operation, which combines a container of containers into a single container. For List, join = concat. For k-Vect, join combines a linear combination of linear combinations into a single linear combination (in the obvious way - see the sketch after this list).
- Finally, there is bind, or (>>=). bind can be defined in terms of join and fmap:
x >>= f = join (fmap f x)
For lists, bind is basically concatMap. For k-Vect, bind corresponds to extending a function on basis elements to a function on vectors "by linearity". That is, if f :: a -> Vect k b, then (>>= f) :: Vect k a -> Vect k b is defined (in effect, by structural induction over the formal sum) so as to be linear: (>>= f) 0 = 0, (>>= f) (k a) = k (f a) for a basis element a, and (>>= f) (u + v) = (>>= f) u + (>>= f) v.
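To make the join description concrete, here's a sketch of what it amounts to for (Vect k). (The name joinV is hypothetical, just for illustration - the monad's actual join does the same thing.)

-- flatten a linear combination of linear combinations,
-- multiplying through by the outer scalars
joinV :: Num k => Vect k (Vect k b) -> Vect k b
joinV (V ts) = V $ concat [ [(b, x*y) | (b,y) <- us] | (V us, x) <- ts ]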

So k-Vect is just a strange sort of list. Think of them as distant cousins. Incidentally, this is not only because they're both containers (if you believed my story about k-Vect being a container). There's another reason: both the List and k-Vect monads arise from free-forgetful adjunctions. In the case of lists, the list datatype is the free monoid, and the list monad arises from the free-forgetful adjunction for monoids.
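(For comparison, here are the list monad's operations written out as standalone functions - hypothetical names, to avoid clashing with the Prelude:)

returnL :: a -> [a]
returnL a = [a]

joinL :: [[a]] -> [a]
joinL = concat

bindL :: [a] -> (a -> [b]) -> [b]
bindL xs f = joinL (map f xs)   -- ie concatMap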

That's most of what I wanted to say for now. However, there's one small detail to add. Last time we defined a normal form for elements of Vect k b, in which the basis elements are in order, without repeats, and none of the scalars are zero. In order to calculate this normal form, we require an Ord instance for b. Unfortunately, Haskell doesn't let us impose that constraint when defining the Monad instance for (Vect k). So whenever we use (>>=), we should call nf afterwards, to put the result in normal form.

For this reason, we define a convenience function:

linear :: (Ord b, Num k) => (a -> Vect k b) -> Vect k a -> Vect k b
linear f v = nf $ v >>= f

Given f :: a -> Vect k b, linear f :: Vect k a -> Vect k b is the extension of f from basis elements to vectors "by linearity". Hence, linear f is guaranteed to be linear, by construction. We can confirm this on an example using the QuickCheck properties we defined last time.

> let f (E 1) = e1; f (E 2) = e1 <+> e2; f _ = zero
> (linear f) (e1 <+> e2)
2e1+e2
> quickCheck (prop_LinearQn (linear f))
+++ OK, passed 100 tests.


Acknowledgement: I'm partly retracing Dan Piponi's steps here. He first described the free vector space monad here. When I come to discuss the connection between quantum algebra and knot theory, I'll be revisiting some more material that Dan sketched out here.

Exercise for next time: What does Vect k (Either a b) represent? What does Vect k (a,b) represent?

2 comments:

  1. So, why exactly do we need to keep calling nf here? I can immediately see two reasons this might be the case:
    1. We're really computing (>>=) as a list concatenation for the terms rather than doing any sort of summation at all in this step. This is potentially a valid thing to do — I know Afra Zomorodian has used something like that to speed up Gröbner basis style computations of persistent homology… But in my experience, you'll want to know that you'll be doing enough of the right kinds of operations for this to pay off.
    2. We're worrying about applying subsequent lifted functions to “too many” basis elements. This seems like something laziness should “magically” take care of, as long as we take a coefficient known to be 0 as sufficient reason not to look at (that is, not force) its corresponding basis element.

  2. Why call nf?
    As you say, it probably depends on what we're doing. I'm thinking of the normal form as the true identity of the vector, so I feel we definitely need to call nf at the end of any extended calculation (before "show"). In addition, with the current Eq instance, we need to call nf before (==).
    It's not strictly necessary to call nf after every step in a calculation, but I suspect it will often be more efficient to do so.
    As a simple example, consider the following:
    > let f = \b -> case b of E 1 -> e1 <+> e2; E 2 -> e2 <+> (-1) *> e1
    > e1 >>= f >>= f >>= f >>= f
    e1+e2-e1+e2-e1-e2-e1+e2-e1-e2+e1-e2-e1-e2-e1+e2
    > nf it
    -4e1

