IMPORTANT: Note that the implementation herein is severely flawed. Do
not use this.
One subject I spent a lot of time thinking about this spring was taking
tensor products of A∞-algebras. This turns out to have already been
solved, and to have a very combinatorial and rather neat solution.
Recall that we can describe ways of associating operations, together
with homotopies between associators, by a sequence of polyhedra Kn,
n=2,3,..., called the associahedra. An A∞-algebra can be defined as a
map from the cellular chains on the associahedra to n-ary endomorphisms
of a graded vector space.
If this was incomprehensible to you, no matter for this post. The
essence is that by figuring out how to deal with these polyhedra, we can
figure out how to deal with A∞-algebras.
A tensor product of algebras corresponds, basically, to the direct
product of the polyhedra, and we get precisely the thing we want if
only we can find a map from the cellular chains on Kn to the cellular
chains on Kn x Kn fulfilling a few neat properties.
This map is what is known in the literature as the Saneblidze-Umble
diagonal, named after the people who introduced it in a paper from the
late 1990s.
They describe in that paper, and more clearly in a later paper, how to
actually get hold of one map with the right properties. The road goes
via basic matrices, derived matrices and connecting matrices to trees
describing associations. I'd like to take this opportunity to work
through the implementation I just wrote of an enumerator for these,
displaying the code and discussing it a little.
This post is written as literate Haskell, so copy the text into a
file, save it as SaneblidzeUmble.lhs, and you're ready to go.
So, we will build a module named SaneblidzeUmble that shall contain the
code:
> module SaneblidzeUmble where
We're going to juggle a lot with lists, and a little with potentially
failing computations, so we include a couple of standard utility
modules too. Note that we hide Data.List's own permutations, since we
are about to define our own function by that name.

> import Data.List hiding (permutations)
> import Data.Maybe
Permutations and staircase matrices
The problem we want to solve in the end is to enumerate all the terms
entering into a diagonal on Kn x Kn for one fixed n.
These terms are, using terminology to be introduced below, in
correspondence with derived matrices of staircase matrices with entries
from [1,..,n-1]. A staircase matrix is one with one entry for each
diagonal (parallel to the main diagonal), and such that along each row
and column all entries are adjacent and strictly increasing down and to
the right.
As an example, we have the matrix
.14
25.
3..
Given a permutation of [1,..,n-1], we can write the images in the
order they occur, and then take this as a description of the entries of
a staircase matrix, read from the lower left to the upper right.
It turns out that the basic matrices involved in this process are in
bijection with permutations of [1,..,n-1]: place the digits into the
matrix one by one, going up whenever the next entry is smaller, and to
the right whenever it is larger.
Thus, the example above is given by the permutation (32514): start at
the lower left with 3; 2 is smaller, so go up; 5 is larger, so go
right; 1 is smaller, so go up; and finally 4 is larger, so go right.
So, to handle this, we need to be able to list all permutations, as
well as pick out all rising and all falling runs of a permutation.
> type Permutation = [Int]
> permutations :: Int -> [Permutation]
> permutations n = permuteList [1..n]
> permuteList :: Eq a => [a] -> [[a]]
> permuteList [] = []
> permuteList [a] = [[a]]
> permuteList list = concatMap (\x -> map (x:) (permuteList (list \\ [x]))) list
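
As a quick sanity check, once the file is loaded into GHCi we should be
able to list, say, all permutations of [1,2,3] (the prompt below is
simply what GHCi shows after loading this module):

*SaneblidzeUmble> permutations 3
[[1,2,3],[1,3,2],[2,1,3],[2,3,1],[3,1,2],[3,2,1]]
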
This is almost readable as it stands. The permutations of [1..n] are
just the permutations of the list [1..n].
There are no permutations of an empty list, a list with one element has
exactly one permutation (itself), and for any larger list we pick out
one element at a time, permute the rest and prepend the chosen element.
The collected results are compiled into a single long list and
returned. This solves our problem by recursion, neatly and with a close
connection to the mathematical thought behind it.
Now, for chopping permutations into monotonic runs, I'll solve it in a
manner that slightly reduces the amount of code repetition. We have a
"factory function"
> monotonic :: (a -> a -> Bool) -> [a] -> [[a]]
> monotonic _ [] = []
> monotonic _ [x] = [[x]]
> monotonic lt (x:y:etc) = if lt x y
>     then (x:s):ss
>     else [x]:(s:ss)
>   where (s:ss) = monotonic lt (y:etc)
and two use cases of this
> risers, fallers :: Ord a => [a] -> [[a]]
> risers = monotonic (<=)
> fallers = monotonic (>=)
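
To see what these do, here is what I would expect them to give on the
example permutation from above:

*SaneblidzeUmble> risers [3,2,5,1,4]
[[3],[2,5],[1,4]]
*SaneblidzeUmble> fallers [3,2,5,1,4]
[[3,2],[5,1],[4]]
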
Using this, we can build the matrix out of a given permutation. The
only thing about such a matrix that is really interesting for the
diagonal is the contents of the rows and of the columns, so that is all
we shall use. This means we can represent a matrix as the list of its
rows and the list of its columns. Since rows and columns "look
basically the same", they'll be represented by the same type. So we
have
> type Row = [Int]
> buildMatrix :: Permutation -> ([Row],[Row])
> buildMatrix p = (map sort (fallers p), reverse (risers p))
The reason we use map sort and reverse is to make the result fit the
conventions chosen by Saneblidze and Umble. Note: when actually reading
the results, you should take care to read the columns backwards.
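For the example permutation (32514), I would expect the following; the
first component collects the falling runs (sorted) and the second the
rising runs (in reverse), which can be compared with the columns and
rows of the matrix drawn earlier:

*SaneblidzeUmble> buildMatrix [3,2,5,1,4]
([[2,3],[1,5],[4]],[[1,4],[2,5],[3]])
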
Deriving staircase matrices
Deriving the matrices is done with so-called moves. We have a downward
move and a rightward move, both consisting of picking a subset of
elements in one row or one column and pushing them over to the next,
under the condition that all the moved elements have to be larger than
the largest element they move over to. They also have to move to empty
positions, but the ordering conditions for the staircase matrices
guarantee that if they really are larger, then the corresponding
positions are already empty.
So, we can write a function that gives us all the subsets of the first
column/row that might possibly be moved to the second, in a given
column/row-list of a matrix:
> listMovable :: [Row] -> [[Int]]
> listMovable [] = []
> listMovable [a] = []
> listMovable (a:b:as) = filter (not.null) $ subsets (filter (>(m b)) a) where
>   m [] = 0
>   m b = maximum b
This function picks out all the subsets of elements that fulfill the
requirement of being larger than the largest element of the next row.
But hang on - this subset-function actually isn't defined yet. No
worries - it's easily done:
> subsets :: [a] -> [[a]]
> subsets [] = [[]]
> subsets (a:as) = map (a:) (subsets as) ++ subsets as
Again, we define everything recursively, relying on a base case of
blinding simplicity, and end up with a very mathematical-looking
description. Here, the only subset of the empty set is the empty set
itself, and the subsets of any non-empty set split into those
containing a chosen element and those that do not. We simply list first
those with that element and then those without.
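For instance, on a small made-up row list (not one of our staircase
matrices, just an illustration), I would expect:

*SaneblidzeUmble> subsets [1,2]
[[1,2],[1],[2],[]]
*SaneblidzeUmble> listMovable [[2,3],[1]]
[[2,3],[2],[3]]

so as far as listMovable is concerned, every nonempty subset of [2,3]
could be pushed over onto the row [1].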
Now, for the actual deriving move, we might run into a host of trouble,
so we'll just let it return an error value when needed. It first checks
a few possible error cases, and then writes down the actual move by
picking out the rows it needs, modifying those, and putting everything
back together again at the end.
Suppose we have some set m we want to move, and a list of the rows
where it could be moved. We'll first want to check whether all of m
sits together in a single row. Thus, for each row, we pick out all
elements that are in our set m. But we don't really care what these
elements are - we only need to know how many there are in each row. So
we compile a list of the lengths of these filtered-out pieces.
We require that exactly one row contains all the pieces we want to
move, and also that this single row sits in a suitable position within
the matrix.
> moveDerivate :: [Row] -> [Int] -> Maybe [Row]
> moveDerivate rows [] = Nothing
> moveDerivate rows m = returnVal where
>   movableInRow = map (filter (flip elem m)) rows
>   numberMovableInRow = map length movableInRow
>   countRowsWithMovable = length (filter (==length m) numberMovableInRow)
>   indexRow = findIndex (==length m) numberMovableInRow
>   returnVal
>     | countRowsWithMovable /= 1 = Nothing
>     | isNothing indexRow = Nothing
>     | index > length rows - 2 = Nothing
>     | minimum m < maximum (rows !! index) = Nothing
>     | otherwise = moveThem rows m index
>     where
>       index = fromJust indexRow
>       moveThem rs mv idx = Just $
>         take idx rs ++
>         [(rs !! idx) \\ mv] ++
>         [sort (rs !! (idx + 1) ++ mv)] ++
>         drop (idx + 2) rs
That monstrous expression at the end is known as a guard expression. It
works through the cases until one of the conditions is true, and then
returns whatever is after the equals sign as the value. The conditions
I listed weed out all the pathologies, and let the program pick out
something sensible whenever possible.
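On the same made-up row list as before, I would expect a singleton move
to succeed and the degenerate empty set to be rejected:

*SaneblidzeUmble> moveDerivate [[2,3],[1]] [3]
Just [[2],[1,3]]
*SaneblidzeUmble> moveDerivate [[2,3],[1]] []
Nothing
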
So, we can list all possibilities of one row, and we can move them. This
should be enough to simply generate all derived matrices, right? Let's
start simple. Let's start with all we can do on one side of the pair -
either all possible down moves or all possible right moves (they are
essentially the same anyway...)
> derivedRows :: [Row] -> [[Row]]
> derivedRows m = derivedAcc [] [m] where
>   derivedAcc out [] = out
>   derivedAcc out (q:queue) = derivedAcc (out ++ [q]) (queue ++ rest) where
>     movables = (concatMap listMovable . tails) q
>     rest = mapMaybe (moveDerivate q) movables
Guiding you through this piece of code: when faced with a list of rows,
we look at each starting point, list all the possibly movable sets at
that point, and perform the corresponding moves, placing all the
results at the back of a queue that we work through. Once the queue is
empty, we know we have worked through the results of all the moves we
could find, and that no others can be found.
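Again on the made-up example: of the candidate sets that listMovable
offered, only pushing the 3 down gets past the guards, so I would
expect the original row list together with a single derivative:

*SaneblidzeUmble> derivedRows [[2,3],[1]]
[[[2,3],[1]],[[2],[1,3]]]
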
So, what about our pair of row-lists? We first move to the right, and
then down. At each point, we keep the other half fixed, and simply tack
it onto all the results of moving in the first part.
> derivedMatrices :: ([Row],[Row]) -> [([Row],[Row])]
> derivedMatrices matrix@(m1,m2) = nub (derivedFirst [] [matrix]) where
>   derivedFirst matrices [] = derivedSecond [] matrices
>   derivedFirst out ((q1,q2):queue) = derivedFirst (out ++ new) queue where
>     new = map (flip (,) q2) (derivedRows q1)
>   derivedSecond matrices [] = matrices
>   derivedSecond out ((q1,q2):queue) = derivedSecond (out ++ new) queue where
>     new = map ((,) q1) (derivedRows q2)
This first works through one direction, taking all derived rows, and
tacking on the constant second half, then when that's done, changes to
the same but for the other half of the pair.
Now, we're almost able to state what a specific diagonal looks like.
There is only one more thing to handle: we might not want all the
results of a particular computation. More specifically, we recognize
something we do not want by it having a row that isn't consecutive, not
even when the previously occurring rows are included. So, we need to
check for that.
> checkTree :: [Row] -> Bool
> checkTree rows = checkAcc [] rows where
>   checkAcc seen [] = True
>   checkAcc seen ([]:morerows) = checkAcc seen morerows
>   checkAcc seen (row:morerows)
>     | isOK = checkAcc newseen morerows
>     | otherwise = False
>     where
>       newseen = seen ++ row
>       minimal = minimum newseen
>       maximal = maximum newseen
>       expectedLength = maximal - minimal + 1
>       isOK = (expectedLength == length newseen)
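
A row list whose rows fill out a consecutive range passes the check,
while one with a gap does not, so I would expect:

*SaneblidzeUmble> checkTree [[1,2],[3]]
True
*SaneblidzeUmble> checkTree [[1,3],[2]]
False
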
Now it's just a matter of putting the pieces together
> diagonalTerms :: Int -> [([Row],[Row])]
> diagonalTerms n = filter checkTrees allMatrices where
>   checkTrees (t1,t2) = checkTree t1 && checkTree (reverse t2)
>   allMatrices = concatMap derivedMatrices matrices
>   matrices = map buildMatrix (permutations n)
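
As a small smoke test: for n=2 there are no derived matrices at all, so
I would expect the two terms to come straight from the two permutations
of [1,2]:

*SaneblidzeUmble> diagonalTerms 2
[([[1],[2]],[[1,2]]),([[1,2]],[[1],[2]])]

Larger values of n get expensive quickly, as the table below shows.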
Previously, a student of Ron Umble wrote an implementation of the same
task in C++. According to Umble, it cannot handle anything larger than
n=5. This Haskell implementation, while surprisingly short and
readable, has achieved the following on my (64-bit, modern, high-memory)
workstation:
n | number of terms | execution time | memory used
--|-----------------|----------------|------------
5 |              46 |         0.02 s |          4M
6 |             124 |         0.33 s |         62M
7 |             335 |         8.77 s |        1.9G
8 |             911 |         1020 s |        441G