**Simplicial objects**

A simplicial object in a category is a collection of objects $C_n$ together with $n+1$ face maps $d_j:C_n\to C_{n-1}$ and $n+1$ degeneracy maps $s_j:C_n\to C_{n+1}$ (for $0\le j\le n$), subject to a particular collection of axioms.

The axioms are modeled on the prototypical simplicial structure: if $C_n$ is taken to be a (weakly) increasing list of integers, $d_j$ skips the $j$th entry, and $s_j$ repeats the $j$th entry. For the exact relations, I refer you to Wikipedia.
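In the prototype, these maps are just list operations, and the simplicial identities can be checked mechanically. A quick sketch (the function names are my own):

```python
# Prototypical simplicial structure on (weakly) increasing integer lists:
# d(j, xs) skips the j-th entry, s(j, xs) repeats the j-th entry.

def d(j, xs):
    return xs[:j] + xs[j+1:]

def s(j, xs):
    return xs[:j+1] + xs[j:]

xs = [0, 1, 1, 2, 4]

# One of the simplicial identities: d_i d_j = d_{j-1} d_i for i < j.
for j in range(len(xs)):
    for i in range(j):
        assert d(i, d(j, xs)) == d(j - 1, d(i, xs))

# And its degeneracy counterpart: s_i s_j = s_{j+1} s_i for i <= j.
for j in range(len(xs)):
    for i in range(j + 1):
        assert s(i, s(j, xs)) == s(j + 1, s(i, xs))
```

The remaining mixed identities relating $d_i$ and $s_j$ can be checked the same way.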

**Classifying spaces**

Take a category C. We can build a simplicial object $C_*$ from C by the following construction:

$C_n$ is all sequences $c_0\xrightarrow{f_0}c_1\xrightarrow{f_1}\dots\xrightarrow{f_{n-1}}c_n$ of composable morphisms in $C$.

$d_j$ (for $0<j<n$) removes the object $c_j$ by composing the two morphisms on either side of it. $d_0$ drops the first morphism and $d_n$ drops the last morphism.

$s_j$ inserts the identity map at the $j$th spot.

We can build a topological space from a simplicial object: dimension by dimension, we glue the product $\Delta^n\times C_n$ of an $n$-dimensional simplex and the object $C_n$ onto the lower dimensional construction, with the face maps dictating the gluing step.

**Classifying space of Hask**

Consider the category of Haskell types and functions. The classifying space here will have an $n$-dimensional simplex for any sequence of $n$ composable functions, glued to the lower-dimensional simplices by composing adjacent functions.

I honestly am not entirely sure what structure, if any, can be found here: it might model $\beta$-reductions, or it might model something utterly trivial. I wonder what, if anything, can be found here, but I currently have no answers.

**Classifying space of a Monad**

We can do the same thing in the category of endofunctors. Recall that a Monad is an endofunctor $T$ with natural transformations $b:T^2\to T$ and $u:1\to T$; in other words, a monoid in the category of endofunctors, so the classifying space construction for monoids fits in this framework. The simplicial object here is $T_*$ with $T_n=T^{n+1}$. The face maps $d_j$ compose two adjacent applications of $T$ using $b$. Degeneracy maps $s_j$ introduce a new $T$ by applying $u$.

For an example, let’s consider the Writer monad, for a monoid `w`:

```haskell
newtype Writer w a = Writer { runWriter :: (a, w) }

-- informally, the unit u and the multiplication b:
return a = (a, mempty)
join ((a, w0), w1) = (a, w0 `mappend` w1)
```

The endofunctor here is `Writer w :: * -> *`. Iterating this endofunctor gets us `(Writer w)^n a = ((...((a,w),w),...),w)`, with `n` copies of the monoid attached. A typical value would be something like

`((...((a,w0),w1),...),w(n-1)) :: (Writer w)^n a`

We get a simplicial structure here by letting the $j$th face map, for $j\geq 1$, apply `mappend` to the adjacent entries `w(j-1)` and `wj`, while the face map $d_0$ drops `w0`. The $j$th degeneracy map inserts a copy of `mempty` at the $j$th spot.
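This structure is easy to write down in code. Here is a language-agnostic sketch (the function names are mine), treating a value of the iterated Writer monad as a pair of the underlying value and its list of monoid entries:

```python
# A value of the n-fold iterated Writer monad, flattened: (x, [w0, ..., w(n-1)]).
# `merge` stands in for mappend, `unit` for mempty; names are my own.

def face(j, value, merge):
    """j-th face map: d_0 discards the first entry; d_j (j >= 1) merges
    entries j-1 and j with the monoid operation."""
    x, ws = value
    if j == 0:
        return (x, ws[1:])
    return (x, ws[:j-1] + [merge(ws[j-1], ws[j])] + ws[j+1:])

def degeneracy(j, value, unit):
    """j-th degeneracy map: insert the monoid unit at spot j."""
    x, ws = value
    return (x, ws[:j] + [unit] + ws[j:])

# For Writer Any: merge is (||), unit is False.
any_merge = lambda a, b: a or b

v = ('x', [False, True, False])
faces = [face(j, v, any_merge)[1] for j in range(3)]
```

With `merge` set to boolean or, this reproduces the gluing tables worked out later in the post.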

Let’s get more concrete. Consider `Writer Any Bool`. Recall that for `Any`, ``w0 `mappend` w1`` is `w0 || w1`. For any value `x :: Bool`, there are $2^n$ values in `(Writer Any)^n Bool`: one for each possible sequence of inputs to the Writer monad. Each of these produces an $(n-1)$-dimensional simplex, which glues to the lower dimensional simplices by fusing neighboring points.

In `(Writer Any)^1 Bool` there are two vertices: `(x,False)` and `(x,True)`.

In `(Writer Any)^2 Bool` there are four edges: `(x,True,True)`, `(x,True,False)`, `(x,False,True)` and `(x,False,False)`.

These connect to the vertices like this (each edge listing its two faces in order):

`FF` to `F, F`

`FT` to `T, T`

`TF` to `F, T`

`TT` to `T, T`

In `(Writer Any)^3 Bool` there are eight triangles. They glue as follows — each triangle gets its three edges in order:

`FFF` to `FF, FF, FF`

`FFT` to `FT, FT, FT`

`FTF` to `TF, TF, FT`

`FTT` to `TT, TT, FT`

`TFF` to `FF, TF, TF`

`TFT` to `FT, TT, TT`

`TTF` to `TF, TF, TT`

`TTT` to `TT, TT, TT`

It continues like this as we add higher and higher dimensional faces. Any history of $n$ writes to the monad produces an $(n-1)$-dimensional simplex, which connects to lower dimensions by looking at how the history could have been contracted by the monoid action at various points.

For the monad `Writer All Bool`, the same vertices, edges, triangles, etc. show up, but they connect differently:

`FF` to `F, F`

`FT` to `T, F`

`TF` to `F, F`

`TT` to `T, T`

`FFF` to `FF, FF, FF`

`FFT` to `FT, FT, FF`

`FTF` to `TF, FF, FF`

`FTT` to `TT, FT, FT`

`TFF` to `FF, FF, TF`

`TFT` to `FT, FT, TF`

`TTF` to `TF, TF, TF`

`TTT` to `TT, TT, TT`
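Tables like these are easy to generate mechanically. Here is a throwaway enumeration (names are mine; `merge` stands in for the monoid operation) that reproduces them for any monoid:

```python
from itertools import product

def faces(bits, merge):
    """All faces of the simplex labelled by `bits`: drop the first entry,
    or merge two adjacent entries with the monoid operation."""
    out = [bits[1:]]
    for j in range(len(bits) - 1):
        out.append(bits[:j] + (merge(bits[j], bits[j+1]),) + bits[j+2:])
    return out

def show(bits):
    return ''.join('T' if b else 'F' for b in bits)

def table(n, merge):
    """Gluing table for all length-n histories over {False, True}."""
    return {show(bits): [show(f) for f in faces(bits, merge)]
            for bits in product([False, True], repeat=n)}

any_table = table(3, lambda a, b: a or b)    # Writer Any triangles
all_table = table(3, lambda a, b: a and b)   # Writer All triangles
```

Swapping in any other finite monoid operation for `merge` gives the corresponding gluing data.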

**What can we use this for?**

And here, at last, is my reason for writing all this. This construction gets us a topological space, where cells in the space (possibly even points in the space) correspond to possible values of the iterated monad datatype, and where connectivity in the space is driven by the monad. This gives us a geometric, a topological, handle on the programming language structure.

Is this good for anything?

Are there any sort of questions we might be able to approach with this sort of perspective?

I’ll be thinking about it, but if anyone out there has any sort of idea I’d love to hear from you.

Data analysis has played a growing role in politics for many years now; analyzing polling data to predict outcomes of elections is perhaps the most well-known application.

A different approach that has gotten more and more traction lately is to analyze the voting behaviour of elected representatives as a way to understand the inner workings of parliaments, and to monitor the elected representatives to make sure they behave as they once promised. Sites like GovTrack and VoteView bring machine learning and data analysis tools to the citizens, and illustrate and visualize the groupings and behaviour in political administration.

One key step to make parliamentary data accessible for data analysis is to recognize that the data set is inherently geometric; the sequence of votes cast in a parliamentary session can be seen as coordinates for a very high-dimensional vector — say +1 for Yea, -1 for Nay, and 0 otherwise. This way, each member of parliament is represented by a vector; and members who agree with each other will be close to each other in the resulting vector space.

We can use dimension reduction techniques on these vectors to find essential structures within parliaments. At a first approach, this tends to uncover the party structure of parliament:
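To make the pipeline concrete, here is a small synthetic sketch (all data and names are invented; a real analysis would use actual roll-call records): build the ±1 vote matrix and project it to 2d with PCA via the SVD.

```python
import numpy as np

rng = np.random.default_rng(0)
n_votes = 200

# Synthetic roll-call matrix: one row per member, one column per vote;
# +1 for Yea, -1 for Nay (0 would mark abstentions). Two "parties" vote
# along opposite party lines, with 10% defections.
party_line = rng.choice([-1, 1], size=n_votes)

def member(line):
    defect = rng.random(n_votes) < 0.1
    return np.where(defect, -line, line)

votes = np.array([member(party_line) for _ in range(30)] +
                 [member(-party_line) for _ in range(30)])

# PCA via SVD of the centered matrix: each member gets 2d coordinates.
centered = votes - votes.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
coords = centered @ Vt[:2].T

# The first principal coordinate separates the two blocs.
```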

This image shows the US House of Representatives, laid out in 2d by PCA coordinates from their voting records. The span from Democrats (blue) to Republicans (red) is clearly seen.

This plot shows the UK House of Commons during the period 2001-2005, again with 2d positions generated by a PCA dimension reduction on the voting data. Colours indicate political parties, with Labour (red), the Conservatives (blue), and the Liberal Democrats (yellow) the dominant parties, and smaller parties in other colours.

Topological Data Analysis is a brand new field of research, where algebraic and combinatorial topology is used to refine geometric methods. Topology encodes «closeness» and provides a robust approach to qualitative features in data sets.

In particular, for the political data, we are using a technique called Mapper. This technique, invented in the Applied Topology group at Stanford, produces easy to analyze topological models induced by point clouds: the points are divided using measurement functions, and then clustered into closely connected groups. Whenever groups overlap, they are connected, producing a simplified model for the space suggested by the data points.
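To fix intuitions, here is a toy, from-scratch version of the Mapper idea (my own drastic simplifications and names, nothing like the production implementations): filter the points through a measurement function, cover the filter range with overlapping intervals, cluster within each piece, and connect clusters that share points.

```python
import numpy as np

def mapper_1d(points, filt, n_intervals=4, overlap=0.3, eps=1.0):
    """Crude Mapper sketch: a 1d filter function, an overlapping interval
    cover of its range, single-linkage clustering at scale eps per piece."""
    values = np.array([filt(p) for p in points])
    lo, hi = values.min(), values.max()
    length = (hi - lo) / n_intervals
    nodes = []                                  # each node: a set of point indices
    for i in range(n_intervals):
        a = lo + i * length - overlap * length
        b = lo + (i + 1) * length + overlap * length
        idx = [j for j, v in enumerate(values) if a <= v <= b]
        remaining = set(idx)
        while remaining:                        # peel off connected components
            comp = {remaining.pop()}
            grew = True
            while grew:
                grew = False
                for j in list(remaining):
                    if any(np.linalg.norm(points[j] - points[k]) <= eps for k in comp):
                        comp.add(j)
                        remaining.discard(j)
                        grew = True
            nodes.append(comp)
    # connect nodes that share points
    edges = [(i, j) for i in range(len(nodes)) for j in range(i + 1, len(nodes))
             if nodes[i] & nodes[j]]
    return nodes, edges
```

On a line of evenly spaced points, with the coordinate itself as filter and a two-piece cover, this yields two overlapping clusters joined by a single edge: a tiny model of the interval.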

Applying this to the parliamentary voting data, we can discover sub-groups in the US House of Representatives that were not visible in a PCA dimension reduction. Not only that, but we can measure political unrest by the fragmentation of parliament into larger or smaller interest groups.

Mapper analyses of House of Representatives voting records for 2009, 2010 and 2011, using Iris from Ayasdi.

Here is a story I heard today that I would like to share. A good friend of mine (and a university teacher of mathematics) explained how he handles cheaters on his mathematics exams. He sits down with the student and suggests that they go through all the explanations for why the suspected cheating is not cheating; for each excuse, they assess the probability that it could have happened that way, and combine the probabilities.

“I missed that we weren’t allowed calculators.”

“All right then. Shall we say 50/50 on that one?”

“Mmmmm.”

And so it continues. Finally, after enough excuses, they compute the probability that everything really happened the way the student said, using the probabilities the student himself agreed to, and the student is given the corresponding share of the points he would otherwise have received. In exchange, the student avoids the disciplinary board and the CHEATER stamp, and walks away with nothing worse than a failed exam…

Every student who gets caught cheating agrees to it. Every one of them thinks it sounds like a good deal when it all starts.

Thente thinks we don’t need any intuition for numbers, since we all have calculators anyway. Does that opinion still hold when we discuss tax rates in Almedalen? Does it still hold when we try to work out whether a grandmother who downloaded Pippi Longstocking from a torrent for her grandchild really cost Hollywood a couple of million kronor? Or is it perhaps the case that mathematics is still relevant, because it provides the skill to think about and handle numerical quantities?

Write a function

uint64_t bitsquares(uint64_t a, uint64_t b);

such that it returns the number of integers in [a,b] that have a square number of bits set to 1. Your function should run in less than O(b-a) time.

I think I see how to do it in something like logarithmic time. Here’s how:

First off, we notice that we can list all the squares between 0 and 64: these are 0, 1, 4, 9, 16, 25, 36, 49, and 64. The function I will propose will run through a binary tree of depth 64, shortcutting through branches whenever it can. In fact, changing implementation language completely, I wonder if I cannot even write it comprehensibly in Haskell.

The key insight I had was that whenever you try to find the number of numbers with a bitcount matching some element of some list within the bounds of 0b0000…0000 and 0b000…01111…11, then it reduces to a simple binomial coefficient — n choose k gives the number of numbers with k bits set among the n last. Furthermore, we can reduce the total size of the problem by removing a matching prefix from the two numbers we test from.
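This binomial-coefficient observation is easy to sanity-check by brute force over a small bit width (a throwaway snippet, separate from the solution below):

```python
from math import comb

# Among the 2^n numbers in [0, 2^n - 1], exactly C(n, k) have k bits set.
n = 10
for k in range(n + 1):
    brute = sum(1 for x in range(2**n) if bin(x).count('1') == k)
    assert brute == comb(n, k)
```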

Hence, we trace how many bits off the top agree between the two numbers. We count the set bits among these, subtract them from each representative in the list of squares, giving us the counts we need to hit in the remainder.

Write a’ for a with the agreeing prefix removed, and similarly for b’. Then the total count is the count for the reduced things from a’ to 0b000…01…111 plus the count for the reduced things from 0 to b’. The reduction count for b’ needs to be 1 larger than the one for a’ since in one case, we are working with the prefix before the varying bit increases, and in the other, we work with the prefix after the varying bit increases — the latter count is not *really* from 0 to b’, but this is a useful proxy for the count from 0b0000…010…000 to b’ with the additional high bit set.

In code, I managed to boil this down to:

```haskell
import Data.Word
import Data.Bits hiding (popCount)
import Data.List (elemIndices)

-- # integers in [a,b] with a square # of 1s
bitsquare :: Word64 -> Word64 -> Word64
bitsquare a b = bitcountin a b squares

squares = [0,1,4,9,16,25,36,49,64] :: [Word64]

allones = [fromIntegral (2^k - 1) | k <- [1..64]] :: [Word64]

choose n 0 = 1
choose 0 k = 0
choose n k = (choose (n-1) (k-1)) * n `div` k

popCount :: Word64 -> Word64
popCount w = sum [1 | x <- [0..63], testBit w x]

-- # integers in [a,b] with 1-counts in counts
bitcountin :: Word64 -> Word64 -> [Word64] -> Word64
bitcountin a b counts
  | a > b = 0
  | a == b = if popCount b `elem` counts then 1 else 0
  | (a == 0) && (b `elem` allones) = sum [choose n k | n <- [popCount b], k <- counts]
  | otherwise = (bitcountin a' low [c - lobits | c <- counts, c >= lobits]) +
                (bitcountin hi b' [c - hibits | c <- counts, c >= hibits])
  where
    agreements = [(testBit a n) == (testBit b n) | n <- [0..63]]
    agreeI = elemIndices False agreements
    prefixIndex = last agreeI
    prefixCount = sum [1 | x <- [prefixIndex..63], testBit a x]
    a' = a .&. (2^prefixIndex - 1)
    b' = b .&. (2^prefixIndex - 1)
    low = 2^prefixIndex - 1
    hi = 0
    lobits = prefixCount
    hibits = prefixCount + 1
```

This meeting will take place July 2-6 at the ICMS in Edinburgh, Scotland. The theme will be applications of topological methods in various domains. Invited speakers are:

J.D. Boissonnat (INRIA Sophia Antipolis) (Confirmed)

R. Van de Weijgaert (Groningen) (Confirmed)

N. Linial (Hebrew University, Jerusalem) (Confirmed)

S. Weinberger (University of Chicago) (Confirmed)

S. Smale (City University of Hong Kong) (Confirmed)

H. Edelsbrunner (IST, Austria) (Confirmed)

E. Goubault (Commissariat à l’énergie atomique, Paris) (Confirmed)

S. Krishnan (University of Pennsylvania) (Confirmed)

M. Kahle (The Ohio State University) (Confirmed)

L. Guibas (Stanford University) (Confirmed)

R. Macpherson (IAS Princeton) (Tentative)

A. Szymczak (Colorado School of Mines) (Confirmed)

P. Skraba/ M. Vejdemo-Johansson (Ljubljana/St. Andrews) (Confirmed)

Y. Mileyko (Duke University) (Confirmed)

D. Cohen (Louisiana State)

V. de Silva (Confirmed)

There will be opportunities for contributed talks. Titles and abstracts should be sent to Gunnar Carlsson at gunnar@math.stanford.edu.

The conference website is located at http://www.icms.org.uk/workshops/atmcs5. Those interested should register there as soon as possible so that we can obtain an idea of the number of participants.

For the organizing committee,

Gunnar Carlsson

And I care about the tools I use. I care deeply about the way my citations come out, and I have significant aesthetic opinions on the matter. I like author-date citation styles: much better than the horrible abbreviated alphabet soups so popular in mathematics styles, and much more pleasant to read than the [1,2,5-7] that seems to be the dominant style in the mathematical literature. If I see (Zomorodian, 2005) or, even better, (Edelsbrunner-Letscher-Zomorodian, 2000), I’ll know immediately what the reference is about when I read in my own field, whereas a citation of [3] forces me to leaf back to check what they mean.

I have spent a long time either fiddling with natbib, or giving up entirely, or poking around various styles, never quite feeling satisfied with the solutions I ran into. Until today I finally decided to try BibLaTeX on a document I started writing.

And now I wonder why it took me so long to get here.

BibLaTeX is amazing, IMO. It is also currently VERY poorly documented in the blogosphere. The package manual exists, sure, and once you digest it, it is actually helpful. But when it comes to more tutorial-oriented, lightweight introductions, all I came up with was a very sketchy site from Cambridge with a few really artificial examples.

So here goes: my own introduction to BibLaTeX.

To use BibLaTeX, you can stick with the same BibTeX database you already carefully crafted. However, the way you interact with it in your document changes.

In your preamble, you’ll want something like

\usepackage[citestyle=authoryear]{biblatex}

\addbibresource{mybibfile.bib}

Then, where you would classically have had

\bibliographystyle{abbrv}

\bibliography{mybibfile}

you instead use the command

\printbibliography

So far, so good. The \cite command works just as expected, and so far it seems we have only made a few small changes in how and where we write the bibliography-related components of our document. The real power here comes when you start looking at the details of what BibLaTeX provides, and how you can use it.
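Putting the preamble and the citation commands together, a minimal complete document looks something like this (the bibliography file name and the citation key are placeholders; compile with latex, then biber or bibtex depending on your backend, then latex again):

```latex
\documentclass{article}
\usepackage[citestyle=authoryear]{biblatex}
\addbibresource{mybibfile.bib}

\begin{document}
As shown by \textcite{zomorodian2005}, \ldots{}
A parenthetical version \parencite{zomorodian2005} works too.

\printbibliography
\end{document}
```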

- All citation commands come in two versions. One of them will capitalize the first letter if it isn’t already capitalized, so that you can use it at the beginning of a sentence. Thus, there is both \cite{} and \Cite{}.
- Even more than that, there is a host of cite commands:
- \cite, \Cite: cite using the default settings of your current citestyle.
- \parencite, \Parencite: enclose citation in parenthesis (or square brackets for numeric citation styles).
- \footcite: put citation in a footnote.

Some styles allow other interesting citation commands as well:

- \textcite, \Textcite: for use when you want to refer to a citation in running text. Will print author names, and then a citation slug in brackets.
- \smartcite, \Smartcite: gives a footnote if used in the body of the document, and a parencite if used in a footnote.
- \cite*, \parencite*: only prints year (or title) and not author names.

- Speaking of styles, you’ll set those by adjusting the options to the biblatex package inclusion. citestyle=… influences how the inline citations work, and has support for author-year and author-title styles, with or without *ibid.* handling, and with or without compaction of many citations of the same author groups. Furthermore, there is a nice and mature numeric citation style handling, with optional compacting to [1-3,8] instead of [1,3,8,2], or expansion to individual numbers with no sharing of brackets allowed. There are a couple of different author-abbreviation styles: one that mimics the [VeJ03] style of the classic BibTeX styles, and one that compacts the author slugs as much as possible. And there are styles specifically for including full bibliography dumps at each citation, and for building reading lists.
- Each such citestyle will set up the styles for the bibliography at the end on its own. You can override those choices by adding bibstyle=… to the biblatex package options. These approximately match the styles for citestyle=…, with less fiddling with inline appearances.

And finally, there is a lot of control over how the bibliography entries are sorted. Whether to include specific fields in the BibTeX file — including doi and url.

To pick an appropriate style, you really, really should refer to the package manual. Go straight (using the ToC hyperlinks) to 3.3 Standard Styles in the Users Guide section, and the style options are cleanly and clearly described for you.

The results, let me show them to you.

This is all from an extended abstract I’m writing for a conference this summer. I used

\usepackage[citestyle=authoryear-icomp,doi=true,url=true]{biblatex}

\addbibresource{gts2012.bib}

in the preamble, and both \cite and \parencite inline.

Inline citations came out like:

And the full reference lists came out like:

I am noticing that depending on the quality of your database, there might be redundancies in the DOI and URL-enabled references — but you always have to do some cleanup of the references, and it can hardly be blamed on the software at this stage.

Go and try it out yourself. Skim the package manual. Start using it in a few documents yourself.

There are snags when you need to interface with memoir — the manual talks about this. Essentially it boils down to biblatex replacing memoir as far as bibliography styling goes. Other than that, so far, I am only happy with discovering this piece of LaTeX authoring support.

Since I work in mathematics (and, arguably, on the fringes of Theory CS), everything I write is written in LaTeX. This is for one thing *very* helpful, since it means that the actual texts we collaborate on are plain text with markup, eminently suitable for the toolkit provided by the software community.

As such, I make it a fundamental point to ALWAYS use version control software for my writing projects. Regardless of whether I write just for myself, or in a team with collaborators, everything is under version control. If my collaborators do not yet know any such system, I teach them *Mercurial*, my own personal favourite. This has varied success: if my collaborators absolutely refuse, I’ll maintain the version control interactions myself, but I have had some collaborations where my partner was enthusiastic enough to subsequently teach everyone HE works with to do the same.

Lately, I have also started looking into *Git* as well — mainly because of the existence of github, but also because of systems such as Gitolite. These put a web-layer and an easy to use admin layer on top of the source code repository, making it easy to create or modify workgroups and remove almost all the hassle of administering a server yourself to facilitate the collaboration with these tools.

As long as a collaboration is with one or two collaborators, I usually do well enough just typing their names into my email client any time I want to say anything, and I definitely make sure to create a new mailbox folder any time a clear project materializes. However, some of the things I work on materialize in larger groups forming the generic collaboration, with subgroups crystallizing around any particular part.

For these cases, I find a mailing list to be invaluable. Preferably with archiving, so you can send out ideas to the list, and rely both on getting feedback on the — often early and unrefined — ideas, and a searchable cache where you can go back and figure out what the ideas were in the first place. Since I do a lot of my email maintenance on Google anyway, the bigger projects I am in have gotten mailing lists setup that way too — using the very pleasant Google Groups interface to administer and handle the mailing lists.

Wikis, too, are a really nice and productive idea — I haven’t done anything with them myself for running collaborations, but my current workgroup uses a wiki to discuss software design issues.

For quicker turn-around and lower formality than email, I have had very good success with some of my collaborators chatting on Skype or on GTalk about our problems. This requires everyone to be comfortable enough with the text chat as a medium, but can be surprisingly productive.

And then, every so often, you just have to meet up and talk.

Best, of course, is if you happen to get together geographically anyway — so that you can occupy a seminar room and go wild on a blackboard, hashing out any details you need to deal with. This strains everybody’s travel budgets, and eats time in masses, so it is not always feasible.

Next best thing is a video and voice chat, preferably with a whiteboard or at least screen sharing functionality. Good thing is there are a few of these around.

Skype is a tool I’ve used a few times for this. It has video, voice, and screen sharing in the later versions, and is really easy to get up and running and to deal with. The biggest drawback I see with Skype is that you’ll have to pay to get video conferencing and not just video chat, which is a genuine drawback as soon as people are distributed on more than two sites. If there are only two places involved, however, few things are as painless as kicking up a Skype session.

The other tool I’ve used lately is Google+. A lot can be said about this foray into the Social Web, but the part I want to highlight here is the hangout feature. A G+ hangout is a video conference for up to 10 participants. Everyone (who chooses to) streams video, or their own screen if they want to, and in the client you either roam to whoever makes the most noise (hopefully the person speaking at the moment), or you can lock your view to one of the feeds. This combined with tablet software like Microsoft Journal, or even just a paint application is enough to get a whiteboard up and running. I have yet to find a good collaborative whiteboard (though Google Docs and their drawing app probably does a good job), but as long as only one person needs to hold the pen, this solution is marvelously smooth.

These are my collaboration tools. What are yours?

«Phil Harvey wants us to partition {1,…,16} into two sets of equal size so each subset has the same sum, sum of squares and sum of cubes.» — posted by @MathsJam and retweeted by @haggismaths.

Sounds implausible, was my first thought. My second thought was that there aren’t actually ALL that many partitions to check: we can actually test this.

So, here’s a short collection of python lines to _do_ that testing.

```python
import itertools

allIdx = range(1, 17)
sets = itertools.combinations(allIdx, 8)
setpairs = [(list(s), [i for i in allIdx if i not in s]) for s in sets]

def f(pair):
    s1, s2 = pair
    return tuple(sum(n**p for n in s1) - sum(n**p for n in s2)
                 for p in (1, 2, 3))

goodsets = [ss for ss in setpairs if f(ss) == (0, 0, 0)]
```

And back comes one single response (actually, two, because the comparison is symmetric).

```
[([1, 4, 6, 7, 10, 11, 13, 16], [2, 3, 5, 8, 9, 12, 14, 15]),
 ([2, 3, 5, 8, 9, 12, 14, 15], [1, 4, 6, 7, 10, 11, 13, 16])]
```

Aaaaanyway.

I have grown very interested in photography, as those of you who have seen my photo-a-day blog will have noticed. With the growing interest, I also notice a growing awareness of photography and of pictures, and I start thinking about how I would do things.

Such as this last weekend. There was a wedding. There was a wedding celebration. At one point we ended up down at a beach, watching a fire show. A friend of mine was running out of space on his memory card, so left his camera with me and ran to get the next card.

So I took some photos of the fire dancing (I haven’t gotten access to them myself just yet; they’ll be posted on my photo blog when I get hold of them). And I realized some things about how to photograph fire dancers. I’ll enumerate a few of my recent (though, I am convinced, not original) insights here. Worth noticing is that they are pretty much all inherently contradictory, emphasizing the *craft* aspect of photography.

1. **Darkness matters**. Large parts of the beauty of fire photography comes from actually seeing the fire, even from the fire being the main actor in the image. Hence, it helps if the fire light doesn’t have to compete with ambient light. Pick a nice location, wait until late enough in the evening that ambient light is dim if not dark.

2. **Long exposures**. Conveying the motion of the flames in a fire dance gets easier and all the more impressive if each individual flame becomes a bright, vivid streak of light, tracing out the curve of the dance. You get this with long exposures. The longer the better: time really does translate pretty much directly to vivid visuals here.

3. **Sharp facial features**. A dancer dancing blurs. This is kinda the point of item #2 above. Blurring the fire, and limbs, imbues a sense of motion, a sense of action to the picture. However, blurring the face removes the feeling of humanity more than anything else. Keeping the dancer immobile, giving them sharp, recognizable features while still moving their extremities and the fire, will make for a truly iconic fire dancing picture.

4. **Slow dances**. Related to all of the above, a slow fire dance will accomplish several things for us as photographers:

a. Dancing slowly means the fire moves slowly. If you’ve ever watched a fire dancer, you’ll notice that when the fire moves slowly, it burns with a large, bright yellow flame, illuminating the area around it. Instead, when moving fast, the fire flickers, burns in a dim hot blue, and illuminates much less.

b. Dancing slowly means the dancer moves less, helping to keep the dancer sharp at the core.

After the performance this weekend, I talked to the performers, and I might be given the chance of making a dedicated photo shoot with them juggling fire. I am already making plans for the photo shoot: how to stage it, what to do and how. Expose for about -1, maybe -2ev, against the background, illuminating the juggler with their fire, but keeping the background visible and interesting. Posing them — with the fire — on a jetty, so that the fire dance is reflected in the water. And asking them to try and keep their face stationary, while twirling fire, and exposing at somewhere in the range of 1/2-3″.

At least that’s my current plan.

This post is an expansion of all the details I did not have a good feeling for when I started on page 7 of Goerss-Jardine, where the geometric realization of simplicial sets is introduced.

The construction works by building a few helpful categories, and using their properties along the way. Especially after unpacking the categorical results G-J rely on, there are quite a few categories floating around. I shall try to be very explicit about which category is which, and how they work.

As we recall, simplicial sets are contravariant functors from the category $\Delta$ of finite ordinal numbers to the category of sets. We introduce the *simplex category* $\Delta\downarrow X$ of a simplicial set $X$, with objects (simplices) given by maps $\sigma\colon\Delta^n\to X$, and a map from $\sigma\colon\Delta^n\to X$ to $\tau\colon\Delta^m\to X$ being given by a map $\theta\colon\Delta^n\to\Delta^m$ in $\mathbf{sSet}$ such that $\tau\theta=\sigma$.

Interpreting each standard simplex as a simplicial set, this can be thought of as the full subcategory of the *slice category* over $X$ spanned by maps from the standard simplices. Any map between standard simplices is determined completely by the unique ordinal number map $[n]\to[m]$ that induces it.

The proof of the isomorphism $X\cong\varinjlim_{\Delta^n\to X}\Delta^n$, the colimit taken over all simplices in the simplex category of $X$, proceeds using the theorem that every functor from a small category to sets is a colimit of representable functors.

To unpack this statement, we turn to, say, Awodey, where this is Proposition 8.10.

**Proposition 8.10** (Steve Awodey, Category Theory)

For any small category $\mathbf{C}$, every object $P$ in the functor category $\mathbf{Sets}^{\mathbf{C}^{op}}$ is a colimit of representable functors, using the Yoneda embedding $y\colon\mathbf{C}\to\mathbf{Sets}^{\mathbf{C}^{op}}$.

Specifically, we can choose an index category $\mathbf{J}$ and a functor $A\colon\mathbf{J}\to\mathbf{C}$ such that $P\cong\varinjlim(y\circ A)$.

**Proof**

We start by introducing the *category of elements* or *Grothendieck category* of P. This will be our index category for the colimit, and is often written $\int_{\mathbf{C}}P$. This category has objects $(x,C)$ with $C$ an object of $\mathbf{C}$ and $x\in P(C)$, and arrows $(x,C)\to(x',C')$ given by arrows $h\colon C\to C'$ in $\mathbf{C}$ such that $P(h)(x')=x$.

This is almost like a category of pointed sets, only that we also care about the action of P while we’re at it. Since $\mathbf{C}$ is a small category, so is $\int_{\mathbf{C}}P$, and there is a projection functor $\pi\colon\int_{\mathbf{C}}P\to\mathbf{C}$ given by forgetting about the point, so sending $(x,C)\mapsto C$ and preserving the arrow forming a morphism of elements.

We also recall the Yoneda embedding in order to progress nicely here: $y\colon\mathbf{C}\to\mathbf{Sets}^{\mathbf{C}^{op}}$ sends $C$ to $\mathrm{Hom}_{\mathbf{C}}(-,C)$. The Yoneda lemma tells us that for contravariant $P$, $\mathrm{Hom}(yC,P)\cong P(C)$, naturally in $C$ and $P$, and works by assigning to $x\in P(C)$ the natural transformation $\theta_x\colon yC\to P$ which has components $(\theta_x)_D\colon\mathrm{Hom}(D,C)\to P(D)$. These are defined by their actions on actual maps: for $f\colon D\to C$ we set $(\theta_x)_D(f)=P(f)(x)$.

In other words, a contravariant functor to sets corresponds to natural transformations from the Yoneda embedding to the functor itself. An element in the functor image gives rise to a particular such natural transformation by using the element as a *test case*, and using the functor to make the test runnable.

Now, returning to the proof of 8.10, we need to build a diagram that will work as our colimit presentation of P. We will do this by using $\int_{\mathbf{C}}P$ and the projection $\pi$. Indeed, for an object $(x,C)$, by the Yoneda lemma $x$ corresponds to some natural transformation $\theta_x\colon yC\to P$. These natural transformations form a subcategory of the slice category of $\mathbf{Sets}^{\mathbf{C}^{op}}$ over P by the naturality of the Yoneda construction.

So we can build a cocone by taking the map from $y\pi(x,C)=yC$ to P to be the natural transformation $\theta_x$. This is a colimit because if we had some other cocone with components $\mu_{(x,C)}\colon yC\to Q$, we can produce the unique natural transformation $P\to Q$ componentwise: its component at $C$ sends $x\in P(C)$ to the element of $Q(C)$ corresponding to $\mu_{(x,C)}$, recalling that natural transformations $yC\to Q$ are, by the Yoneda lemma, the same as elements of $Q(C)$.
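Unpacking it once more: writing $\int_{\mathbf{C}}P$ for the category of elements, the content of Proposition 8.10 is the presentation

$$P \;\cong\; \varinjlim_{(x,C)\in\int_{\mathbf{C}}P} yC,$$

with the cocone map at $(x,C)$ given by the Yoneda transformation $\theta_x\colon yC\to P$ corresponding to $x$.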

This was probably all about as headache inducing as anything else to do with Yoneda’s lemma. It’s a result that both in its proof and its applications tends to climb the ladder of nested categorical constructs pretty high.

So let’s get back to the simplicial case, and tease out what this implies for our reading. $X$ is a simplicial set, hence a contravariant set-valued functor on $\Delta$. So $\Delta$ is our small category.

The category of elements $\int_\Delta X$ is the category where objects are pairs $([n], x)$ for $x \in X_n$. Morphisms track what happens to $x$ under faces and degeneracies. Thus, specifically, by the arguments in example 1.7, we can put $X_n$ in bijective correspondence with the collection of all simplicial n-simplices of X, in the sense that we associate to each $x \in X_n$ the simplicial map $\Delta^n \to X$ as defined on page 6.

Thus, the index category is clear. The category of elements for a simplicial set really is “just” the category of simplices $\Delta \downarrow X$. The representable functors are the $\mathrm{Hom}_\Delta(-, [n])$ for $[n] \in \Delta$, and so are all of the shape $\Delta^n$. But these are just the standard simplicial n-simplices.

So our simplicial set is the colimit of simplicial n-simplices under the conditions that they obey the face and degeneracy maps in the original simplicial set.

Let’s work this all through on an example. Consider the interval I, given by non-degenerate simplices a, b, x, subject to $d_1x = a$ and $d_0x = b$. Maps from the simplicial n-simplex to I can hit one of these three simplices, or any degeneracy of these.

There are two maps $\Delta^0 \to I$, namely $a$ and $b$. Thus, we start the construction of our colimit by introducing cells $\Delta^0_a$ and $\Delta^0_b$. Any degeneracies are absorbed by the fact that a simplicial simplex is a simplicial map, hence $s_0a$ is just the composite $\Delta^1 \to \Delta^0 \xrightarrow{a} I$, and so on.

There are three maps $\Delta^1 \to I$, namely:

$s_0a$, $s_0b$,

and $x$.

These three choices all come with boundary maps that have to be taken into account in the colimit. Specifically, the two first maps here, the ones that hit degenerate simplices, factor through $\Delta^0_a$ and $\Delta^0_b$ respectively. There is a map (in the category $\int_\Delta I$) given by $\delta^0: [0] \to [1]$, since $d_0s_0a = a$. This map corresponds to the map $\Delta^0 \to \Delta^1$ (as a map of simplicial simplices), which embeds the simplex $\Delta^0_a$ into $\Delta^1_{s_0a}$. Thus, this embedding along degeneracies is a map in the index category of the colimit, and, together with the collapse $\sigma^0$ in the other direction, leads to an identification of any simplex with all its degeneracies in the colimit. The non-degenerate case gives us a simplex $\Delta^1_x$.

Similarly, the colimit forces the identifications $d_1\Delta^1_x = \Delta^0_a$ and $d_0\Delta^1_x = \Delta^0_b$, finishing the description of $I$ as a colimit.
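To make the bookkeeping concrete, here is a small encoding of my own (not from the text) of the 0- and 1-simplices of I, with the face maps satisfying exactly the identities the colimit uses:

```haskell
-- The 0-simplices of the interval I, and its 1-simplices:
-- the non-degenerate edge together with the degeneracies s0 of the vertices.
data Zero = A | B deriving (Eq, Show)
data One = Edge | S0 Zero deriving (Eq, Show)

-- Face maps I_1 -> I_0, following d1 x = a and d0 x = b.
d0, d1 :: One -> Zero
d0 Edge = B
d0 (S0 v) = v  -- the simplicial identity d0 . s0 = id
d1 Edge = A
d1 (S0 v) = v  -- the simplicial identity d1 . s0 = id

main :: IO ()
main = print ((d1 Edge, d0 Edge), (d0 (S0 A), d1 (S0 B)))
```

The two degeneracy clauses are what glue the degenerate 1-cells back onto the vertices in the colimit, and the two edge clauses are the identifications of the endpoints of the non-degenerate cell.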

Now, we have a nice candidate for the realization of a single simplicial n-simplex: the topological n-simplex $|\Delta^n|$ defined in Example 1.1. This is used to define the geometric realization of a simplicial set through “simply” forcing the realization functor to be compatible with this colimit structure. Thus, we define the geometric realization $|X|$ of a simplicial set to be the colimit of the topological n-simplices over the same index category that we used to display X as a colimit.

The first *really* nice property about realization is that it is left adjoint to the singular functor. Proving this is a sequence in abstract symbol manipulation and remembering what everything at each step of the way actually means. Thus, say we have a simplicial set X and a topological space Y. Adjointness means there is an isomorphism

$\mathrm{Top}(|X|, Y) \cong \mathrm{sSet}(X, \mathrm{Sing}(Y))$

which is natural in both X and Y.

Now, let’s consider the left hand side.

$\mathrm{Top}(|X|, Y) = \mathrm{Top}(\mathrm{colim}_{\int_\Delta X} |\Delta^n|, Y)$

by definition of the geometric realization.

$\cong \lim_{\int_\Delta X} \mathrm{Top}(|\Delta^n|, Y)$

because contravariant representable functors map colimits to limits. This is Awodey 5.29, and sits very well with our intuition from sets, vector spaces, and just about anywhere else.

Now, remember that a map $|\Delta^n| \to Y$ in $\mathbf{Top}$ is an n-simplex in the singular set $\mathrm{Sing}(Y)$, and that, as we saw above, a simplicial set is the colimit of its simplices. Thus, we get

$\cong \lim_{\int_\Delta X} \mathrm{sSet}(\Delta^n, \mathrm{Sing}(Y))$

and then finally, we can re-introduce the colimit inside the morphism set:

$\cong \mathrm{sSet}(\mathrm{colim}_{\int_\Delta X} \Delta^n, \mathrm{Sing}(Y)) = \mathrm{sSet}(X, \mathrm{Sing}(Y))$

Each step along the way is natural, which finishes the proof.

In my workgroup at Stanford, we focus on *topological data analysis* — trying to use topological tools to understand, classify and predict data.

Topology is appropriate for qualitative rather than quantitative properties, since it deals with *closeness* rather than *distance*; this also makes topological approaches appropriate where distances exist, but are ill-motivated.

These approaches have already been used successfully for analyzing

- physiological properties in Diabetes patients
- neural firing patterns in the visual cortex of Macaques
- dense regions in the space of 3×3 pixel patches from natural (b/w) images
- screening for CO₂-adsorbing materials

In a joint project with Gunnar Carlsson (Stanford), Anders Sandberg (Oxford), and more collaborators, we act on the belief that political data can be amenable to topological analyses.

We currently look at 3 types of data:

- Vote matrices from parliament:

each column is a member of parliament

each row is a rollcall

we codify votes numerically: +1/−1 for Yea/Nay. And then we can do data analysis either on the set of members of parliament in the space of rollcalls, or on the set of rollcalls in the space of members of parliament.

- Co-sponsorship graphs

Nodes are members of parliament.

A directed edge goes from each co-sponsor to the main sponsor of a particular bill, for all bills.

For Swedish politics, parties end up being strongly connected internally, with coalition mediators appearing in the data.

- Shared N-gram graphs

Some turns of phrase, some talking points, get introduced and then re-used by other members of parliament.

We believe that an analysis of N-grams of parliamentary speeches may well give a data analyst tools to capture memetic drift and spread within parliament.

The project is still at an early stage, and the great challenge for us right now is to find value to add — in political science, the grand entrance of classical data analysis was a few decades ago, and much of what can be said about politics with classical data analysis tools has already been said.

Political science discovered data analysis in the 90s, and a flurry of political science data analysis papers followed.

Thus, while illustrative, doing things like a PCA on political vote data is not quite novel.

Doing these PCAs, though, shows us, for instance, that the US house & senate are essentially linear,

and that the votes point cloud sits on most of the boundary of a unit square (above right): at least one party backs every bill that comes to a vote, and the main difference between bills is in how many of the other party join in too.

Already in this kind of analysis, we notice a difference over time — the party line is a much stronger factor in 2009 than it was in 1990:

We can perform similar analyses for the UK:

Here we may notice that the regional parties are pretty much included with the ideologically closest major party. This is a phenomenon we’re going to revisit later on.

With Swedish vote data, we’ve been able to locate the blocs as essentially grouped under the first two coordinates of a PCA, with lower coordinates seemingly issue-oriented; it’s worth noticing component 5, which separates out C and MP from the other parties, and thus seems to be the axis of environmentalism in Swedish politics.

image courtesy of Anders Sandberg

There are several techniques developed at Stanford for topological data analysis. While my own research has been centered around *persistent homology*, the one I want to present here is different.

Mapper was developed by Gurjeet Singh, Facundo Memoli and Gunnar Carlsson. It builds on combining *nerves* with *parameter spaces*.

**Definition:**

Suppose $\mathcal{U} = \{U_i\}_{i \in I}$ is a family of open sets. Then the *nerve* $N(\mathcal{U})$ is a simplicial complex with vertices the index set $I$, and a k-simplex $\{i_0, \dots, i_k\}$ if $U_{i_0} \cap \dots \cap U_{i_k} \neq \emptyset$.
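For finite covers, this definition is directly computable. Here is a minimal sketch of my own, with sets represented as lists:

```haskell
import Data.List (subsequences)

-- The nerve of an indexed cover: a k-simplex for every (k+1)-subset
-- of indices whose sets have a common element.
nerve :: Eq a => [(Int, [a])] -> [[Int]]
nerve cover =
  [ map fst part
  | part <- subsequences cover
  , not (null part)
  , not (null (foldr1 isect (map snd part)))
  ]
  where isect xs ys = [ x | x <- xs, x `elem` ys ]

main :: IO ()
main = print (nerve [(0, "ab"), (1, "bc"), (2, "cd")])
```

For the cover $\{a,b\}, \{b,c\}, \{c,d\}$ this yields the vertices 0, 1, 2 and the edges $\{0,1\}$ and $\{1,2\}$: the nerve of three consecutive overlapping sets is a path, as expected.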

**Lemma:** [Nerve Lemma]

If $\mathcal{U}$ covers a paracompact space X, and all finite intersections are contractible, then X is homotopy equivalent to the nerve of the covering.

Now, if X is a topological space with a coordinate function $f: X \to Z$, and Z is (paracompact and) covered by some family of open sets $\{U_i\}$, then the collection of preimages $\{f^{-1}(U_i)\}$ covers X.

So, by subdividing each preimage of a set in the covering of Z into its connected components, we get a covering of X subdividing the cover induced from Z.

Thus, if f is sufficiently well-behaved and the covering is fine enough, the nerve of these subdivided preimages is homotopy equivalent to X, and we have found a triangulation (up to homotopy) of X.

This particular line of reasoning can be translated into a statistical/data analytic setting. The main difference is that by virtue of persistent topology, connected components correspond to clusters, and so we may start with a point cloud, and then pick out preimages of the cover of our target space, cluster these, and let each cluster correspond to a point, and then introduce simplices according to the intersections.
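As a toy version of this pipeline (a sketch of my own, not the Stanford implementation): a one-dimensional identity filter, a cover by overlapping intervals, single-linkage clustering at a fixed scale within each preimage, and an edge whenever two clusters share a point:

```haskell
type Interval = (Double, Double)

-- Single-linkage clustering at scale eps; assumes the points come sorted.
clusters :: Double -> [Double] -> [[Double]]
clusters eps = foldr step []
  where
    step p (c : cs) | any (\q -> abs (p - q) <= eps) c = (p : c) : cs
    step p cs = [p] : cs

-- Mapper with the identity filter: cluster each preimage of a cover
-- interval, then form the nerve (here just vertices and edges).
mapper :: Double -> [Interval] -> [Double] -> ([[Double]], [(Int, Int)])
mapper eps cover pts = (vs, es)
  where
    vs = concat [ clusters eps [ p | p <- pts, lo <= p && p <= hi ]
                | (lo, hi) <- cover ]
    es = [ (i, j) | (i, c)  <- zip [0 ..] vs
                  , (j, c') <- zip [0 ..] vs
                  , i < j
                  , any (`elem` c') c ]

main :: IO ()
main = print (mapper 0.5 [(-1, 3), (2, 6)] [0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
```

Two well-separated blobs give two vertices and no edge; refining the cover and the clustering scale recovers more of the shape of the point cloud.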

The process is illustrated in a PDF “animation” here.

This technique has been used already to recover the difference between diabetes type I and II as lobes in the data, apparent in blue here:

We hope to be able to use these techniques to better understand the way parliaments are structured internally.

*In some places, I’ve tried to fill in the sketchiest and least articulated points with some simile of what I ended up actually saying.*

Combinatorial species started out as a theory to deal with enumerative combinatorics, by providing a toolset & calculus for formal power series (see Bergeron–Labelle–Leroux and Joyal).

As it turns out, not only are species useful for manipulating generating functions, but they provide a categorical approach that may be transplanted into other areas.

For the benefit of the entire audience, I shall introduce some definitions.

**Definition**: A *category* C is a collection of *objects* and *arrows* with each arrow assigned a *source* and *target* object, such that

- Each object has its own *identity arrow* $1$.
- Chains of arrows are associatively *composable*, with $1$ the identity of this composition.

**Examples**: Sets, Finite sets, k-Vector spaces, left and right R-Modules, Graphs, Groups, Abelian groups.

$\mathbb{B}$: finite sets with only bijections as morphisms.

**Examples**:

- Category of a monoid.
- Category generated by a graph
- Category of a group (groupoid version of the category of a monoid)
- Category of a poset
- Category of Haskell types and functions.

**Definition**: A *functor* F is a map of categories, in other words, a pair of maps Fo on objects and Fa on arrows, such that the identity arrow maps to identity arrows and compositions of maps map to compositions of their images.

**Examples**:

Constant functor: sends every object to A, every map to $1_A$.

Identity functor: sends objects and arrows to themselves.

Underlying set functor, free monoid/vector space/module/… functors.

We are now equipped to define Species:

**Definition**: A *species of structures* is a functor $F: \mathbb{B} \to \mathbf{Set}$.

The idea is that a set gets mapped to the set of all structures labelled by the elements of the original set. A bijection on labels maps to the *transport of structures* along the relabeling.
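In Haskell terms (a loose analogy of mine, anticipating the container-datatype view later in the post), the transport of structures along a relabeling is just functorial mapping. For linear orders as lists:

```haskell
-- The species L of linear orders: a structure on a label set is a list
-- of the labels; transport of structures along a relabeling is map.
transportL :: (a -> b) -> [a] -> [b]  -- the relabeling is assumed bijective
transportL = map

main :: IO ()
main = print (transportL succ [1, 2, 3 :: Int])
```

Note that transport never changes the shape of the structure, only the labels: that is exactly the functoriality requirement.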

**Examples**:

$\mathcal{A}$: rooted trees, labels on vertices

$\mathcal{G}$: simple graphs, labels on vertices

$\mathcal{G}^c$: connected simple graphs, labels on vertices

$\mathfrak{a}$: trees, labels on vertices

$\mathcal{D}$: directed graphs, labels on vertices

$\wp$: subsets, $\wp[U] = \{S : S \subseteq U\}$

$\mathrm{End}$: endofunctions

$\mathrm{Inv}$: involutions

$S$: permutations

$C$: cycles

$L$: linear orders

$E$: sets: $E[U] = \{U\}$

$\varepsilon$: elements: $\varepsilon[U] = U$

$X$: singletons: $X[U] = \{U\}$ if $|U| = 1$ and $X[U] = \emptyset$ otherwise

$1$: empty set: $1[\emptyset] = \{\emptyset\}$, $1[U] = \emptyset$ otherwise

$0$: empty species: $0[U] = \emptyset$.

In enumerative combinatorics, the power of species resides in the association to each species of a number of generating functions:

The generating series of a species F is the exponential formal power series

$F(x) = \sum_{n \geq 0} f_n \frac{x^n}{n!}$

where we use the conventions $[n] = \{1, \dots, n\}$ and $f_n = |F[n]|$.

Thus, for instance, $L(x) = \sum_{n \geq 0} n! \frac{x^n}{n!} = \frac{1}{1-x}$ and $E(x) = \sum_{n \geq 0} \frac{x^n}{n!} = e^x$.

There are a number of other generating series in use in combinatorics, but for my purposes, this is the one I’ll focus on.

The real power, though, emerges when we start combining species, and carry over the combinations to actions on the corresponding power series.

The number of ways to form either, say, a graph or a linear order on a set of labels is the sum of the numbers of ways to form each in isolation.

This corresponds cleanly to the coproduct in $\mathbf{Set}$, and we may write F+G for the species

$(F+G)[U] = F[U] + G[U]$

(where the second + is the coproduct — i.e. disjoint union — of sets)

In the power series, we get $(F+G)(x) = F(x) + G(x)$.

Examples: We may define the species $E_+$ of all non-empty sets by

$E = 1 + E_+$

This kind of functional equation is where the theory of species starts to really *shine*.

A tricoloring of a set is a subdivision of the set into three disjoint subsets covering the original set. The number of tricolorings of size n is $3^n = \sum_{a+b+c=n} \binom{n}{a,b,c}$.

A permutation fixes some set of points. The permutation restricted to the non-fixed points is a *derangement*. The total number of permutations on n elements is $n! = \sum_{k=0}^{n} \binom{n}{k} d_{n-k}$, where $d_j$ denotes the number of derangements of j elements.

In both of these cases, the total generating series is a product of the component series, and we end up defining

$(F \cdot G)[U] = \sum_{U_1 \sqcup U_2 = U} F[U_1] \times G[U_2]$

So $(F \cdot G)(x) = F(x) \cdot G(x)$.

Thus, tricolorings are $E \cdot E \cdot E$ and permutations are $S = E \cdot \mathrm{Der}$, where the set is that of fixed points, and the derangement captures the actual action of the permutation.
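At the counting level, the permutation decomposition can be checked directly: choosing the fixed-point set and deranging the rest recovers all permutations. A quick sketch of my own, using the standard derangement recurrence:

```haskell
factorial :: Integer -> Integer
factorial n = product [1 .. n]

choose :: Integer -> Integer -> Integer
choose n k = factorial n `div` (factorial k * factorial (n - k))

-- Derangement numbers via d_n = (n - 1) * (d_{n-1} + d_{n-2}).
derangements :: Integer -> Integer
derangements 0 = 1
derangements 1 = 0
derangements n = (n - 1) * (derangements (n - 1) + derangements (n - 2))

-- n! = sum over the size k of the fixed-point set.
checkPermutations :: Integer -> Bool
checkPermutations n =
  factorial n == sum [ choose n k * derangements (n - k) | k <- [0 .. n] ]

main :: IO ()
main = print (all checkPermutations [0 .. 10])
```

The same shape of check works for any product of species: split the label set, count structures on each part, and sum over the splits.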

Endofunctions of sets decompose, in their action on the points, into cycles together with directed trees leading in to these cycles.

Since a collection of disjoint cycles corresponds to a permutation, we can consider such endofunctions to be permutations decorated with rooted trees attached to points of the permutations; or even permutations of rooted trees.

To form such a structure on a set U, we’d first partition U into subsets, put the structure of a rooted tree on each subset, and then the structure of a permutation on the set of these subsets.

Thus, the number of such structures on n elements is $n^n$.

This corresponds to the power series $F(G(x))$, and we write, in general, $F \circ G$ for the species of F-structures of G-structures on subsets.
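This composition can be verified numerically on the endofunction example: with the rooted-tree series A(x) = Σ nⁿ⁻¹ xⁿ/n! and S(x) = 1/(1-x), the composite S(A(x)) should have n! times its n-th coefficient equal to nⁿ. A sketch of mine with truncated power series over the rationals:

```haskell
nmax :: Int
nmax = 6

pad :: [Rational] -> [Rational]
pad xs = take (nmax + 1) (xs ++ repeat 0)

add, mul :: [Rational] -> [Rational] -> [Rational]
add f g = zipWith (+) (pad f) (pad g)
mul f g = [ sum [ pad f !! i * pad g !! (n - i) | i <- [0 .. n] ] | n <- [0 .. nmax] ]

-- Horner-style composition f(g(x)); assumes g has no constant term.
compose :: [Rational] -> [Rational] -> [Rational]
compose f g = foldr (\c acc -> add [c] (mul g acc)) (pad []) f

factorial :: Int -> Rational
factorial n = fromIntegral (product [1 .. n])

-- EGF of rooted trees: the coefficient of x^n is n^(n-1) / n!.
treeSeries :: [Rational]
treeSeries = 0 : [ fromIntegral n ^ (n - 1) / factorial n | n <- [1 .. nmax] ]

-- S(x) = 1/(1-x): all coefficients equal to 1.
permSeries :: [Rational]
permSeries = replicate (nmax + 1) 1

main :: IO ()
main = print [ factorial n * (compose permSeries treeSeries !! n) == fromIntegral n ^ n
             | n <- [1 .. nmax] ]
```

The check passes in every degree computed, which is the power-series shadow of “endofunctions are permutations of rooted trees”.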

**Examples**:

A = X·E(A)

L = 1 + X·L

B = 1 + X·B·B

Picking out a single point in a structure on n points can be done in precisely n ways.

Thus the corresponding generating function will be

$\sum_{n \geq 0} n f_n \frac{x^n}{n!}$

for F-structures with a single label distinguished.

Since we’re working with exponential power series, we may notice that

$F'(x) = \sum_{n \geq 0} f_{n+1} \frac{x^n}{n!}$

and thus that derivatives are shifts.

Furthermore,

$x \cdot F'(x) = \sum_{n \geq 0} n f_n \frac{x^n}{n!}$

so that the generating function for F-structures with a single distinguished label is $x \cdot F'(x)$.

In species, this process is called pointing.

In functional programming, Conor McBride related this construction to Huet’s Zipper datatypes.

As it turns out, many of the constructions for species make eminent sense outside the category $\mathbf{Set}$. In fact, species in Hask are known to programming language researchers as *container datatypes*, and the whole calculus translates relatively cleanly.

Functional equations translate to standard data type definitions in Haskell.

**Examples**:

L = 1 + X·L

data List a = Nil | Cons a (List a)

B = 1 + X·B·B

data BinaryTree a = Leaf | Node (BinaryTree a) a (BinaryTree a)

A = X·E(A)

data RootedTree a = Node a (Set (RootedTree a))

Usually, we simulate the Set here by a List. If we need our rooted trees to be planar, we can in fact impose a Linear Order structure instead, and get something like

Ap = X·L(Ap)

The species interpretation of the derivative, $DF$, corresponding to leaving a hole in the structure, carries over cleanly, so that $DT$ is the type of T-with-a-hole.

We can deal with $DL$ in two different ways:

1.

DL = D(1+X·L) = D1 + D(X·L) = 0 + DX·L+X·DL

and thus

DL-X·DL = 1·L

so (1-X)·DL = L

and thus

DL = L·1/(1-X)

Now, from L=1+X·L follows by a similar cavalier use of subtraction and division — which, by the way, in species theory is captured by the idea of a *virtual species*, and dealt with relatively cleanly — that

L = 1+X·L

so

L-X·L = 1

and thus

(1-X)·L = 1

so

L = 1/(1-X)

Thus, we can conclude that

DL = L·1/(1-X) = L·L

and thus, a list with a hole is a pair of lists: the stuff before the hole and the stuff after the hole.

2.

We could, instead of using implicit differentiation as in 1, attack the expansion we had of

L = 1/(1-X) = 1+X+X·X+X·X·X+…

Indeed,

DL = D(1/(1-X)) = D(1+X+X·X+…) = 0+1+2X+3X·X+…

which we can observe factors as

= (1+X+X·X+X·X·X+…)·(1+X+X·X+X·X·X+…) = L·L

Or we can just use the quotient rule:

DL = D(1/(1-X)) = -1/[(1-X)·(1-X)]·D(1-X) = -1/[(1-X)·(1-X)]·(-1) = 1/[(1-X)·(1-X)] = L·L
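Both readings can be checked mechanically. Below is a sketch of mine: the one-hole contexts of a list form a pair of lists (the zipper), there are exactly n of them on a list of length n, and on ordinary coefficients the series D(1/(1-X)) and L·L agree term by term:

```haskell
-- A list with a hole: the elements before the hole and those after.
data ListZipper a = ListZipper [a] [a] deriving (Eq, Show)

-- Every way of punching a hole into a list.
holes :: [a] -> [ListZipper a]
holes = go []
  where
    go _ [] = []
    go before (x : after) = ListZipper before after : go (before ++ [x]) after

-- Ordinary coefficients: L = 1/(1-X) is 1,1,1,…; differentiate and
-- multiply termwise.
deriv :: [Integer] -> [Integer]
deriv cs = zipWith (*) [1 ..] (tail cs)

mulSeries :: [Integer] -> [Integer] -> [Integer]
mulSeries f g = [ sum [ f !! i * g !! (n - i) | i <- [0 .. n] ] | n <- [0 ..] ]

main :: IO ()
main = do
  print (length (holes "abc"))
  print (take 6 (deriv (repeat 1)) == take 6 (mulSeries (repeat 1) (repeat 1)))
```

The `ListZipper` here is exactly Huet's zipper for lists, with the stuff before the hole and the stuff after the hole as the two components.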

All of this becomes relevant to the implementation of Buchberger’s algorithm on shuffle operads (see Dotsenko—Khoroshkin and Dotsenko—Vejdemo-Johansson) in the step where the S-polynomials (and thereby also the reductions) are defined. With a common multiple defined, we need some way to extend the modifications that take the initial term to the common multiple to the rest of that term.

For this, it turns out that the derivative of the tree datatype used provides theoretical guarantees that only partially filled-in trees of the right size, with holes of the right size, can be introduced; and also provides an easy and relatively efficient algorithm for constructing the hole-y trees and later filling in the holes.

25 August, 1pm, Linköpings Universitet. The topology of Politics.

1 September, 10am, KTH. Combinatorial species, Haskell datatypes and Gröbner bases for operads.

1 September, 1pm, KTH. Topological data analysis and the topology of politics.

2, 3, 6, 7 September, 11am, KTH. Mini-course on Applied algebraic topology.

I’m happy to give out details if you’d like to come and listen.

1. I’m going on the job market in the fall. I’m looking for lectureships, tenure tracks, possibly 2-year postdocs if they are really interesting.

2. I’m very interested in adding visits, adding seminars, adding anything interesting to this itinerary. If you want to meet me, send me a note, and I’m sure we can find time. My email address is easy to google, and listed in the site info here.

I have current research and survey talks mostly ready to go for:

- Barcodes and the topological analysis of data: an overview
- In the past decade, the use of topology in explicit applications has become a growing field of research. One fruitful approach has been to view a dataset as a point cloud: a finite (but large) subset of a Euclidean space, and construct filtered simplicial complexes that capture distances between the data points. Doing this, we can translate any topological functor into a functor that acts on these filtered complexes, and reinterpret the results from the functors into information about the dataset. I’ll illustrate the basic results from the field, and give an overview of where my own research fits into the emergent paradigm.
- Persistent cohomology and circular coordinates
- Using a new variant of the algorithms described by Zomorodian, we are able to compute cohomology persistently, use persistence to pick out topologically relevant cocycles, and convert these into circle-valued coordinates. This allows us to generate topological coordinatizations for datasets, opening up new approaches to data analysis.
- Dualities in persistence
- Starting out with a point cloud, the use of the barcode of persistent Betti numbers to characterize properties of the point cloud is well known. Expanding the amount of information we extract, we are led to study persistent homology and cohomology, both in an absolute manner, studying H(X_i), and in a relative manner, studying H(X_∞; X_i). We are able to show that these four possible homology functors are related by the dualization functors Hom_k(-, k) and Hom_k[x](-, k[x]), and are able to use this to translate between all four corners. This way, we can choose to compute our barcodes in whatever situation an application motivates, and translate the resulting barcode into whatever situation we want to interpret.
- Period recognition with circular coordinates
- In joint work with Vin de Silva and Dmitriy Morozov, we show how computing circular coordinates for datasets may be used in the analysis of dynamical systems and in signal processing. We have experimental results that show a high resistance to spatial noise in reconstructing the period of a periodic process while avoiding Fourier transforms and running differences. Ideally, the use of algebraic topology in time series analysis for dynamical systems and signals will complement existing methods with more noise-robust components, and I discuss to some extent how this may be achieved.
- Implementing Gröbner Bases for Operads
- In a paper by Dotsenko and Khoroshkin, a Buchberger algorithm is defined in an equivalent subcategory of the category of symmetric operads. For this subcategory, they prove a Diamond lemma, and demonstrate how the existence of a quadratic Gröbner basis amounts to a demonstration of Koszulity of the corresponding finitely presented operad. During a conference at CIRM in Luminy, Dotsenko and I built an implementation of their work. I’ll discuss the implementation work, and what considerations need to be taken in the implementation of Gröbner bases for operads.
- Parallelizing multigraded Gröbner bases
- In work together with Emil Sköldberg and Jason Dusek, we use the lattice of degrees for a multigraded polynomial ring to parallelize Gröbner basis computations in the multigraded ring. We show speedups in implementations in Haskell using Data Parallel Haskell and the vector package, as well as in Sage using MPI for parallelization and SQL for abstract data storage and transport.
- The topology of politics
- While the use of data analysis in political science is a mature field, the development of new data analysis methods calls for updates in the choice, use and interpretation of these methods. We set out to investigate the use of the topological data analysis methods worked out at Stanford on political datasets in an ongoing research project. I’ll illustrate initial results, partly well-known, on the geometry and topology of parliamentary rollcall datasets.
- Stepwise computing of diagonals for multiplicative families of polytopes
- In recent work together with Ron Umble, we demonstrate a way to use basic linear algebra to compute cellular diagonal maps for families of polytopes such that each face of each polytope in the family is given by products of other polytopes in the same family. Using the algorithm we describe, we are able to recover the Alexander-Whitney diagonal on simplices, the Serre diagonal on cubes, as well as the Saneblidze-Umble diagonal on associahedra.

3. This summer, I will be present at:

ATMCS in Münster, Germany, June 21-26

Topology and its Applications in Nafpaktos, Greece, June 26-30

The International Conference “Geometry, Topology, Algebra and Number Theory, Applications”, dedicated to the 120th anniversary of Boris Delone, in Moscow, Russia, August 16-20

Symposium on Implementation and Application of Functional Languages near Amsterdam, Holland, September 1-3

International Congress on Mathematical Software in Kobe, Japan, September 13-17

(chart legend: spelled-out ordinals “First, Second, …” versus numeric ordinals “1st, 2nd, …”)

And, because this chart is kinda tricky to read, here’s the log-scaled version of the same chart:

For the log-chart, I stopped stacking the numbers.

*ETA:* Changed the log-chart from a line-chart to a bar-chart after feedback from the readership of bOINGbOING. Hello and welcome!