I’ve been talking with some topologists lately, and have seen some interesting constructions. I think these may have some use in understanding Haskell or monads.

**Simplicial objects**

A simplicial object in a category is a collection of objects $C_n$ together with $n+1$ face maps $d_j:C_n\to C_{n-1}$ and $n+1$ degeneracy maps $s_j:C_n\to C_{n+1}$ (for $0\le j\le n$), subject to a particular collection of axioms.

The axioms are modeled on the prototypical simplicial structure: if $C_n$ is taken to be the set of weakly increasing lists of $n+1$ integers, $d_j$ skips the $j$th entry, and $s_j$ repeats the $j$th entry. For the exact relations, I refer you to Wikipedia.
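As a quick illustration (my own aside, not from the original), here are the prototype maps acting on plain lists, together with a check of one of the simplicial identities:

```python
# Prototype simplicial structure on lists (a sketch of mine, 0-indexed):
# d_j skips the j-th entry, s_j repeats the j-th entry.
def d(j, xs):
    return xs[:j] + xs[j+1:]

def s(j, xs):
    return xs[:j+1] + xs[j:]

# One of the simplicial identities: d_i . d_j = d_{j-1} . d_i for i < j.
xs = [0, 1, 1, 3, 5]
assert d(0, d(2, xs)) == d(1, d(0, xs))
```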

**Classifying spaces**

Take a category C. We can build a simplicial object $C_*$ from C by the following construction:

$C_n$ is all sequences $c_0\xrightarrow{f_0}c_1\xrightarrow{f_1}\dots\xrightarrow{f_{n-1}}c_n$ of composable morphisms in $C$.

For $0<j<n$, $d_j$ composes $f_{j-1}$ with $f_j$; $d_0$ drops the first morphism and $d_n$ drops the last morphism.

$s_j$ inserts the identity map on $c_j$ at the $j$th spot.
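To make the face maps concrete, here is a small sketch (mine, with all names hypothetical) where a chain of composable morphisms is modelled as a list of Python functions:

```python
# A chain c_0 -> c_1 -> ... -> c_n is the list [f_0, ..., f_{n-1}] of n maps.
# d_0 drops the first morphism, d_n drops the last, and an inner d_j glues
# f_{j-1} and f_j together by composition.
def face(j, fs):
    n = len(fs)
    if j == 0:
        return fs[1:]
    if j == n:
        return fs[:-1]
    glued = lambda x, f=fs[j-1], g=fs[j]: g(f(x))
    return fs[:j-1] + [glued] + fs[j+1:]
```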

I ran across this problem in a reddit side-bar job-ad, and was intrigued by the task (description paraphrased to decrease googleability):

Write a function

uint64_t bitsquares(uint64_t a, uint64_t b);

such that it returns the number of integers in [a,b] that have a square number of bits set to 1. Your function should run in time less than O(b-a).

I think I see how to do it in something like logarithmic time. Here’s how:

First off, we notice that we can list all the squares between 0 and 64: these are 0, 1, 4, 9, 16, 25, 36, 49, and 64. The function I will propose runs through a binary tree of depth 64, shortcutting through branches whenever it can. In fact, changing implementation language completely, I wonder whether I couldn’t even write it comprehensively in Haskell.
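Sketching the combinatorial shortcut I have in mind, in Python rather than the C the ad asks for (all names here are my own): walk the bits of the upper bound from the top, and every time we pass a set bit, fix it to zero and count with binomial coefficients the completions of the remaining free bits that land on a square popcount.

```python
from math import comb

SQUARES = {0, 1, 4, 9, 16, 25, 36, 49, 64}

def count_upto(x):
    """Count n in [0, x] with a square number of set bits."""
    if x < 0:
        return 0
    total = 0
    ones_so_far = 0
    for i in range(63, -1, -1):
        if (x >> i) & 1:
            # Fix this bit to 0; the i lower bits are then free.
            for k in SQUARES:
                rem = k - ones_so_far
                if 0 <= rem <= i:
                    total += comb(i, rem)
            ones_so_far += 1
    # Finally, count x itself.
    if bin(x).count("1") in SQUARES:
        total += 1
    return total

def bitsquares(a, b):
    return count_upto(b) - count_upto(a - 1)
```

This runs in a constant number of steps (64 bits times at most 9 squares), well under O(b-a).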

- November 12th, 2011
- 4:09 pm

So, today I saw this in my Twitter feed:

«Phil Harvey wants us to partition {1,…,16} into two sets of equal size so each subset has the same sum, sum of squares and sum of cubes.» — posted by @MathsJam and retweeted @haggismaths.

Sounds implausible, was my first thought. My second thought was that there aren’t actually ALL that many candidate partitions: we can actually test this.

So, here’s a short collection of Python lines to _do_ that testing.

```python
import itertools

allIdx = range(1, 17)
sets = itertools.combinations(allIdx, 8)
setpairs = [(list(s), [i for i in allIdx if i not in s]) for s in sets]

def f(pair):
    (s1, s2) = pair
    return tuple(sum(n ** k for n in s1) - sum(n ** k for n in s2) for k in (1, 2, 3))

goodsets = [ss for ss in setpairs if f(ss) == (0, 0, 0)]
```
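For reference, here is a quick check (my own addition) that the partition usually cited for this puzzle — and, as far as I can tell, the one the search finds — really does balance all three power sums:

```python
# Verify sums, sums of squares and sums of cubes agree for the two halves.
s1 = [1, 4, 6, 7, 10, 11, 13, 16]
s2 = [2, 3, 5, 8, 9, 12, 14, 15]
moments = [(sum(n ** k for n in s1), sum(n ** k for n in s2)) for k in (1, 2, 3)]
# moments == [(68, 68), (748, 748), (9248, 9248)]
```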

And back comes one single response (actually, two, because the comparison is symmetric).

- December 18th, 2010
- 2:05 am

*This is a typed up copy of my lecture notes from the combinatorics seminar at KTH, 2010-09-01. This is not a perfect copy of what was said at the seminar, rather a starting point from which the talk grew.*

*In some points, I’ve tried to fill in the most sketchy and un-articulated points with some semblance of what I ended up actually saying.*

Combinatorial species started out as a theory for enumerative combinatorics, providing a toolset and calculus for formal power series (see Bergeron–Labelle–Leroux and Joyal).

As it turns out, not only are species useful for manipulating generating functions, but they provide a categorical approach that may be transplanted into other areas.

For the benefit of the entire audience, I shall introduce some definitions.

**Definition**: A *category* C is a collection of *objects* and *arrows* with each arrow assigned a *source* and *target* object, such that

- August 29th, 2009
- 7:19 am

Category theory, with an origin in algebra and topology, has found use in recent decades for computer science and logic applications. Possibly most clearly, this is seen in the design of the programming language Haskell – where the categorical paradigm suffuses the language design, and gives rise to several of the language constructs, most prominently the Monad.

In this course, we will teach category theory from first principles with an eye towards its applications to and correspondences with Haskell and the theory of functional programming. We expect students to previously or currently be taking CS242 and to have some level of mathematical maturity. We also expect students to have had contact with linear algebra and discrete mathematics in order to follow the motivating examples behind the theory expounded.

Wednesdays at 4.15.

Online notes will appear successively on the Haskell wiki on http://haskell.org/haskellwiki/User:Michiexile/MATH198

Dear blogosphere,

come this fall, I shall be teaching. My first lecture course, ever.

The subject shall be on introducing Category Theory from the bottom up, in a manner digestible for Computer Science Undergraduates who have seen Haskell and been left wanting more from that contact.

And thus comes my question to you all: what would you like to see in such a course? Is there any advice you want to give me on how to make the course awesome?

The obvious bits are obvious. I shall have to discuss categories, functors, (co)products, (co)limits, monads, monoids, adjoints, natural transformations, the Curry-Howard isomorphism, the Hom-Tensor adjunction, categorical interpretation of data types. And all of it with explicit reference to how all these things influence Haskell, as well as plenty of mathematical examples.

But what ideas can you give me to make this greater than I’d make it on my own?

I started fiddling around with R again, and ended up playing with a zipcode database.

So, first I downloaded the zipcode database at Mapping Hacks, and unpacked the zipfile in my working directory.

Then, I loaded the data into R

```
> zips <- read.table("zipcode.csv", sep=",", quote="\"", header=TRUE)
> names(zips)
[1] "zip"       "city"      "state"     "latitude"  "longitude"
[6] "timezone"  "dst"
```

So, now I have an R frame containing a lot of US cities, their geographical coordinates, and their zip codes. So we can start playing with the plot command! After rooting around a bit, I ended up settling on the smallest footprint plot dot I could make R produce, by setting the option pch=20 in the plot options. Hence, I ended up with a command basically like this:

This post is meant to give you all a very swift and breakneck introduction to Gröbner bases; it is not trying to be a nice and soft introduction, but rather leads up to announcing my latest research output.

Recall how you would run the Gaussian algorithm on a matrix. You’d take the uppermost, leftmost non-zero entry, divide its row by its value, and then use that row to eliminate everything else in the corresponding column.

Once we have the matrix in echelon form, we can do lots of things with it. Importantly, we can use substitution with the leading terms to solve systems of equations.
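As a concrete reminder of the elimination step just described, here is a minimal sketch of my own, in Python with exact rational arithmetic:

```python
# Reduced row echelon form by the Gaussian algorithm: pick a pivot, scale its
# row to 1, and eliminate its column from every other row.
from fractions import Fraction

def rref(rows):
    m = [[Fraction(x) for x in row] for row in rows]
    pivot_row = 0
    ncols = len(m[0]) if m else 0
    for col in range(ncols):
        # Find a row at or below pivot_row with a nonzero entry in this column.
        pr = next((r for r in range(pivot_row, len(m)) if m[r][col] != 0), None)
        if pr is None:
            continue
        m[pivot_row], m[pr] = m[pr], m[pivot_row]
        pv = m[pivot_row][col]
        m[pivot_row] = [x / pv for x in m[pivot_row]]
        for r in range(len(m)):
            if r != pivot_row and m[r][col] != 0:
                factor = m[r][col]
                m[r] = [a - factor * b for a, b in zip(m[r], m[pivot_row])]
        pivot_row += 1
        if pivot_row == len(m):
            break
    return m
```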

The starting point for the theory of Gröbner bases was that the same method could be used – with some modification – to produce something from a bunch of polynomials that ends up being as useful as a row-reduced echelon form.

- December 10th, 2008
- 8:50 am

Or actually, I haven’t quite yet.

But, out of a whim, I downloaded J and started to play with it while reading this set of neat notes on Functional Programming and J.

And … well … my reaction so far is kinda “Buh!? What the *** just happened there?”

The first example I ran across, tried to read, and finally managed to decipher is the following:

+/ , 5 = q: >: i. 100

This snippet is supposed to tell us how many 0s trail 100!. To get at this, we first need to figure out what, exactly, is being done here. The first observation is that 100! is the product of all integers from 1 to 100. The second is that the number of 0s trailing it is the lower of the orders of 2 and 5, respectively, dividing 100! – which, in turn, is the lower of the numbers of 2s and 5s occurring across the prime decompositions of all the integers from 1 to 100.
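In Python, my reading of the computation looks like this (the J one-liner itself only counts the 5s; since 2s are far more plentiful, the minimum is always attained by the 5s):

```python
# Count the 5s (and, for comparison, the 2s) across the prime factorizations
# of 1..100; the number of trailing zeros of 100! is the smaller count.
def count_factor(n, p):
    c = 0
    while n % p == 0:
        n //= p
        c += 1
    return c

fives = sum(count_factor(n, 5) for n in range(1, 101))
twos = sum(count_factor(n, 2) for n in range(1, 101))
trailing_zeros = min(twos, fives)  # 24
```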

- August 23rd, 2008
- 4:38 pm

This is extremely early playing around. It touches on things I’m going to be working with in Stanford, but at this point, I’m not even up on toy level.

We’ll start by generating a dataset. Essentially, I’ll take the trifolium, sample points on the curve, and then perturb each point ever so slightly.

```r
idx <- 1:2000
theta <- idx*2*pi/2000
a <- cos(3*theta)
x <- a*cos(theta)
y <- a*sin(theta)
xper <- rnorm(2000)
yper <- rnorm(2000)
xd <- x + xper/100
yd <- y + yper/100
cd <- cbind(xd, yd)
```

As a result, we get a dataset that looks like this:

So, let’s pick a sample from the dataset. What I’d really want to do now would be the witness complex construction, but I haven’t figured out enough about how R ticks to do quite that. So we’ll pick a sample and then build the 1-skeleton of the Vietoris–Rips complex using the Euclidean distance between points. This means we’ll draw a graph on the dataset with an edge between two sample points whenever they are within ε of each other.
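In Python rather than R (an illustration of my own, not the code I actually used), the 1-skeleton construction is just an ε-neighborhood graph:

```python
# Build the 1-skeleton of the Vietoris-Rips complex at scale eps: an edge
# between two points whenever their Euclidean distance is at most eps.
import math

def rips_1_skeleton(points, eps):
    edges = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if math.dist(points[i], points[j]) <= eps:
                edges.append((i, j))
    return edges
```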

First off, I’d ask your pardon for the lull in postings – this spring has been insane. It has been very much fun – traveling the world, talking about my research and meeting people I only knew electronically – and also very intense.

To break the lull, I thought I’d try to pick up what I did last summer: parallel computing on clusters. There has been a bit of blog chatter about SAGE, and how SAGE suddenly has transformed from a Really Good Idea into something that is starting to outclass most other systems in usability and flexibility.

Matlab? SciPy bundled with SAGE and the Python integration seems to be at least as good, if not better.

Maple? Mathematica? Maxima? Singular? GAP? SAGE interfaces with all those that it doesn’t emulate.

- February 21st, 2008
- 1:15 am

I saw the Cerebrate solve the first Scripting Games challenge: Pairing off. And immediately thought “I can do that in Haskell too”.

So, here it is.

```haskell
import Data.List

cards = [(1,7),(0,5),(3,7),(2,7),(2,13)]

countpairs [] = 0
countpairs [a] = 0
countpairs (a:as) = length . filter (((snd a)==) . snd) $ as

pairingOff = sum . map countpairs . tails
```

And that’s that. Alas, the actual competition only takes Perl, VBScript and PowerShell, so I won’t be submitting this.
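For comparison, the same count in Python (my translation, not part of the original solution): a card pair scores whenever the two values (the second components) agree.

```python
# Count pairs of cards sharing the same value.
from itertools import combinations

cards = [(1, 7), (0, 5), (3, 7), (2, 7), (2, 13)]
pairs = sum(1 for a, b in combinations(cards, 2) if a[1] == b[1])
```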

- February 15th, 2008
- 7:59 pm

Brent Yorgey wrote a post on using category theory to formalize patch theory. In the middle of it, he talks about the need to commute a patch to the end of a patch series, in order to apply a patch undoing it. He suggests a necessary condition to do this is that, given patches P and Q, we need to be able to find patches Q’ and P’ such that PQ=Q’P’, and preferably such that Q’ and P’ capture some of the info in P and Q.

However, as stated this is not enough to solve the issue: for one thing, we can simply set Q’=P and P’=Q, and the condition is satisfied.

Now, I wonder whether we can solve this by using PROPs (or possibly di-operads or something like that). Let’s represent a document as a list of some sort of tokens: we’ll consider, for each n, the set of all lists of length n, together with operations that take a list of length n and return a list of length m.

- December 13th, 2007
- 3:05 pm

Last week, the news hit the blogosphere that Google had released a beta API for generating graphs using a reasonably easy and transparent GET parametrisation.

Inspired by this, and inspired by my early playing around with Ruby on Rails, I decided to whack together a Rails plugin that takes care of building the Google Charts IMG tag using what I hope is reasonably easy to use syntax.

I have a test-site using random data up for playing around with it.

The test-site as such runs on Ruby on Rails. The controller does some parsing and setting up of relevant arrays, and primarily generates random data for plotting.

The view has the following source code:

```
<%= google_chart(@data, @alt_text, @options) %>

<p><% form_tag "" do %>
  <label for="options[type]">Type</label>
  <%= select_tag "options[type]", options_for_select(@typeopts, @options["type"]) %>
  <label for="options[title]">Title</label>
  <%= text_field_tag "options[title]", @options["title"] %>
```

- December 9th, 2007
- 10:13 pm

So, there is this one big and neat framework called Rails, building on top of this one neat new programming language called Ruby.

And one of the things that makes Rails so Damn Neat is that if you only set things up the right way around, it guesses almost everything you need it to guess for you.

One of the ways it does this is by *pluralization*. Basically, the model `Foo` is defined in `app/models/foo.rb`, and it accesses the database table `foos`.

So, when talking a good friend through the basics, we created the table `persons` and generated the model `Person`. And promptly got an error from the framework.

It turns out that the pluralization of *person* is *people*. I wonder what other irregularities they built into the system. If I have a model called `Index`, does Rails expect it to read from the database table `indexes` or from `indices`?

- September 26th, 2007
- 4:25 am

```
dynkin:~/magma> magma
Magma V2.14-D250907    Wed Sep 26 2007 13:19:51 on dynkin    [Seed = 1]
Type ? for help.  Type <Ctrl>-D to quit.
Loading startup file "/home/mik/.magmarc"
> Attach("homotopy.m");
> Attach("assoc.m");
> Aoo := ConstructAooRecord(DihedralGroup(4),10);
> S := CohomologyRingQuotient(Aoo`R);
> CalculateHighProduct(Aoo,[x,y,x,y]);
z
> exit;
Total time: 203.039 seconds, Total memory usage: 146.18MB
```

And this is one major reason for the lack of updates recently.

*IMPORTANT: Note that the implementation herein is severely flawed. Do not use this.*

One subject I spent a lot of time thinking about this spring was taking tensor products of A_{∞}-algebras. This turns out to have already been solved – with a very combinatorial and pretty neat solution.

Recall that we can describe ways to associate operations, and homotopies of associators, by a sequence of polyhedra K_{n}, n=2,3,…, called the associahedra. An A_{∞}-algebra can be defined as a map from the *cellular chains on the associahedra* to the *n-ary endomorphisms* of a graded vector space.
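For orientation (an aside of mine, not from the original post): $K_n$ is an $(n-2)$-dimensional polytope whose vertices correspond to the full bracketings of $n$ factors, so their number is a Catalan number:

```latex
\dim K_n = n-2, \qquad
\#\{\text{vertices of } K_n\} = C_{n-1} = \frac{1}{n}\binom{2n-2}{n-1}.
```

So $K_2$ is a point, $K_3$ an interval, and $K_4$ the famous pentagon.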

If this was incomprehensible to you, no matter for this post. The essence is that by figuring out how to deal with these polyhedra, we can figure out how to deal with A_{∞}-algebras.

- February 9th, 2007
- 5:34 pm

Syntaxfree writes over at his blog about a silly little toy he wrote, using the PFP library, to generate random text.

Now, his text is unreadable. I mean, it’s even unpronounceable. Why? Because he’s looking at bigram distributions of *letters*.

Great, I thought, I’ll do him one better. Random text using bigram distributions on words must surely be a LOT better than random text using bigram distributions on letters. At least the words come out readable, and they may even come out in a decent order.

So I sat down with his code, and hacked, tweaked, and monadized it to this

```haskell
module Test where

import Probability
import Data.Char
import Control.Monad

filename = "kjv.genesis"

bigram t = zip ws (tail ws) where
  ws = (words . map toLower . filter (\x -> isAlpha x || isSpace x)) t

distro = uniform . bigram
```
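The same preprocessing in Python (my rendition of the Haskell above, plus a successor table one might then sample from):

```python
# Lowercase the text, keep only letters and whitespace, split into words,
# and pair each word with its successor.
from collections import defaultdict

def bigrams(text):
    ws = "".join(c for c in text.lower() if c.isalpha() or c.isspace()).split()
    return list(zip(ws, ws[1:]))

def successor_table(text):
    # For each word, collect the words observed to follow it.
    table = defaultdict(list)
    for w1, w2 in bigrams(text):
        table[w1].append(w2)
    return table
```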

- December 30th, 2006
- 7:31 pm

Inspired by other bloggers on Planet Haskell, I thought I’d just sit down and write a retrospection post, reviewing the past year – primarily from angles such as mathematics, computers and my generic life situation.

It divides neatly into two different sections: the months as a commercial programmer and the months as PhD student and academic careerist.

The year began with me still working for Teleca Systems, doing security consulting for Stockholm-based firms and making frequent trips back home.

Then, as the year went on and my PhD applications accumulated, I started getting results. I got invited to Bonn for an interview with the Homology and Homotopy graduate school program – which in the end turned me down because I was more of a homological algebraist than a topologist. And the week after that, I was invited to Jena to interview for a position doing PhD work on computational homological algebra. The interview went well, the potential advisor was nice (and a onetime roleplaying gamer, to sweeten the deal further), and I got the position just a few days later.

- October 28th, 2006
- 9:43 pm

In a recent post, pozorvlak reminded me of one of the reasons it is important to have a good, obvious, and quick-to-write programming language around.

He, like me, is a mathematician: spending his time thinking, finding patterns, and trying to formulate (more or less) absolute proofs that his patterns hold all the time, or alternatively ways to demonstrate that they are not universal.

In the post linked above, he starts with a neat little exercise, gets interested, and goes off to look for more examples. These show a very clear pattern, and after following this pattern quite some way out, he finally believes in it enough to start searching for a proof – which he also finds.

- October 18th, 2006
- 3:37 pm

This term, I’m listening to a lecture course on Computational Group Theory. As a good exercise, I plan to implement everything we talk about as Haskell functions as well.

The first lecture was mainly an introduction to the area, ending with a very naïve

- September 19th, 2006
- 5:55 pm

As we left off the last installment, we were just about capable of opening up a window and drawing some basic things in it by giving coordinate lists to the command renderPrimitive. The programs we built suffered under a couple of very restrictive and ugly constraints – for one, they weren’t really very modular: the code would have been much clearer had we farmed out important subtasks to other modules. For another, we never even considered the fact that some manipulations would not necessarily be good to do on the entire picture.

## Some modules

To deal with the first problem, let’s break apart our program a little bit, forming several more or less independent code files linked together to form a whole.

First off, HelloWorld.hs – containing a very generic program skeleton. We will use our module Bindings to setup everything else we might need, and tie them to the callbacks.

- September 14th, 2006
- 5:28 pm

Since the content rate of haskell-related posts is going up, the feed of this blog will get added to Planet Haskell. Hi, Planet!

- September 14th, 2006
- 4:17 pm

After having failed to follow the googled tutorial in HOpenGL programming, I thought I’d write down the steps I actually can get to work, in a tutorial-like fashion. It may be a good idea to read this in parallel with the tutorial linked, since Panitz actually brings a lot of good explanations, even though his syntax isn’t up to speed with the latest HOpenGL at all points.

## Hello World

First of all, we’ll want to load the OpenGL libraries, throw up a window, and generally get to grips with what needs to be done to get a program running at all.

```haskell
import Graphics.Rendering.OpenGL
import Graphics.UI.GLUT

main = do
  (progname, _) <- getArgsAndInitialize
  createWindow "Hello World"
  mainLoop
```

This code throws up a window, with a given title. Nothing more happens, though. This is the skeleton that we’ll be building on to. Save it to HelloWorld.hs and compile it by running

ghc -package GLUT HelloWorld.hs -o HelloWorld

The weekly reports have been dead for a while. Reason? The blog has been dead for a while.

## Hardware woes

The old computer running this website developed problems all of a sudden about 3 weeks ago. These problems appeared as a complete lockdown of the system – no response to anything. So my brother – with me on the other side of a telephone line – tried to reboot the box, but couldn’t get it back online again. He was headed out to a LARP within hours anyway – and so couldn’t really do much more about it.

Right.

End result? I joined forces with a good friend of mine; we split the hardware costs for a slick new box – an Asus barebone with a 64-bit processor and a gig of RAM. It received the harddrive and network interface from the old box, and was with that good to go – only, the processor architecture had changed; so for optimal performance, it would be a nice idea to actually use a new system install that took advantage of the extra available bit width.

- February 21st, 2006
- 11:06 am

Today’s web browsing led me to John Baez’ Finds in Mathematical Physics for week 226, which led me to snoop around John Baez’ homepage, which in turn led me to stumble across the Geometry of Computation school and conference, taking place in Marseilles right now.

This, in turn, leads to several different themes for me to discuss.

# Cryptographic hashes

In the week’s Finds, John Baez comes up to speed with the cryptographic community on the broken state of SHA-1 and MD5. Now, this is a drama that has been developing quite quickly over the last 1–1½ years. It began heating up seriously in early 2005, when Wang, Yin and Yu presented a paper detailing a serious attack against SHA-1. Since then, more and more tangible evidence for the inadvisability of MD5 and the upcoming problems with SHA-1 has arrived – such as several example objects with different contents and identical MD5 hashes: postscript documents (a letter of recommendation and an access-rights grant), X.509 certificates, etc.

- February 10th, 2006
- 3:08 pm

I stumbled across the blog Epsilon-Delta, with a brilliant article series on mathematics as a key to effective programming. This in itself merits a closer look at the blog, which in turn merits it a place in my blogroll. Go and read it, you too!