Michi's Bloghttp://blog.mikael.johanssons.org/2015-05-09T10:00:00+02:00Winding down this blog2015-05-09T10:00:00+02:00Michitag:blog.mikael.johanssons.org,2015-05-09:winding-down-this-blog.html<p>I haven't posted here in over a year. In addition, the WordPress installation I had has been compromised.</p>
<p>I know people have been reading my posts: I am loath to just pull the content altogether from the web. But I really don't want to keep maintaining this in the face of an increasingly hostile ecosystem.</p>
<p>I'm converting my blog to a static site, generated by Pelican. I might write more, but right now it doesn't seem to be a priority in my life.</p>
<p>It's been a good 10 years.</p>
Classifying spaces and Haskell2014-05-15T03:27:00+02:00Michitag:blog.mikael.johanssons.org,2014-05-15:classifying-spaces-and-haskell.html<p>I've been talking with some topologists lately, and have seen some
interesting constructions. I think these may have some use in
understanding Haskell, or monads in particular.</p>
<p><strong>Simplicial objects</strong></p>
<p>A simplicial object in a category is a collection of objects $C_n$
together with face maps $d_j:C_n\to C_{n-1}$ and degeneracy maps
$s_j:C_n\to C_{n+1}$, for $0\le j\le n$, subject to a particular
collection of axioms.</p>
<p>The axioms are modeled on the prototypical simplicial structure: if
$C_n$ is taken to be a (weakly) increasing list of integers, $d_j$
skips the $j$th entry, and $s_j$ repeats the $j$th entry. For the exact
relations, I refer you to
<a class="reference external" href="http://en.wikipedia.org/wiki/Simplicial_set#Face_and_degeneracy_maps">Wikipedia</a>.</p>
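For reference, the relations in question are the standard simplicial identities, stated here in the usual indexing convention:

```latex
\begin{aligned}
d_i d_j &= d_{j-1} d_i & &\text{if } i < j,\\
s_i s_j &= s_{j+1} s_i & &\text{if } i \le j,\\
d_i s_j &= s_{j-1} d_i & &\text{if } i < j,\\
d_j s_j &= \mathrm{id} = d_{j+1} s_j, & &\\
d_i s_j &= s_j d_{i-1} & &\text{if } i > j+1.
\end{aligned}
```

Informally: faces and degeneracies commute after an index shift, and deleting an entry you just repeated gets you back where you started.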
<p><strong>Classifying spaces</strong></p>
<p>Take a category C. We can build a simplicial object $C_*$ from C by
the following construction:</p>
<div class="line-block">
<div class="line">$C_n$ is all sequences
$c_0\xrightarrow{f_0}c_1\xrightarrow{f_1}\dots\xrightarrow{f_{n-1}}c_n$
of composable morphisms in $C$.</div>
<div class="line-block">
<div class="line">$d_j$ composes the $j$th and $j+1$st morphisms. $d_0$ drops the
first morphism and $d_n$ drops the last morphism.</div>
<div class="line">$s_j$ inserts the identity map at the $j$th spot.</div>
</div>
</div>
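To make this concrete, here is a small Haskell sketch of my own; to keep the types simple it restricts to chains of endomorphisms of a single type, and the function names are my inventions, not anything from a library:

```haskell
-- A hedged sketch: a chain of n composable morphisms, here all
-- endomorphisms of one type so that everything typechecks.
-- face 0 drops the first morphism, face n drops the last, and the
-- inner face j composes the two morphisms meeting at the j-th object.
face :: Int -> [a -> a] -> [a -> a]
face j fs
  | j == 0         = tail fs
  | j == length fs = init fs
  | otherwise      = pre ++ (g . f) : post
  where (pre, f:g:post) = splitAt (j - 1) fs

-- degeneracy j inserts the identity morphism at the j-th spot
degeneracy :: Int -> [a -> a] -> [a -> a]
degeneracy j fs = pre ++ id : post
  where (pre, post) = splitAt j fs
```

Functions cannot be compared for equality, but applying the results to a sample argument shows, for example, that face 1 [(+1), (*2)] is the single composite map sending 3 to 8.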
<p>We can build a topological space from a simplicial object: dimension by
dimension, we glue the product $\Delta^{n-1}\times C_n$ of an
$n-1$-dimensional simplex and the object $C_n$ to the lower dimensional
construction -- with the face maps dictating the gluing step.</p>
<p><strong>Classifying space of Hask</strong></p>
<p>Consider the category of Haskell types and functions. The classifying
space here will have an $n$-dimensional simplex for any sequence of
$n+1$ composable functions; glued to the lower-dimensional simplices by
composing adjacent functions.</p>
<p>I honestly am not entirely sure what structure, if any, can be found
here: this might model $\beta$-reductions, or it might be something
utterly trivial. I wonder what can be found here, but I currently have
no answers.</p>
<p><strong>Classifying space of a Monad</strong></p>
<p>We can do the same thing in the category of endofunctors. Recall that a
Monad is an endofunctor $T$ with natural transformations $b:T^2\to T$
and $u:1\to T$. Just as for any monoid, this fits into the simplicial
framework above. The simplicial object here is $T_*$ with
$T_n=T^{n+1}$. The face maps $d_j$ compose two adjacent applications
of $T$ using $b$, and the degeneracy maps $s_j$ introduce a new $T$ by
applying $u$.</p>
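As a sanity check of this shape, here is a sketch of my own for the list monad, where the multiplication b is join (concatenation) and the unit u is return: the two inner face maps out of the triple application merge adjacent layers of nesting.

```haskell
import Control.Monad (join)

-- Hedged sketch: for the list monad, T_2 = [[[a]]], and the face
-- maps merge two adjacent applications of [] using join.
d0 :: [[[a]]] -> [[a]]
d0 = join            -- collapse the two outermost layers

d1 :: [[[a]]] -> [[a]]
d1 = map join        -- collapse the two innermost layers

-- the degeneracy maps insert a fresh layer using return = pure
s0 :: [[a]] -> [[[a]]]
s0 = pure

s1 :: [[a]] -> [[[a]]]
s1 = map pure
```

One can also check simplicial identities directly, for instance that d0 . s0 is the identity, which is exactly the monad law join . return = id.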
<div class="line-block">
<div class="line">For an example, let's consider the Writer monad, for a monoid <tt class="docutils literal">w</tt>:</div>
<pre class="literal-block">
newtype Writer w a = Writer { runWriter :: (a, w) }

return a           = (a, mempty)
join ((a, w0), w1) = (a, w0 `mappend` w1)   -- the multiplication b
</pre>
</div>
<div class="line-block">
<div class="line">The endofunctor here is <tt class="docutils literal">Writer w :: * <span class="pre">-></span> *</tt>. Iterating this
endofunctor gets us <tt class="docutils literal">(Writer <span class="pre">w)^n</span> a = <span class="pre">((...((a,w),w),...),w)</span></tt>. A
typical value would be something like</div>
<pre class="literal-block">
(((...(a, w0), w1), w2), ..., wn) :: (Writer w)^n a
</pre>
</div>
<p>We get a simplicial structure here by letting the $j$th face map apply
<tt class="docutils literal">mappend</tt> to <tt class="docutils literal">wj</tt> and <tt class="docutils literal">w(j+1)</tt>. The $j$th degeneracy map inserts
<tt class="docutils literal">mempty</tt> at the $j$th spot.</p>
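A sketch of my own of these maps, representing an iterated Writer value by its result together with the list of monoid entries in its history; the flattened representation and the names are assumptions for illustration:

```haskell
import Data.Monoid (Any(..))

-- face map d_j on a history of writes: merge the j-th and (j+1)-st
-- monoid entries with mappend, as described in the text
face :: Monoid w => Int -> (a, [w]) -> (a, [w])
face j (x, ws) = (x, pre ++ (u `mappend` v) : post)
  where (pre, u:v:post) = splitAt j ws

-- degeneracy map s_j: insert mempty at the j-th spot
degeneracy :: Monoid w => Int -> (a, [w]) -> (a, [w])
degeneracy j (x, ws) = (x, pre ++ mempty : post)
  where (pre, post) = splitAt j ws
```

With w = Any, for instance, face 0 merges the first two writes with logical or: the history [Any True, Any False] contracts to [Any True].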
<p>Let's get more concrete. Consider <tt class="docutils literal">Writer Any Bool</tt>. Recall that
<tt class="docutils literal">Any w0 `mappend` Any w1 = Any (w0 || w1)</tt>. For any value of type <tt class="docutils literal"><span class="pre">x::a</span></tt>,
there are $2^n$ values in <tt class="docutils literal">(Writer <span class="pre">w)^n</span> a</tt>: one for each possible
sequence of inputs to the Writer monad. Each of these produces an
$n-1$-dimensional simplex that glues to the lower-dimensional simplices
by fusing neighbouring points.</p>
<p>In <tt class="docutils literal">(Writer Any <span class="pre">Bool)^0</span></tt> there are two vertices: <tt class="docutils literal">(x,False)</tt> and
<tt class="docutils literal">(x,True)</tt>.</p>
<div class="line-block">
<div class="line">In <tt class="docutils literal">(Writer Any <span class="pre">Bool)^1</span></tt> there are four edges: <tt class="docutils literal">(x,True, True)</tt>,
<tt class="docutils literal">(x,True, False)</tt>, <tt class="docutils literal">(x,False, True)</tt> and <tt class="docutils literal">(x,False, False)</tt>.</div>
<div class="line-block">
<div class="line">These connect to the vertices like this:</div>
<div class="line"><a class="reference external" href="http://blog.mikael.johanssons.org/wp-content/uploads/2014/05/writer1-1.png"><img alt="writer1 (1)" src="http://blog.mikael.johanssons.org/wp-content/uploads/2014/05/writer1-1-300x138.png" /></a></div>
</div>
</div>
<div class="line-block">
<div class="line">In <tt class="docutils literal">(Writer Any <span class="pre">Bool)^2</span></tt> there are eight triangles. They glue as
follows -- each triangle gets its three edges in order:</div>
<div class="line-block">
<div class="line"><tt class="docutils literal">FFF</tt> to <tt class="docutils literal">FF, FF, FF</tt></div>
<div class="line"><tt class="docutils literal">FFT</tt> to <tt class="docutils literal">FT, FT, FT</tt></div>
<div class="line"><tt class="docutils literal">FTF</tt> to <tt class="docutils literal">TF, TF, FT</tt></div>
<div class="line"><tt class="docutils literal">FTT</tt> to <tt class="docutils literal">TT, TT, FT</tt></div>
<div class="line"><tt class="docutils literal">TFF</tt> to <tt class="docutils literal">FF, TF, TF</tt></div>
<div class="line"><tt class="docutils literal">TFT</tt> to <tt class="docutils literal">FT, TT, TT</tt></div>
<div class="line"><tt class="docutils literal">TTF</tt> to <tt class="docutils literal">TF, TF, TT</tt></div>
<div class="line"><tt class="docutils literal">TTT</tt> to <tt class="docutils literal">TT, TT, TT</tt></div>
</div>
</div>
<p>It continues like this as we add higher and higher dimensional faces.
Any history of $n$ writes to the monad produces an $n-1$-dimensional
object that connects to lower dimensions according to how the history
could have been contracted by the monoid action at various points.</p>
<div class="line-block">
<div class="line">For the monad <tt class="docutils literal">Writer All Bool</tt>, the same vertices, edges,
triangles, etc. show up, but they connect differently:</div>
<div class="line-block">
<div class="line"><tt class="docutils literal">FF</tt> to <tt class="docutils literal">F, F</tt></div>
<div class="line"><tt class="docutils literal">FT</tt> to <tt class="docutils literal">T, F</tt></div>
<div class="line"><tt class="docutils literal">TF</tt> to <tt class="docutils literal">F, F</tt></div>
<div class="line"><tt class="docutils literal">TT</tt> to <tt class="docutils literal">T, T</tt></div>
<div class="line"><tt class="docutils literal">FFF</tt> to <tt class="docutils literal">FF, FF, FF</tt></div>
<div class="line"><tt class="docutils literal">FFT</tt> to <tt class="docutils literal">FT, FT, FF</tt></div>
<div class="line"><tt class="docutils literal">FTF</tt> to <tt class="docutils literal">TF, FF, FF</tt></div>
<div class="line"><tt class="docutils literal">FTT</tt> to <tt class="docutils literal">TT, FT, FT</tt></div>
<div class="line"><tt class="docutils literal">TFF</tt> to <tt class="docutils literal">FF, FF, TF</tt></div>
<div class="line"><tt class="docutils literal">TFT</tt> to <tt class="docutils literal">FT, FT, TF</tt></div>
<div class="line"><tt class="docutils literal">TTF</tt> to <tt class="docutils literal">TF, TF, TF</tt></div>
<div class="line"><tt class="docutils literal">TTT</tt> to <tt class="docutils literal">TT, TT, TT</tt></div>
</div>
</div>
<p><strong>What can we use this for?</strong></p>
<p>And here, at last, is my reason for writing all this. This construction
gets us a topological space, where cells in the space (possibly even
points in the space) correspond to possible values of the iterated monad
datatype, and where connectivity in the space is driven by the monad.
This gives us a geometric, a topological, handle on the programming
language structure.</p>
<div class="line-block">
<div class="line">Is this good for anything?</div>
<div class="line-block">
<div class="line">Are there any sort of questions we might be able to approach with
this sort of perspective?</div>
<div class="line">I'll be thinking about it, but if anyone out there has any sort of
idea I'd love to hear from you.</div>
</div>
</div>
Topology of Politics — Popular topology lecture2012-09-21T13:39:00+02:00Michitag:blog.mikael.johanssons.org,2012-09-21:topology-of-politics-seminar-talk.html<p>I will be speaking on my research into topological data analysis for
political data sets at the University of Edinburgh, at 4:10pm on 15th
October, in Room 6206, James Clerk Maxwell Building.</p>
<div class="section" id="data-analysis-on-politics-data">
<h2>Data Analysis on politics data</h2>
<p>Data analysis has played a growing role in politics for many years now;
<a class="reference external" href="http://fivethirtyeight.blogs.nytimes.com">analyzing polling data to predict outcomes of
elections</a> is perhaps the
most well-known application.</p>
<p>A different approach that has gotten more and more traction lately is to
analyze the voting behaviour of elected representatives as a way to
understand the inner workings of parliaments, and to monitor the elected
representatives to make sure they behave as they once promised. Sites
like <a class="reference external" href="http://www.govtrack.us/">GovTrack</a> and
<a class="reference external" href="http://voteview.com/">VoteView</a> bring machine learning and data
analysis tools to the citizens, and illustrate and visualize the
groupings and behaviour in political administration.</p>
<p>One key step to make parliamentary data accessible for data analysis is
to recognize that the data set is inherently geometric; the sequence of
votes cast in a parliamentary session can be seen as coordinates for a
very high-dimensional vector -- say +1 for Yea, -1 for Nay, and 0
otherwise. This way, each member of parliament is represented by a
vector; and members who agree with each other will be close to each
other in the resulting vector space.</p>
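A minimal sketch of this encoding, with invented names (Vote, encode, distance), might look as follows:

```haskell
-- Hedged sketch: encode a voting record as a vector with
-- +1 for Yea, -1 for Nay, and 0 otherwise, then compare two
-- members by Euclidean distance in the resulting vector space.
data Vote = Yea | Nay | Other deriving (Eq, Show)

encode :: [Vote] -> [Double]
encode = map value
  where value Yea   =  1
        value Nay   = -1
        value Other =  0

distance :: [Double] -> [Double] -> Double
distance xs ys = sqrt (sum (zipWith (\x y -> (x - y) ^ 2) xs ys))
```

Members with identical records sit at distance 0, and each outright disagreement (Yea against Nay) contributes 4 to the squared distance, so like-minded members cluster together.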
<div class="line-block">
<div class="line">We can use dimension reduction techniques on these vectors to find
essential structures within parliaments. At a first approach, this
tends to uncover the party structure of parliament:</div>
<div class="line-block">
<div class="line"><img alt="US House of Representatives 2009" src="http://blog.mikael.johanssons.org/wp-content/uploads/2011/01/congressMembers2009.png" /></div>
<div class="line">This image shows the US House of Representatives, laid out in 2d by
PCA coordinates from their voting records. The span from Democrats
(blue) to Republicans (red) is clearly seen.</div>
</div>
</div>
<div class="line-block">
<div class="line"><img alt="UK House of Commons" src="http://blog.mikael.johanssons.org/wp-content/quiz3.png" /></div>
<div class="line-block">
<div class="line">This plot shows the UK House of Commons during the period 2001-2005,
again with 2d positions generated by a PCA dimension reduction on the
voting data. Colours denote political parties, with Labour (red), Tory
(blue), and LibDem (yellow) the dominant parties, and smaller parties
in other colours.</div>
</div>
</div>
</div>
<div class="section" id="topological-data-analysis">
<h2>Topological Data Analysis</h2>
<p>Topological Data Analysis is a brand new field of research, where
algebraic and combinatorial topology is used to refine geometric
methods. Topology encodes «closeness» and provides a robust approach to
qualitative features in data sets.</p>
<p>In particular, for the political data, we are using a technique called
Mapper. This technique, invented in the Applied Topology group at
Stanford, produces easy-to-analyze topological models induced by point
clouds: the points are divided into groups using measurement functions,
and then clustered into closely connected groups. Whenever groups
overlap, they are connected, producing a simplified model for the space
suggested by the data points.</p>
<p>Applying this to the political data, we can discover sub-groups in the
US House of Representatives that were not visible in a PCA dimension
reduction. Not only that, but we can measure political unrest by the
fragmentation of parliament into larger or smaller interest groups.</p>
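Mapper proper is more general than this, but a toy sketch of the idea, assuming a one-dimensional point cloud, the identity as measurement function, and single-linkage clustering at a fixed scale eps (all names my own), might look like:

```haskell
import Data.List (sort)

-- Toy Mapper sketch in 1-d: the cover is a family of overlapping
-- intervals, and clustering is single linkage at scale eps
-- (split the sorted points wherever a gap exceeds eps).
clusters :: Double -> [Double] -> [[Double]]
clusters eps = foldr step [] . sort
  where step x []             = [[x]]
        step x (c@(y:_):rest)
          | y - x <= eps      = (x : c) : rest
          | otherwise         = [x] : c : rest

-- nodes of the Mapper graph: the clusters within each cover element
mapperNodes :: Double -> [(Double, Double)] -> [Double] -> [[Double]]
mapperNodes eps cover pts =
  concat [ clusters eps [p | p <- pts, lo <= p && p <= hi]
         | (lo, hi) <- cover ]

-- connect two nodes whenever their clusters share a point
mapperEdges :: [[Double]] -> [(Int, Int)]
mapperEdges ns =
  [ (i, j) | (i, c1) <- zip [0..] ns, (j, c2) <- zip [0..] ns
           , i < j, any (`elem` c2) c1 ]
```

With two well-separated groups of points under an overlapping interval cover, clusters from overlapping intervals share points and get connected, while the separated groups end up in different components of the resulting graph.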
<table border="1" class="docutils">
<colgroup>
<col width="33%" />
<col width="33%" />
<col width="33%" />
</colgroup>
<tbody valign="top">
<tr><td><a class="reference external" href="http://blog.mikael.johanssons.org/wp-content/uploads/2012/09/2009.png"><img alt="image2" src="http://blog.mikael.johanssons.org/wp-content/uploads/2012/09/2009.png" /></a></td>
<td><a class="reference external" href="http://blog.mikael.johanssons.org/wp-content/uploads/2012/09/2010.png"><img alt="image3" src="http://blog.mikael.johanssons.org/wp-content/uploads/2012/09/2010.png" /></a></td>
<td><a class="reference external" href="http://blog.mikael.johanssons.org/wp-content/uploads/2012/09/2011.png"><img alt="image4" src="http://blog.mikael.johanssons.org/wp-content/uploads/2012/09/2011.png" /></a></td>
</tr>
<tr><td>2009</td>
<td>2010</td>
<td>2011</td>
</tr>
</tbody>
</table>
<p>Mapper analyses of House of Representatives voting records, using <a class="reference external" href="http://ayasdi.com">Iris
from Ayasdi</a>.</p>
</div>
Jonas Thente and innumeracy2012-07-02T23:55:00+02:00Michitag:blog.mikael.johanssons.org,2012-07-02:jonas-thente-och-innumeracy.html<p><a class="reference external" href="http://www.dn.se/kultur-noje/kronikor/jonas-thente-matematik-som-huvudamne-tillhor-datiden--politikerna-bor-satsa-fram">Jonas Thente has
sounded off in DN.</a>
Mathematics, he argues, is an outdated school subject, since everyone has a calculator.</p>
<p>I heard a story today that I would like to bring up. A good friend of
mine (and a university mathematics teacher) explained how he handles
cheaters on his mathematics exams. He sits down with the student and
suggests that they go through all the explanations for why the suspected
cheating is not cheating; for each excuse, they assess the probability
that it really could have happened that way, and combine the
probabilities.</p>
<div class="line-block">
<div class="line">“I missed that we weren't allowed to use calculators.”</div>
<div class="line-block">
<div class="line">“All right, then. Shall we say 50/50 on that one?”</div>
<div class="line">“Mmmmm.”</div>
</div>
</div>
<p>And so it continues. At last, after a suitable number of excuses, they
compute the probability that everything really happened the way the
student said, using the probabilities the student himself agreed to,
and the student gets the corresponding share of the points he would
otherwise have received. In exchange, the student avoids the
disciplinary board and the CHEATER stamp, and takes nothing worse than
a failed exam as punishment…</p>
<p>Every student caught cheating takes the deal. Every single one thinks
it sounds like a good deal when it all begins.</p>
<p>Thente thinks nobody needs any intuition for numbers, since we all have
calculators anyway. Does that opinion still hold when we discuss tax
rates in Almedalen? Does it still hold when we try to work out whether a
grandmother who downloaded Pippi Longstocking from a torrent for her
grandchild really cost Hollywood a couple of million kronor? Or is it
perhaps that mathematics is still relevant, because it provides the
skill to think about and handle numerical quantities?</p>
Recursively counting numbers with fixed bit counts2012-05-13T01:04:00+02:00Michitag:blog.mikael.johanssons.org,2012-05-13:recursively-counting-numbers-with-fixed-bit-counts.html<p>I ran across this problem in a reddit side-bar job-ad, and was intrigued
by the task (description paraphrased to decrease googleability):</p>
<blockquote>
<div class="line-block">
<div class="line">Write a function</div>
<pre class="literal-block">
uint64_t bitsquares(uint64_t a, uint64_t b);
</pre>
<div class="line">such that it returns the number of integers in [a,b] that have a
square number of bits set to 1. Your function should run in less
than O(b-a) time.</div>
</div>
</blockquote>
<p>I think I see how to do it in something like logarithmic time. Here's
how:</p>
<p>First off, we notice that we can list all the squares between 0 and 64:
these are 0, 1, 4, 9, 16, 25, 36, 49, and 64. The function I will
propose runs through a binary tree of depth 64, shortcutting through
branches whenever it can. In fact, changing implementation language
completely, I wonder whether I cannot even write it comprehensibly in
Haskell.</p>
<p>The key insight I had was that whenever you try to count the numbers
with a bitcount matching some element of some list, within the
bounds of 0b0000...0000 and 0b000...01111...11, the problem reduces to a
simple binomial coefficient -- n choose k gives the number of numbers
with k bits set among the last n bits. Furthermore, we can reduce the
total size of the problem by removing a matching prefix from the two
numbers we test from.</p>
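A quick check of that claim, using the multiplicative form of the binomial coefficient and the popCount from Data.Bits (a sketch of my own):

```haskell
import Data.Bits (popCount)

-- n choose k, in multiplicative form
choose :: Integer -> Integer -> Integer
choose n 0 = 1
choose 0 _ = 0
choose n k = choose (n - 1) (k - 1) * n `div` k

-- count the numbers in [0 .. 2^n - 1] with exactly k bits set
countWithBits :: Int -> Int -> Int
countWithBits n k =
  length [x | x <- [0 .. 2 ^ n - 1 :: Int], popCount x == k]
```

For instance, among 0..15 exactly six numbers have two bits set, agreeing with 4 choose 2.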
<p>Hence, we trace how many bits off the top agree between the two numbers.
We count the set bits among these, subtract them from each
representative in the list of squares, giving us the counts we need to
hit in the remainder.</p>
<p>Write a' for a with the agreeing prefix removed, and similarly for b'.
Then the total count is the count for the reduced things from a' to
0b000...01...111 plus the count for the reduced things from 0 to b'. The
reduction count for b' needs to be 1 larger than the one for a' since in
one case, we are working with the prefix before the varying bit
increases, and in the other, we work with the prefix after the varying
bit increases -- the latter count is not <em>really</em> from 0 to b', but this
is a useful proxy for the count from 0b0000...010...000 to b' with the
additional high bit set.</p>
<div class="line-block">
<div class="line">In code, I managed to boil this down to:</div>
</div>
<pre class="literal-block">
import Data.Word
import Data.Bits hiding (popCount)  -- we roll our own popCount below
import Data.List (elemIndices)

-- # integers in [a,b] with a square # of 1-bits
bitsquare :: Word64 -&gt; Word64 -&gt; Word64
bitsquare a b = bitcountin a b squares

squares = [0,1,4,9,16,25,36,49,64] :: [Word64]
allones = [fromIntegral (2^k - 1) | k &lt;- [1..64]]

choose n 0 = 1
choose 0 k = 0
choose n k = (choose (n-1) (k-1)) * n `div` k

popCount :: Word64 -&gt; Word64
popCount w = sum [1 | x &lt;- [0..63], testBit w x]

-- # integers in [a,b] with 1-counts in counts
bitcountin :: Word64 -&gt; Word64 -&gt; [Word64] -&gt; Word64
bitcountin a b counts
  | a &gt; b  = 0
  | a == b = if popCount b `elem` counts then 1 else 0
  | (a == 0) &amp;&amp; (b `elem` allones) =
      sum [choose n k | n &lt;- [popCount b], k &lt;- counts]
  | otherwise =
      (bitcountin a' low [c - lobits | c &lt;- counts, c &gt;= lobits]) +
      (bitcountin hi  b' [c - hibits | c &lt;- counts, c &gt;= hibits])
  where
    agreements  = [(testBit a n) == (testBit b n) | n &lt;- [0..63]]
    agreeI      = elemIndices False agreements
    prefixIndex = last agreeI
    prefixCount = sum [1 | x &lt;- [prefixIndex..63], testBit a x]
    a'  = a .&amp;. (2^prefixIndex - 1)
    b'  = b .&amp;. (2^prefixIndex - 1)
    low = 2^prefixIndex - 1
    hi  = 0
    lobits = prefixCount
    hibits = prefixCount + 1
</pre>
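Since the recursion is fiddly, a brute-force O(b-a) reference implementation is handy for sanity-checking on small ranges; this sketch of mine uses the popCount from Data.Bits and counts zero set bits as a square:

```haskell
import Data.Bits (popCount)
import Data.Word (Word64)

-- Brute-force reference: count the integers in [a,b] whose number
-- of set bits is a perfect square (including 0). Linear in b-a,
-- so only usable on small ranges, but good for testing.
bitsquareNaive :: Word64 -> Word64 -> Word64
bitsquareNaive a b = fromIntegral (length
  [ x | x <- [a .. b], popCount x `elem` [0,1,4,9,16,25,36,49,64] ])
```

For example, of the sixteen integers 0..15, exactly six (0, 1, 2, 4, 8, and 15) have a square number of set bits.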
ATMCS 5 in Edinburgh2012-03-22T07:20:00+01:00Michitag:blog.mikael.johanssons.org,2012-03-22:atmcs-5-in-edinburgh.html<div class="section" id="atmcs-5-algebra-and-topology-methods-computing-and-science">
<h2>ATMCS 5 - Algebra and Topology, Methods, Computing, and Science</h2>
<div class="section" id="second-announcement">
<h3>Second Announcement</h3>
<div class="line-block">
<div class="line">This meeting will take place in the period July 2-6, at the ICMS in
Edinburgh, Scotland. The theme will be applications of topological
methods in various domains. Invited speakers are</div>
</div>
<div class="line-block">
<div class="line">J.D. Boissonnat (INRIA Sophia Antipolis) (Confirmed)</div>
<div class="line-block">
<div class="line">R. Van de Weijgaert (Groningen) (Confirmed)</div>
<div class="line">N. Linial (Hebrew University, Jerusalem) (Confirmed)</div>
<div class="line">S. Weinberger (University of Chicago) (Confirmed)</div>
<div class="line">S. Smale (City University of Hong Kong) (Confirmed)</div>
<div class="line">H. Edelsbrunner (IST, Austria) (Confirmed)</div>
<div class="line">E. Goubault (Commissariat à l’énergie atomique, Paris) (Confirmed)</div>
<div class="line">S. Krishnan (University of Pennsylvania) (Confirmed)</div>
<div class="line">M. Kahle (The Ohio State University) (Confirmed)</div>
<div class="line">L. Guibas (Stanford University) (Confirmed)</div>
<div class="line">R. Macpherson (IAS Princeton) (Tentative)</div>
<div class="line">A. Szymczak (Colorado School of Mines) (Confirmed)</div>
<div class="line">P. Skraba/ M. Vejdemo-Johansson (Ljubljana/St. Andrews) (Confirmed)</div>
<div class="line">Y. Mileyko (Duke University) (Confirmed)</div>
<div class="line">D. Cohen (Louisiana State)</div>
<div class="line">V. de Silva (Confirmed)</div>
<div class="line">There will be opportunities for contributed talks. Titles and
abstracts should be sent to Gunnar Carlsson at
<a class="reference external" href="mailto:gunnar@math.stanford.edu">gunnar@math.stanford.edu</a>.</div>
</div>
</div>
<p>The conference website is located at
<a class="reference external" href="http://www.icms.org.uk/workshops/atmcs5">http://www.icms.org.uk/workshops/atmcs5</a>. Those interested should
register there as soon as possible so that we can obtain an idea of the
number of participants.</p>
<p>For the organizing committee,</p>
<p>Gunnar Carlsson</p>
</div>
</div>
BibLaTeX — why haven't I used this earlier!?2012-03-07T17:01:00+01:00Michitag:blog.mikael.johanssons.org,2012-03-07:biblatex-%e2%80%94-why-havent-i-used-this-earlier.html<p>As any reader of this (now rather occasional) blog might have guessed by
now, I do quite a lot of writing in LaTeX. It comes with the territory —
I do mathematics research, so I write in LaTeX. I do quite a bit of
research, so I spend a lot of time writing up my results.</p>
<p>And I care about the tools I use. I care deeply about the way my
citations come out, and I have significant aesthetic opinions on the
matter. I like author-date citation styles, much better than the
horrible abbreviated alphabet soups so popular in mathematics styles,
and much more pleasant to read than [1,2,5-7] as seems to be a dominant
style in mathematical literature. If I see (Zomorodian, 2005) or even
better (Edelsbrunner-Letscher-Zomorodian, 2000) I'll know immediately
what the reference is about when I read in my own field — whereas a
citation of [3] forces me to leaf back to check what they mean.</p>
<p>I have spent a long time either fiddling with natbib, or giving up
entirely, or poking around various styles, never quite feeling satisfied
with the solutions I ran into. Until today I finally decided to try
BibLaTeX on a document I started writing.</p>
<p>And now I wonder why it took me so long to get here.</p>
<p>BibLaTeX is amazing, IMO. It is also currently VERY poorly documented
in the blogosphere. The package manual exists, sure, and once you digest
it, it is actually helpful. But when it comes to more tutorial-oriented,
lightweight introductions, all I came up with was a very sketchy site
from Cambridge that had a few really artificial examples.</p>
<p>So here goes: my own introduction to BibLaTeX.</p>
<div class="section" id="my-first-biblatex-document">
<h2>My first BibLaTeX document</h2>
<p>To use BibLaTeX, you can stick with the same BibTeX database you already
carefully crafted. However, the way you interact with it in your
document changes.</p>
<div class="line-block">
<div class="line">In your preamble, you'll want something like</div>
<pre class="literal-block">
\usepackage[citestyle=authoryear]{biblatex}
\addbibresource{mybibfile.bib}
</pre>
</div>
<div class="line-block">
<div class="line">Then, where you would classically have had</div>
<pre class="literal-block">
\bibliographystyle{abbrv}
\bibliography{mybibfile}
</pre>
<div class="line">you instead use the command</div>
<pre class="literal-block">
\printbibliography
</pre>
</div>
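Putting those pieces together, a minimal complete document might look like this (mybibfile.bib and the citation key someKey are placeholders):

```latex
\documentclass{article}
\usepackage[citestyle=authoryear]{biblatex}
\addbibresource{mybibfile.bib}

\begin{document}
A citation: \cite{someKey}. In parentheses: \parencite{someKey}.
\printbibliography
\end{document}
```

Compile with latex, then biber, then latex again, as is usual for bibliography tooling.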
<p>So far, so good. The \cite command works as expected, and so far it
seems we have only made a few small changes to how and where we write
the bibliography-related components of our document. The real power
here comes when you start looking at the details of what BibLaTeX
provides, and how you can use it.</p>
<ol class="arabic">
<li><p class="first">All citation commands come in two versions. One of them will
capitalize the first letter if it isn't already capitalized, so that
you can use it at the beginning of a sentence. Thus, there is both
\cite{} and \Cite{}.</p>
</li>
<li><p class="first">Even more than that, there is a host of cite commands:</p>
<ul class="simple">
<li>\cite, \Cite: cite using the default settings of your current
citestyle.</li>
<li>\parencite, \Parencite: enclose citation in parenthesis (or
square brackets for numeric citation styles).</li>
<li>\footcite: put citation in a footnote.</li>
</ul>
<p>Some styles allow other interesting citation commands as well:</p>
<ul class="simple">
<li>\textcite, \Textcite: for use when you want to refer to a
citation in running text. Will print author names, and then a
citation slug in brackets.</li>
<li>\smartcite, \Smartcite: gives a footnote if used in the body of
the document, and a parencite if used in a footnote.</li>
<li>\cite*, \parencite*: only prints year (or title) and not
author names.</li>
</ul>
</li>
<li><p class="first">Speaking of styles, you'll set those by adjusting the options to the
biblatex package inclusion. citestyle=... influences how the inline
citations work, and has support for author-year, author-title, with
or without <em>ibid.</em> handling, with or without compactification for
many citations of the same author groups. Furthermore, there is a
nice and mature numeric citation style handling, with optional
compacting to [1-3,8] instead of [1,3,8,2], or expansion to
individual numbers with no sharing of brackets allowed. There are a
couple of different author-abbreviation styles: one that mimics the
[VeJ03] styles of the classic BibTeX styles, and one that compacts
the author slugs as much as possible. And there are styles
specifically for including full bibliography dumps at each citation
and for building reading lists.</p>
<p>Each such style you set will set up the styles for the bibliography at
the end on its own. You can override those choices by adding
bibstyle=... to the biblatex package options. These approximately
match the styles for citestyle=... with less fiddling with inline
appearances.</p>
<p>And finally, there is a lot of control over how the bibliography
entries are sorted, and over whether to include specific fields from
the BibTeX file, including doi and url.</p>
</li>
</ol>
<p>To pick an appropriate style, you really, really should refer to the
<a class="reference external" href="http://anorien.csc.warwick.ac.uk/mirrors/CTAN/macros/latex/contrib/biblatex/doc/biblatex.pdf">package
manual</a>.
Go straight (using the ToC hyperlinks) to 3.3 Standard Styles in the
Users Guide section, and the style options are cleanly and clearly
described for you.</p>
</div>
<div class="section" id="so-how-does-it-turn-out-then">
<h2>So, how does it turn out then?</h2>
<p>The results, let me show them to you.</p>
<div class="line-block">
<div class="line">This is all from an extended abstract I'm writing for a conference
this summer. I used</div>
<pre class="literal-block">
\usepackage[citestyle=authoryear-icomp,doi=true,url=true]{biblatex}
\addbibresource{gts2012.bib}
</pre>
<div class="line-block">
<div class="line">in the preamble, and both \cite and \parencite inline.</div>
</div>
</div>
<div class="line-block">
<div class="line">Inline citations came out like:</div>
<div class="line-block">
<div class="line"><img alt="biblatex-inline" src="https://img.skitch.com/20120307-pm74p6u6h3f1kr4bec9d4s4kq9.png" /></div>
</div>
</div>
<div class="line-block">
<div class="line">And the full reference lists came out like:</div>
<div class="line-block">
<div class="line"><img alt="biblatex-reflist" src="https://img.skitch.com/20120307-xj4tmfrtasjbi9a68p2yk8t1mm.png" /></div>
</div>
</div>
<p>I am noticing that depending on the quality of your database, there
might be redundancies in the DOI and URL-enabled references — but you
always have to do some cleanup of the references, and it can hardly be
blamed on the software at this stage.</p>
</div>
<div class="section" id="now-what">
<h2>Now what?</h2>
<p>Go and try it out yourself. Skim the <a class="reference external" href="http://anorien.csc.warwick.ac.uk/mirrors/CTAN/macros/latex/contrib/biblatex/doc/biblatex.pdf">package
manual</a>.
Start using it in a few documents yourself.</p>
<p>There are snags when you need to interface with memoir — the manual
talks about this. Essentially it boils down to biblatex replacing memoir
as far as bibliography styling goes. Other than that, so far, I am only
happy with discovering this piece of LaTeX authoring support.</p>
</div>
Collaborative tools2012-02-17T22:57:00+01:00Michitag:blog.mikael.johanssons.org,2012-02-17:collaborative-tools.html<p>I do quite a bit of collaboration. In fact, since after my PhD research,
I have written exactly one preprint that does NOT spring from a
collaboration. And there is quite a bit of technological support that
flows into a good collaboration of mine. Here are some of the tools I
use and some of the thoughts I have on them.</p>
<div class="section" id="version-control">
<h2>Version control</h2>
<p>Since I work in mathematics (and, arguably, in the fringes of Theory
CS), everything I write is written in LaTeX. This is for one thing
<em>very</em> helpful, since it means that the actual texts we collaborate on
are plain text with markup. Eminently suitable for the toolkit provided
by the software community.</p>
<p>As such, I make it a fundamental point to ALWAYS use version control
software with my writing projects. Regardless of whether I write for
just myself, or in a team with collaborators, everything is under version
control. If my collaborators do not yet know any such system, I teach
them <a class="reference external" href="http://mercurial.selenic.com">Mercurial</a>, my own personal
favourite. This has varied success — if my collaborators absolutely
refuse, I'll maintain the version control interactions myself, but I
have had some collaborations where my partner was enthusiastic enough to
subsequently teach everyone HE works with to do the same.</p>
<p>Lately, I have also started looking into <a class="reference external" href="http://git-scm.com/">Git</a> as
well — mainly because of the existence of
<a class="reference external" href="http://github.com">github</a>, but also because of systems such as
<a class="reference external" href="https://github.com/sitaramc/gitolite">Gitolite</a>. These put a
web-layer and an easy to use admin layer on top of the source code
repository, making it easy to create or modify workgroups and remove
almost all the hassle of administering a server yourself to facilitate
the collaboration with these tools.</p>
</div>
<div class="section" id="communication-in-larger-groups">
<h2>Communication in larger groups</h2>
<p>As long as a collaboration is with one or two collaborators, I usually
do well enough just typing their names into my email client any time I
want to say anything, and I definitely make sure to create a new mailbox
folder any time a clear project materializes. However, some of the
things I work on materialize in larger groups forming the generic
collaboration, with subgroups crystallizing for any particular part.</p>
<p>For these cases, I find a mailing list to be invaluable. Preferably with
archiving, so you can send out ideas to the list, and rely both on
getting feedback on the — often early and unrefined — ideas, and on a
searchable cache where you can go back and figure out what the ideas
were in the first place. Since I do a lot of my email maintenance on
Google anyway, the bigger projects I am in have gotten mailing lists
set up that way too — using the very pleasant Google Groups interface to
administer and handle the mailing lists.</p>
<p>Wikis, too, are a really nice and productive idea — I haven't done
anything with them myself for running collaborations, but my current
workgroup uses a wiki to discuss software design issues.</p>
<p>For quicker turn-around and lower formality than email, I have had very
good success with some of my collaborators chatting on Skype or on GTalk
about our problems. This requires everyone to be comfortable enough with
the text chat as a medium, but can be surprisingly productive.</p>
</div>
<div class="section" id="meetings-with-geographic-diversity">
<h2>Meetings with geographic diversity</h2>
<p>And then, every so often, you just have to meet up and talk.</p>
<p>Best, of course, is if you happen to get together geographically anyway
— so that you can occupy a seminar room and go wild on a blackboard,
hashing out any details you need to deal with. This strains everybody's
travel budgets, and eats time in masses, so it is not always feasible.</p>
<p>Next best thing is a video and voice chat, preferably with a whiteboard
or at least screen sharing functionality. Good thing is there are a few
of these around.</p>
<p><a class="reference external" href="http://skype.com">Skype</a> is a tool I've used a few times for this.
It has video, voice, and screen sharing in the later versions, and is
really easy to get up and running and to deal with. The biggest drawback
I see with Skype is that you'll have to pay to get video conferencing
and not just video chat, which is a genuine drawback as soon as people
are distributed on more than two sites. If there are only two places
involved, however, few things are as painless as kicking up a Skype
session.</p>
<p>The other tool I've used lately is Google+. A lot can be said about this
foray into the Social Web, but the part I want to highlight here is the
hangout feature. A G+ hangout is a video conference for up to 10
participants. Everyone (who chooses to) streams video, or their own
screen if they want to, and in the client you either roam to whoever
makes the most noise (hopefully the person speaking at the moment), or
you can lock your view to one of the feeds. This combined with tablet
software like Microsoft Journal, or even just a paint application is
enough to get a whiteboard up and running. I have yet to find a good
collaborative whiteboard (though Google Docs and their drawing app
probably does a good job), but as long as only one person needs to hold
the pen, this solution is marvelously smooth.</p>
<p>These are my collaboration tools. What are yours?</p>
</div>
A quick python hack for a mathematical puzzle2011-11-12T16:09:00+01:00Michitag:blog.mikael.johanssons.org,2011-11-12:a-quick-python-hack-for-a-mathematical-puzzle.html<p>So, today I saw this in my Twitter feed:</p>
<blockquote>
«Phil Harvey wants us to partition {1,…,16} into two sets of equal
size so each subset has the same sum, sum of squares and sum of
cubes.» -- posted by @MathsJam and retweeted @haggismaths.</blockquote>
<p>Sounds implausible was my first thought. My second thought was that there
aren't actually ALL that many of those: we can actually test this.</p>
<p>So, here's a short collection of Python lines to <em>do</em> that testing.</p>
<pre class="literal-block">
import itertools

allIdx = list(range(1, 17))
sets = itertools.combinations(allIdx, 8)
setpairs = [(list(s), [i for i in allIdx if i not in s]) for s in sets]

def f(s1, s2):
    return (sum(s1) - sum(s2),
            sum(n**2 for n in s1) - sum(n**2 for n in s2),
            sum(n**3 for n in s1) - sum(n**3 for n in s2))

goodsets = [ss for ss in setpairs if f(*ss) == (0, 0, 0)]
</pre>
<p>And back comes one single response (actually, two, because the
comparison is symmetric).</p>
<pre class="literal-block">
[([1, 4, 6, 7, 10, 11, 13, 16], [2, 3, 5, 8, 9, 12, 14, 15]),
 ([2, 3, 5, 8, 9, 12, 14, 15], [1, 4, 6, 7, 10, 11, 13, 16])]
</pre>
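<p>As a sanity check, the three power sums for this partition can be verified directly — a quick sketch in Python 3:</p>

```python
A = [1, 4, 6, 7, 10, 11, 13, 16]
B = [2, 3, 5, 8, 9, 12, 14, 15]

# The two halves should agree on sums, sums of squares, and sums of cubes
for k in (1, 2, 3):
    assert sum(n**k for n in A) == sum(n**k for n in B)

print([sum(n**k for n in A) for k in (1, 2, 3)])  # prints [68, 748, 9248]
```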
Some thoughts on fire photography2011-07-18T11:52:00+02:00Michitag:blog.mikael.johanssons.org,2011-07-18:some-thoughts-on-fire-photography.html<p>First off: I would like to apologize to the readers I still have that I
never get around to updating here. I am aware that I'm writing less often
than I was once planning to.</p>
<p>Aaaaanyway.</p>
<p>I have grown very interested in photography, as those of you who have
seen my <a class="reference external" href="http://3x365.blogspot.com">photo-a-day blog</a> will have
noticed. With the growing interest, I also notice a growing awareness of
photography and of pictures, and I start thinking about how I would do
things.</p>
<p>Such as this last weekend. There was a wedding. There was a wedding
celebration. At one point we ended up down at a beach, watching a fire
show. A friend of mine was running out of space on his memory card, so
he left his camera with me and ran to get the next card.</p>
<p>So I took some photos of the fire dancing (I haven't gotten access to
them myself just yet; they'll be posted on my photo blog when I get hold
of them). And I realized some things about how to photograph fire
dancers. I'll enumerate a few of my recent (though, I am convinced, not
original) insights here. Worth noticing is that they are pretty much all
mutually contradictory, emphasizing the <em>craft</em> aspect of
photography.</p>
<p><a class="reference external" href="http://www.flickr.com/photos/michiexile/5088198369"><img alt="Light juggling" src="http://farm5.static.flickr.com/4083/5088198369_d7797f21cd.jpg" /></a></p>
<p>1. <strong>Darkness matters</strong>. Large parts of the beauty of fire photography
comes from actually seeing the fire, even from the fire being the main
actor in the image. Hence, it helps if the fire light doesn't have to
compete with ambient light. Pick a nice location, wait until late enough
in the evening that ambient light is dim if not dark.</p>
<p>2. <strong>Long exposures</strong>. Conveying the motion of the flames in a fire
dance gets easier and all the more impressive if each individual flame
becomes a bright, vivid streak of light, tracing out the curve of the
dance. You get this with long exposures. The longer the better: time
really does translate pretty much directly to vivid visuals here.</p>
<p>3. <strong>Sharp facial features</strong>. A dancer dancing blurs. This is kind of
the point of item #2 above. Blurring the fire, and limbs, imbues a sense
of motion, a sense of action to the picture. However, blurring the face
removes the feeling of humanity more than anything else. Keeping the
dancer immobile, giving them sharp, recognizable features while still
moving their extremities and the fire, will make for a truly iconic fire
dancing picture.</p>
<div class="line-block">
<div class="line">4. <strong>Slow dances</strong>. Related to all of the above, a slow fire dance
will accomplish several things for us as photographers:</div>
<div class="line-block">
<div class="line">a. Dancing slowly means the fire moves slowly. If you've ever watched
a fire dancer, you'll notice that when the fire moves slowly, it burns
with a large, bright yellow flame, illuminating the area around it.
Instead, when moving fast, the fire flickers, burns in a dim hot blue,
and illuminates much less.</div>
<div class="line">b. Dancing slowly means the dancer moves less, helping to keep the
dancer sharp at the core.</div>
</div>
</div>
<p>After the performance this weekend, I talked to the performers, and I
might be given the chance to make a dedicated photo shoot with them
juggling fire. I am already making plans for the photo shoot: how to
stage it, what to do and how. Expose at about -1, maybe -2 EV against
the background, illuminating the juggler with their fire, but keeping
the background visible and interesting. Posing them — with the fire — on
a jetty, so that the fire dance is reflected in the water. And asking
them to try and keep their faces stationary, while twirling fire, and
exposing somewhere in the range of 1/2 to 3 seconds.</p>
<p>At least that's my current plan.</p>
Geometric realization of simplicial sets2011-03-01T21:27:00+01:00Michitag:blog.mikael.johanssons.org,2011-03-01:geometric-realization-of-simplicial-sets.html<p>This post is an expansion of all the details I did not have a good
feeling for when I started on page 7 of Goerss-Jardine, where the
geometric realization of simplicial sets is introduced.</p>
<p>The construction works by constructing a few helpful categories, and
using their properties along the way. Especially after unpacking the
categorical results G-J rely on, there are quite a few categories
floating around. I shall try to be very explicit about which category is
which, and how they work.</p>
<p>As we recall, simplicial sets are contravariant functors from the
category [tex]\mathbf{\Delta}[/tex] of ordinal numbers to the category
of sets. We introduce the <em>simplex category</em>
[tex]\mathbf{\Delta}\downarrow X[/tex] of a simplicial set
[tex]X[/tex] with objects (simplices) given by maps
[tex]\sigma:\Delta^n\to X[/tex] and a map from [tex]\sigma[/tex] to
[tex]\tau[/tex] being given by a map [tex]f[/tex] in
[tex]\mathbf{\Delta}[/tex] such that [tex]\sigma = \tau f[/tex].</p>
<p>Interpreting each simplicial simplex as a simplicial set, this can be
thought of as the subcategory of the <em>slice category</em> over [tex]X[/tex]
spanned by maps from the simplicial simplices. Any map between
simplicial simplices is determined completely by a unique ordinal number
map that induces it.</p>
<div class="section" id="lemma-2-1">
<h2>Lemma 2.1</h2>
<div class="line-block">
<div class="line">The proof of the isomorphism</div>
<div class="line-block">
<div class="line">[tex]X \equiv \colim_{\Delta^n\to X} \Delta^n[/tex]</div>
<div class="line">taken over all simplices in the simplex category of [tex]X[/tex]
proceeds using the theorem that every functor from a small category to
sets is a colimit of representable functors.</div>
</div>
</div>
<p>To unpack this statement, we turn to, say, Awodey, where this is
Proposition 8.10.</p>
<div class="line-block">
<div class="line"><strong>Proposition 8.10</strong> (Steve Awodey, Category Theory)</div>
<div class="line-block">
<div class="line">For any small category [tex]\mathbf{C}[/tex], every object P in the
functor category [tex]Sets^{\mathbf{C}^{op}}[/tex] is a colimit of
representable functors [tex]\colim_{j\in J} yA_j[/tex] using the
Yoneda embedding [tex]y:\mathbf{C}\to Sets^{\mathbf{C}^{op}}[/tex].</div>
</div>
</div>
<p>Specifically, we can choose an index category [tex]J[/tex] and a functor
[tex]A: J\to \mathbf{C}[/tex] such that [tex]P \equiv \colim_{J}
y\circ A[/tex].</p>
<div class="line-block">
<div class="line"><strong>Proof</strong></div>
<div class="line-block">
<div class="line">We start by introducing the <em>category of elements</em> or <em>Grothendieck
category</em> of P. This will be our index category for the colimit, and
is often written [tex]\int_\mathbf{C} P[/tex]. This category has
objects [tex](x\in PC,C)[/tex], and arrows [tex]h: (x, C)\to
(x',C')[/tex] given by arrows [tex]h[/tex] in [tex]\mathbf{C}[/tex]
such that [tex]P(h)x = x'[/tex].</div>
</div>
</div>
<p>This is almost like a category of pointed sets, only that we also care
about the action of P while we're at it. Since [tex]\mathbf{C}[/tex] is
a small category, so is [tex]\int_\mathbf{C} P[/tex], and there is a
projection functor [tex]\pi: \int_\mathbf{C} P \to
\mathbf{C}[/tex] given by forgetting about the point, so sending
[tex](x, C)\to C[/tex] and preserving the arrow forming a morphism of
elements.</p>
<div class="line-block">
<div class="line">We also recall the Yoneda embedding in order to progress nicely here:</div>
<div class="line-block">
<div class="line">[tex]yC = \hom_{\mathbf{C}}(-,C)[/tex]. The Yoneda lemma tells us
that for contravariant [tex]F[/tex], [tex]\hom(yC, F) = FC[/tex],
naturally in [tex]F[/tex] and [tex]C[/tex], and works by assigning to
[tex]x\in FC[/tex] the natural transformation [tex]\theta_x: yC\to
F[/tex] which has components [tex](\theta_x)_D: \hom(D,C) \to
FD[/tex]. These are defined by their actions on actual maps [tex]D\to
C[/tex] and we set [tex](\theta_x)_D(h) = F(h)(x)[/tex].</div>
</div>
</div>
<p>In other words, a contravariant functor to sets corresponds to natural
transformations from the Yoneda embedding to the functor itself. An
element in the functor image gives rise to a particular such natural
transformation by using the element as a <em>test case</em>, and using the
functor to make the test runnable.</p>
<p>Now, returning to the proof of 8.10, we need to build a colimit that
will work as our colimit presentation of P. We will do this by using
[tex]\pi[/tex] and [tex]y[/tex]. Indeed, for an object
[tex](x,C)\in\int_C P[/tex], by the Yoneda lemma this corresponds to
some natural transformation [tex]x: yC\to P[/tex]. These natural
transformations form a subcategory of the slice category of
[tex]Sets^{\mathbf{C}^{op}}[/tex] over P by the naturality of the
Yoneda construction.</p>
<p>So we can build a cocone [tex]y\pi\to P[/tex] by taking the map from
[tex]y\pi(x,C)\to P[/tex] to be the natural transformation
[tex]x:yC\to P[/tex]. This is a colimit because if we had some other
cocone [tex]y\pi\to Q[/tex] with components [tex]\theta_{(x,C)}:
yC\to Q[/tex], we can produce the unique natural transformation
[tex]P\to Q[/tex] by [tex]\theta_C: PC\to QC[/tex] defined by
[tex]\theta_C(x) = \theta_{(x,C)}[/tex] and recall that natural
transformations [tex]yC\to Q[/tex] are, by the Yoneda lemma, the same
as elements of [tex]QC[/tex].</p>
</div>
<div class="section" id="but-what-does-all-this-mean">
<h2>But what does all this mean!?</h2>
<p>This was probably all about as headache inducing as anything else to do
with Yoneda's lemma. It's a result that both in its proof and its
applications tends to climb the ladder of nested categorical constructs
pretty high.</p>
<p>So let's get back to the simplicial case, and tease out what this
implies for our reading. [tex]X[/tex] is a simplicial set, hence a
contravariant set-valued functor on [tex]\mathbf{\Delta}[/tex]. So
[tex]\mathbf{\Delta}[/tex] is our small category.</p>
<p>The category of elements, [tex]\int_{\mathbf{\Delta}} X[/tex] is the
category where objects are [tex](s, n)[/tex] for [tex]s\in X_n[/tex].
Morphisms track what happens to [tex]s[/tex] under faces and
degeneracies. Thus, specifically, by the arguments in example 1.7, we
can put [tex]\int_{\mathbf{\Delta}} X[/tex] in bijective
correspondence with the collection of all simplicial n-simplices of X,
in the sense that we associate to each [tex](s, n)[/tex] the simplicial
map [tex]\iota_s: \Delta^n\to X[/tex] as defined on page 6.</p>
<p>Thus, the index category is clear. The category of elements for a
simplicial set really is “just” the category of simplices
[tex]\iota_s[/tex]. The representable functors are the [tex]yC[/tex]
for [tex](x,C)\in\int_{\mathbf{\Delta}} X[/tex], and so are all on
the shape [tex]\hom_{\mathbf{\Delta}}(-,n)[/tex]. But these are just
the collections of standard simplicial n-simplices.</p>
<p>So our simplicial set is the colimit of simplicial n-simplices under the
conditions that they obey the face and degeneracy maps in the original
simplicial set.</p>
<p>Let's work this all through on an example. Consider the interval I,
given by non-degenerate simplices a, b, x, subject to [tex]d_0x =
a[/tex] and [tex]d_1x = b[/tex]. Maps from the simplicial n-simplex to
I can hit one of these three simplices, or any degeneracy of these.</p>
<div class="line-block">
<div class="line">There are two maps [tex]\Delta^0\to I[/tex], namely
[tex]\iota_0\mapsto a[/tex] and [tex]\iota_0\mapsto b[/tex].
Thus, we start the construction of our colimit by introducing cells
[tex]e^0_a[/tex] and [tex]e^0_b[/tex]. Any degeneracies are absorbed
by the fact that a simplicial simplex is a simplicial map, hence
[tex]e^0_a: s_0\iota_0\mapsto s_0a[/tex] and so on.</div>
<div class="line-block">
<div class="line">There are three maps [tex]\Delta^1\to I[/tex], namely:</div>
<div class="line">[tex]\iota_1\mapsto s_0a[/tex]</div>
<div class="line">[tex]\iota_1\mapsto s_0b[/tex] and</div>
<div class="line">[tex]\iota_1\mapsto x[/tex]</div>
<div class="line">These three choices all come with boundary maps that have to be taken
into account in the colimit. Specifically, the two first maps here,
the ones that hit degenerate simplices, factor through
[tex]e^0_a[/tex] and [tex]e^0_b[/tex] respectively. There is a map
[tex]\Delta^1\to\Delta^0[/tex] (in the category
[tex]\mathbf{\Delta}[/tex]) given by [tex](0,1)\mapsto (0,0) =
s^0\iota_0[/tex]. This map corresponds to the map
[tex]s_0:\Delta^0\to\Delta^1[/tex] (as a map of simplicial
simplices), which embeds the simplex [tex]e^0_a[/tex] into
[tex]\Delta^1\to I[/tex]. Thus, this embedding along degeneracies is
a map in the index category of the colimit, and therefore leads to an
identification of any simplex with all its degeneracies in the
colimit. The non-degenerate case gives us a simplex [tex]e^1_x[/tex].</div>
<div class="line">Similarly, the colimit forces the identifications [tex]d_0e^1_x =
e^0_a[/tex] and [tex]d_1e^1_x = e^0_b[/tex], finishing the
description of [tex]X[/tex] as a colimit.</div>
</div>
</div>
<p>Now, we have a nice candidate for the realization of a single simplicial
n-simplex: the topological n-simplex defined in Example 1.1. This is
used to define the geometric realization of a simplicial set through
“simply” forcing the realization functor to be compatible with this
colimit structure. Thus, we define the geometric realization of a
simplicial set to be the colimit of the topological n-simplices over the
same index category that we used to display [tex]X[/tex] as a colimit.</p>
</div>
<div class="section" id="well-what-about-adjointness">
<h2>Well, what about adjointness?</h2>
<div class="line-block">
<div class="line">The first <em>really</em> nice property about realization is that it is left
adjoint to the singular functor. Proving this is a sequence in
abstract symbol manipulation and remembering what everything at each
step of the way actually means. Thus, say we have a simplicial set X
and a topological space Y. Adjointness means there is an isomorphism</div>
<div class="line-block">
<div class="line">[tex]\hom_{\mathbf{Top}}(|X|, Y) \equiv
\hom_{\mathbf{sSet}}(X, SY)[/tex]</div>
<div class="line">which is natural in both X and Y.</div>
</div>
</div>
<div class="line-block">
<div class="line">Now, let's consider the left hand side.</div>
<div class="line-block">
<div class="line">[tex]\hom_{\mathbf{Top}}(|X|, Y) \equiv
\hom_{\mathbf{Top}}(\colim_{\Delta^n\to X}|\Delta^n|,
Y)[/tex]</div>
<div class="line">by definition of the geometric realization.</div>
<div class="line">[tex]\hom_{\mathbf{Top}}(\colim_{\Delta^n\to X}|\Delta^n|,
Y) \equiv \lim_{\Delta^n\to
X}\hom_{\mathbf{Top}}(|\Delta^n|, Y)[/tex]</div>
<div class="line">because contravariant representable functors map colimits to limits.
This is Awodey 5.29, and sits very well with our intuition from sets,
vector spaces, and just about anywhere else.</div>
<div class="line">Now, remember that a map in
[tex]\hom_{\mathbf{Top}}(|\Delta^n|, Y)[/tex] is a simplex in
the singular set [tex]SY[/tex], and that as we saw above, a simplicial
set is the colimit of its simplices. Thus, we get</div>
<div class="line">[tex]\lim_{\Delta^n\to X}\hom_{\mathbf{Top}}(|\Delta^n|,
Y)\equiv\lim_{\Delta^n\to X}\hom_{\mathbf{sSet}}(\Delta^n,
SY)[/tex]</div>
<div class="line">and then finally, we can re-introduce the colimit inside the morphism
set</div>
<div class="line">[tex]\lim_{\Delta^n\to X}\hom_{\mathbf{sSet}}(\Delta^n, SY) =
\hom_{\mathbf{sSet}}(\colim_{\Delta^n\to X}\Delta^n, SY)[/tex]</div>
<div class="line">[tex] = \hom_{\mathbf{sSet}}(X,SY)[/tex]</div>
</div>
</div>
<p>Each step along the way is natural, which finishes the proof.</p>
</div>
The Topology of Politics2011-01-04T20:09:00+01:00Michitag:blog.mikael.johanssons.org,2011-01-04:the-topology-of-politics.html<p>This is a typed up copy of my lecture notes from the seminar at
Linköping, 2010-08-25. This is not a perfect copy of what was said at
the seminar, rather a starting point from which the talk grew.</p>
<p>In my workgroup at Stanford, we focus on <em>topological data analysis</em> —
trying to use topological tools to understand, classify and predict
data.</p>
<p>Topology is appropriate for qualitative rather than quantitative
properties, since it deals with <em>closeness</em> and not <em>distance</em>; this
also makes such approaches appropriate where distances exist, but are
ill-motivated.</p>
<p>These approaches have already been used successfully, for analyzing</p>
<ul class="simple">
<li>physiological properties in Diabetes patients</li>
<li>neural firing patterns in the visual cortex of Macaques</li>
<li>dense regions in [tex]\mathbb{R}^9[/tex] of 3x3 pixel patches from
natural (b/w) images</li>
<li>screening for CO<sub>2</sub> adsorptive materials</li>
</ul>
<p>In a project joint with Gunnar Carlsson (Stanford), Anders Sandberg
(Oxford) (and more collaborators), we act on the belief that political
data can be amenable to topological analyses.</p>
<div class="section" id="st-what-do-we-mean-by-political-data">
<h2>1st: What do we mean by political data?</h2>
<p>We currently look at 3 types of data:</p>
<ol class="arabic">
<li><dl class="first docutils">
<dt>Vote matrices from parliament:</dt>
<dd><p class="first last">each column is a member of parliament,
each row is a rollcall, and
we codify votes numerically: +1/-1 for Yea/Nay</p>
</dd>
</dl>
<p>And then we can do data analysis either on the set of members of
parliament in the space of rollcalls, or on the set of rollcalls in
the space of members of parliament.</p>
</li>
<li><p class="first">Co-sponsorship graphs</p>
<p>Nodes are members of parliament.</p>
<p>A directed edge goes from each co-sponsor to the main sponsor of a
particular bill, for all bills.</p>
<p>For Swedish politics, parties end up being strongly connected
internally, with coalition mediators appearing in the data.</p>
</li>
<li><p class="first">Shared N-gram graphs</p>
<p>Some turns of phrase, some talking points, get introduced and then
re-used by other members of parliament.</p>
<p>We believe that an analysis of N-grams of parliamentary speeches may
well give a data analyst tools to capture memetic drift and spread
within parliament.</p>
</li>
</ol>
<p>The project is still at an early stage, and the great challenge for us
right now is to find value to add — in political science, the grand
entrance of classical data analysis was a few decades ago, and much of
what can be said about politics with classical data analysis tools has
already been said.</p>
</div>
<div class="section" id="nd-what-has-already-been-done">
<h2>2nd: What has already been done?</h2>
<p>Political science discovered data analysis in the 90s, and a flurry of
political science data analysis papers followed.</p>
<div class="line-block">
<div class="line">Thus, while illustrative, doing things like a PCA on political vote
data is not quite novel.</div>
<div class="line-block">
<div class="line"><img alt="PCA of British MPs in the space of rollcalls" src="http://blog.mikael.johanssons.org/wp-content/quiz3.png" /><img alt="US congress rollcalls in the space of representatives" src="http://blog.mikael.johanssons.org/wp-content/quizplot2.png" /></div>
</div>
</div>
<div class="line-block">
<div class="line">Doing these PCAs, though, shows us, for instance, that the US house &
senate are essentially linear,</div>
<div class="line-block">
<div class="line"><img alt="image2" src="http://blog.mikael.johanssons.org/wp-content/uploads/2011/01/congressMembers2009.png" /></div>
<div class="line">and that the votes point cloud sits on most of the boundary of a unit
square (above right): at least one party backs every bill that comes
to a vote, and the main difference between bills is in how many of the
other party join in too.</div>
</div>
</div>
<div class="line-block">
<div class="line">Already in this kind of analysis, we notice a difference over time —
the party line is a much stronger factor in 2009 than it was in 1990:</div>
<div class="line-block">
<div class="line"><a class="reference external" href="http://blog.mikael.johanssons.org/wp-content/uploads/2011/01/congressVotes1990.png"><img alt="image3" src="http://blog.mikael.johanssons.org/wp-content/uploads/2011/01/congressVotes1990.png" /></a><a class="reference external" href="http://blog.mikael.johanssons.org/wp-content/uploads/2011/01/congressVotes2009.png"><img alt="image4" src="http://blog.mikael.johanssons.org/wp-content/uploads/2011/01/congressVotes2009.png" /></a></div>
</div>
</div>
<div class="line-block">
<div class="line">We can perform similar analyses for the UK:</div>
<div class="line-block">
<div class="line"><img alt="PCA of British MPs in the space of rollcalls" src="http://blog.mikael.johanssons.org/wp-content/quiz3.png" /></div>
<div class="line">Here we may notice that the regional parties are pretty much included
with the ideologically closest major party. This is a phenomenon we're
going to revisit later on.</div>
</div>
</div>
<div class="line-block">
<div class="line">With Swedish vote data, we've been able to locate the blocs as
essentially grouped under the first two coordinates of a PCA, with
lower coordinates seemingly issues-oriented — it's worth noticing
component 5, which separates out C and MP from the other parties, and
thus seems to be the axis of environmentalism in Swedish politics.</div>
<div class="line-block">
<div class="line"><a class="reference external" href="http://blog.mikael.johanssons.org/wp-content/uploads/2011/01/subcomponentsABC.png"><img alt="image5" src="http://blog.mikael.johanssons.org/wp-content/uploads/2011/01/subcomponentsABC.png" /></a></div>
<div class="line">image courtesy of Anders Sandberg</div>
</div>
</div>
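<p>For the curious, the mechanics of such a vote-matrix PCA fit in a few lines. The sketch below uses numpy on a hypothetical toy parliament of two perfectly party-line blocs; it is illustrative only, and is not the congressional or Swedish data shown above.</p>

```python
import numpy as np

# Toy vote matrix: rows are members of parliament, columns are rollcalls,
# entries +1 for Yea and -1 for Nay. Hypothetical data: a two-party
# parliament voting on strict party lines, for illustration only.
left = np.tile([+1.0] * 12 + [-1.0] * 8, (10, 1))  # 10 members, 20 rollcalls
votes = np.vstack([left, -left])                   # the opposing party

# PCA via the SVD of the column-centered matrix
centered = votes - votes.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
coords = U[:, 0] * S[0]  # members projected on the first principal component

# With party-line voting, the first component separates the two parties
assert (coords[:10] > 0).all() != (coords[10:] > 0).all()
```

On real rollcall data the picture is of course noisier; the point here is only the mechanics of centering the matrix and projecting onto principal components.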
</div>
<div class="section" id="rd-how-do-we-bring-in-topology">
<h2>3rd: How do we bring in topology?</h2>
<p>There are several techniques developed at Stanford for topological data
analysis. While my own research has been centered around <em>persistent
homology</em>, the one I want to present here is different.</p>
<p>Mapper was developed by Gurjeet Singh, Facundo Memoli and Gunnar
Carlsson. It builds on combining <em>nerves</em> with <em>parameter spaces</em>.</p>
<div class="line-block">
<div class="line"><strong>Definition:</strong></div>
<div class="line-block">
<div class="line">Suppose [tex]\{U_i\}_{i\in I}[/tex] is a family of open sets.
Then the <em>nerve</em> [tex]\mathcal{N}(\{U_i\})[/tex] is a simplicial
complex with vertices the index set I, and a k-simplex
[tex]\sigma=(i_0,\dots,i_k)[/tex] if [tex]\bigcap_{i\in\sigma}
U_i \neq\emptyset[/tex].</div>
</div>
</div>
<div class="line-block">
<div class="line"><strong>Lemma:</strong> [Nerve Lemma]</div>
<div class="line-block">
<div class="line">If [tex]\{U_i\}[/tex] covers a paracompact space X, and all finite
intersections are contractible, then X is homotopy equivalent to the
nerve of the covering.</div>
</div>
</div>
<p>Now, if X is a topological space with a coordinate function
[tex]X\xrightarrow{f}Z[/tex] and Z is (paracompact and) covered by
some family of [tex]U_i[/tex], then the collection of preimages covers
X.</p>
<p>So, by subdividing each preimage of a set in the covering of Z into its
connected components, we get a covering of X subdividing the cover
induced from Z.</p>
<p>Thus, if f is sufficiently well-behaved and the covering is fine enough,
the nerve of these subdivided preimages is homotopy equivalent to X, and
we have found a triangulation (up to homotopy) of X.</p>
<div class="line-block">
<div class="line">This particular line of reasoning can be translated into a
statistical/data analytic setting. The main difference is that by
virtue of persistent topology, connected components correspond to
clusters, and so we may start with a point cloud, and then pick out
preimages of the cover of our target space, cluster these, and let
each cluster correspond to a point, and then introduce simplices
according to the intersections.</div>
<div class="line-block">
<div class="line">The process is illustrated in a <a class="reference external" href="/wp-content/uploads/2011/01/animation.pdf">PDF “animation”
here</a>.</div>
</div>
</div>
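<p>The pull-back-and-cluster process just described is easy to prototype. Below is a minimal toy sketch of the Mapper idea in Python (not the actual Mapper software): a point cloud on a circle, the x-coordinate as filter, an overlapping interval cover, and single-linkage clustering at a fixed scale standing in for connected components. All parameter choices (60 points, the three intervals, the scale 0.4) are illustrative assumptions.</p>

```python
import numpy as np
from itertools import combinations

# Toy point cloud: 60 points on the unit circle; filter = x-coordinate
theta = np.linspace(0, 2 * np.pi, 60, endpoint=False)
points = np.c_[np.cos(theta), np.sin(theta)]
f = points[:, 0]

# Overlapping interval cover of the filter range
intervals = [(-1.2, -0.2), (-0.5, 0.5), (0.2, 1.2)]

def clusters(idx, eps=0.4):
    """Single-linkage clusters of points[idx] at scale eps (union-find),
    standing in for connected components of the preimage."""
    idx = list(idx)
    parent = {i: i for i in idx}
    def find(i):
        while parent[i] != i:
            i = parent[i]
        return i
    for i, j in combinations(idx, 2):
        if np.linalg.norm(points[i] - points[j]) < eps:
            parent[find(i)] = find(j)
    groups = {}
    for i in idx:
        groups.setdefault(find(i), set()).add(i)
    return list(groups.values())

# One vertex of the nerve per cluster per cover set
vertices = []
for lo, hi in intervals:
    preimage = np.where((f >= lo) & (f <= hi))[0]
    vertices.extend(clusters(preimage))

# An edge of the nerve whenever two clusters share points
edges = [(a, b) for a, b in combinations(range(len(vertices)), 2)
         if vertices[a] & vertices[b]]

print(len(vertices), len(edges))  # prints: 4 4
```

The output is a graph with four vertices and four edges, a four-cycle: the construction recovers the circle up to homotopy, exactly as the Nerve Lemma promises.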
<div class="line-block">
<div class="line">This technique has been used already to recover the difference between
diabetes type I and II as lobes in the data, apparent in blue here:</div>
<div class="line-block">
<div class="line"><a class="reference external" href="http://blog.mikael.johanssons.org/wp-content/uploads/2011/01/diabetestwo.png"><img alt="image6" src="http://blog.mikael.johanssons.org/wp-content/uploads/2011/01/diabetestwo.png" /></a></div>
</div>
</div>
<p>We hope to be able to use these techniques to better understand the way
parliaments are structured internally.</p>
</div>
Species, derivatives of types and Gröbner bases for operads2010-12-18T02:05:00+01:00Michitag:blog.mikael.johanssons.org,2010-12-18:species-derivatives-of-types-and-grobner-bases-for-operads.html<p>This is a typed-up copy of my lecture notes from the combinatorics
seminar at KTH, 2010-09-01. It is not a perfect transcript of what was said
at the seminar, but rather the starting point from which the talk grew.</p>
<div class="line-block">
<div class="line">In some places, I've tried to fill in the sketchiest and
least articulated points with some semblance of what I ended up actually
saying.</div>
<div class="line"><br /></div>
</div>
<p>Combinatorial species started out as a theory for enumerative
combinatorics, providing a toolset and calculus for formal power
series (see Bergeron-Labelle-Leroux and Joyal).</p>
<p>As it turns out, not only are species useful for manipulating generating
functions, but they achieve this with a categorical approach that may be
transplanted into other areas.</p>
<p>For the benefit of the entire audience, I shall introduce some
definitions.</p>
<p><strong>Definition</strong>: A <em>category</em> C is a collection of <em>objects</em> and <em>arrows</em>
with each arrow assigned a <em>source</em> and <em>target</em> object, such that</p>
<ol class="arabic simple">
<li>Each object has its own <em>identity arrow</em> 1.</li>
<li>Chains of arrows are associatively <em>composable</em>, with 1 the identity
of this composition.</li>
</ol>
<p><strong>Examples</strong>: Sets, Finite sets, k-Vector spaces, left and right
R-Modules, Graphs, Groups, Abelian groups.</p>
<p>[tex]\mathbb B[/tex]: finite sets with only bijections as morphisms.</p>
<p><strong>Examples</strong>:</p>
<ul class="simple">
<li>Category of a monoid.</li>
<li>Category generated by a graph</li>
<li>Category of a group (groupoid version of the category of a monoid)</li>
<li>Category of a poset</li>
<li>Category of Haskell types and functions.</li>
</ul>
<p><strong>Definition</strong>: A <em>functor</em> F is a map of categories, in other words, a
pair of maps Fo on objects and Fa on arrows, such that the identity
arrow maps to identity arrows and compositions of maps map to
compositions of their images.</p>
<div class="line-block">
<div class="line"><strong>Examples</strong>:</div>
<div class="line-block">
<div class="line">Constant functor: sends every object to a fixed object A, every arrow to the identity on A.</div>
</div>
</div>
<p>Identity functor: sends objects and arrows to themselves.</p>
<p>Underlying set functor, free monoid/vector space/module/... functors.</p>
<div class="line-block">
<div class="line">We are now equipped to define Species:</div>
<div class="line-block">
<div class="line"><strong>Definition</strong>: A <em>species of structures</em> is a functor [tex]\mathbb
B\to\mathbb B[/tex].</div>
</div>
</div>
<p>The idea is that a set gets mapped to the set of all structures labelled by
the elements in the original set. A bijection on labels maps to the
<em>transport of structures</em> along the relabeling.</p>
<div class="line-block">
<div class="line"><strong>Examples</strong>:</div>
<div class="line-block">
<div class="line">[tex]A[/tex]: rooted trees, labels on vertices</div>
<div class="line">[tex]G[/tex]: simple graphs, labels on vertices</div>
<div class="line">[tex]Gc[/tex]: connected simple graphs, labels on vertices.</div>
<div class="line">[tex]a[/tex]: trees, labels on vertices.</div>
<div class="line">[tex]D[/tex]: directed graphs, labels on vertices</div>
<div class="line">[tex]\wp[/tex]: subsets, [tex]\wp[U] = \{S: S\subseteq
U\}[/tex].</div>
<div class="line">[tex]End[/tex]: endofunctions</div>
<div class="line">[tex]Inv[/tex]: involutions</div>
<div class="line">[tex]S[/tex]: permutations</div>
<div class="line">[tex]C[/tex]: cycles</div>
<div class="line">[tex]L[/tex]: linear orders</div>
<div class="line">[tex]E[/tex]: sets: [tex]E[U] = \{U\}[/tex]</div>
<div class="line">[tex]e[/tex]: elements: [tex]e[U] = U[/tex]</div>
<div class="line">[tex]X[/tex]: singletons: [tex]X[U] = U[/tex] if [tex]|U|=1[/tex]
and [tex]X[U] = \emptyset[/tex] otherwise</div>
<div class="line">[tex]1[/tex]: empty set: [tex]1[\emptyset]=\{\emptyset\}[/tex],
[tex]1[U] = \emptyset[/tex] otherwise</div>
<div class="line">[tex]0[/tex]: empty species: [tex]0[U] = \emptyset[/tex].</div>
</div>
</div>
<p>In enumerative combinatorics, the power of species resides in the
association to each species of a number of generating functions:</p>
<div class="line-block">
<div class="line">The generating series of a species F is the exponential formal power
series</div>
<div class="line-block">
<div class="line">[tex]F(x) = \sum_{n=0}^\infty |F[n]|\frac{x^n}{n!}[/tex]</div>
<div class="line">where we use the convention [tex][n] = \{1,2,\dots,n\}[/tex], and
[tex]F[n] = F[[n]][/tex].</div>
<div class="line">Thus:</div>
<div class="line">[tex]L(x) = \frac{1}{1-x}[/tex]</div>
<div class="line">[tex]S(x) = \frac{1}{1-x}[/tex]</div>
<div class="line">[tex]E(x) = e^x[/tex]</div>
<div class="line">[tex]e(x) = xe^x[/tex]</div>
<div class="line">[tex]\wp(x) = e^{2x}[/tex]</div>
<div class="line">[tex]X(x) = x[/tex]</div>
<div class="line">[tex]1(x) = 1[/tex]</div>
<div class="line">[tex]0(x) = 0[/tex]</div>
</div>
</div>
<p>There are a number of other generating series in use in combinatorics,
but for my purposes, this is the one I'll focus on.</p>
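<p>For small n these coefficients can be checked by brute force. A quick Haskell sketch (function names are mine): linear orders on n labels are exactly the permutations, so [tex]|L[n]| = n![/tex], and subsets give [tex]|\wp[n]| = 2^n[/tex], matching the coefficients of [tex]1/(1-x)[/tex] and [tex]e^{2x}[/tex] read as exponential series.</p>

```haskell
import Data.List (permutations, subsequences)

-- |L[n]|: linear orders on n labels are permutations, n! of them.
countLinearOrders :: Int -> Int
countLinearOrders n = length (permutations [1 .. n])

-- |p[n]|: subsets of an n-element label set, 2^n of them.
countSubsets :: Int -> Int
countSubsets n = length (subsequences [1 .. n])
```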
<div class="section" id="operations-on-species">
<h2>Operations on species</h2>
<p>The real power, though, emerges when we start combining species, and
carry over the combinations to actions on the corresponding power
series.</p>
<div class="section" id="addition">
<h3>Addition</h3>
<p>The number of ways to form either, say, a graph or a linear order on a
set of labels is the sum of the numbers of ways to form either in
isolation.</p>
<div class="line-block">
<div class="line">This corresponds cleanly to the coproduct in Set, and we may write F+G
for the species</div>
<div class="line-block">
<div class="line">[tex](F+G)[U] = F[U] + G[U][/tex]</div>
<div class="line">(where the second + is the coproduct — i.e. disjoint union — of sets)</div>
</div>
</div>
<p>In the power series, we get [tex](F+G)(x) = F(x) + G(x)[/tex].</p>
<div class="line-block">
<div class="line">Examples: We may define the species of all non-empty sets
[tex]E_+[/tex] by</div>
<div class="line-block">
<div class="line">[tex]E = 1 + E_+[/tex]</div>
<div class="line">This kind of functional equation is where the theory of species
starts to really <em>shine</em>.</div>
</div>
</div>
</div>
<div class="section" id="multiplication">
<h3>Multiplication</h3>
<div class="line-block">
<div class="line">A tricoloring of a set is a subdivision of the set into three disjoint
subsets covering the original set. The number of tricolorings of size
n is</div>
<div class="line-block">
<div class="line">[tex]\sum_{i+j+k=n} \#\text{sets of size i}\cdot\#\text{sets
of size j}\cdot\#\text{sets of size k}[/tex]</div>
</div>
</div>
<div class="line-block">
<div class="line">A permutation fixes some set of points. The permutation restricted to
the non-fixed points is a <em>derangement</em>. Total number of permutations
on n elements is</div>
<div class="line-block">
<div class="line">[tex]\sum_{i+j=n}\#\text{sets of size
i}\cdot\#\text{derangements of size j}[/tex]</div>
</div>
</div>
<div class="line-block">
<div class="line">In both of these cases, the total generating series is a product of
the component series, and we end up defining</div>
<div class="line-block">
<div class="line">[tex]F\cdot G[U] = \{(f,g): f\in F[U_1], g\in G[U_2], U_1\cap
U_2 = \emptyset, U_1\cup U_2 = U\}[/tex]</div>
<div class="line">So [tex]F\cdot G[U] = \sum_{U_1,U_2\text{ decompose }U}
F[U_1]\times G[U_2][/tex].</div>
</div>
</div>
<p>Thus, tricolorings are [tex]E\cdot E\cdot E[/tex] and permutations are
[tex]S = E\cdot Der[/tex], where the set is that of fixed points, and
the derangement captures the actual action of the permutation.</p>
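<p>The equation [tex]S = E\cdot Der[/tex] can be run backwards as a computation: multiplication of exponential series is binomial convolution, so [tex]n! = \sum_k \binom{n}{k}d_{n-k}[/tex], and we can solve for the derangement numbers [tex]d_n[/tex]. A small Haskell sketch (names mine):</p>

```haskell
factorial :: Integer -> Integer
factorial n = product [1 .. n]

choose :: Integer -> Integer -> Integer
choose n k = factorial n `div` (factorial k * factorial (n - k))

-- From S = E·Der: n! = sum over k of C(n,k) * d_{n-k}, where k counts
-- the fixed points of the permutation; solve this for d_n.
derangements :: Integer -> Integer
derangements 0 = 1
derangements n =
  factorial n - sum [ choose n k * derangements (n - k) | k <- [1 .. n] ]
```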
</div>
<div class="section" id="composition">
<h3>Composition</h3>
<p>Endofunctions of sets decompose, in their action on the points, into cycles
together with directed trees leading into these cycles.</p>
<p>Since a collection of disjoint cycles corresponds to a permutation, we
can consider such endofunctions to be permutations decorated with rooted
trees attached to points of the permutations; or even permutations of
rooted trees.</p>
<p>To form such a structure on a set U, we'd first partition U into
subsets, put the structure of a rooted tree on each subset, and then the
structure of a permutation on the set of these subsets.</p>
<div class="line-block">
<div class="line">Thus, the number of such structures on n elements is</div>
<div class="line-block">
<div class="line">[tex]\sum_{r\leq n}\sum_{i_1+\dots+i_r = n}
\#\text{permutations on }[r]\cdot\prod_{k=1}^r\#\text{rooted
trees on }[i_k][/tex]</div>
</div>
</div>
<p>This corresponds to the power series [tex]S(A(x))[/tex], and we write,
in general, [tex]F\circ G[U][/tex] for the species of F-structures of
G-structures on subsets.</p>
<div class="line-block">
<div class="line"><strong>Examples</strong>:</div>
<div class="line-block">
<div class="line">A = X·E(A)</div>
<div class="line">L = 1 + X·L</div>
<div class="line">B = 1 + X·B·B</div>
</div>
</div>
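<p>The description of endofunctions as permutations of rooted trees can be checked numerically: composing the exponential series of S with that of A (rooted trees, of which there are [tex]n^{n-1}[/tex] on n labels by Cayley's formula) should produce the [tex]n^n[/tex] endofunctions. A Haskell sketch with truncated power series (the representation and all names are mine):</p>

```haskell
import Data.Ratio (numerator)

nMax :: Int
nMax = 6

fact :: Int -> Rational
fact n = fromIntegral (product [1 .. n])

-- Truncated product of power series given by ordinary coefficients.
mul :: [Rational] -> [Rational] -> [Rational]
mul f g = [ sum [ f !! i * g !! (n - i) | i <- [0 .. n] ] | n <- [0 .. nMax] ]

-- f(g(x)) by Horner's rule; valid here since g has zero constant term.
compose :: [Rational] -> [Rational] -> [Rational]
compose f g = foldr step (replicate (nMax + 1) 0) f
  where
    step c acc = case mul g acc of
      (a : as) -> (a + c) : as
      []       -> [c]

-- Ordinary coefficients of the EGF with counting sequence cnt.
egf :: (Int -> Integer) -> [Rational]
egf cnt = [ fromIntegral (cnt n) / fact n | n <- [0 .. nMax] ]

rootedTrees, perms :: Int -> Integer
rootedTrees n = if n == 0 then 0 else fromIntegral n ^ (n - 1)
perms n = product [1 .. fromIntegral n]

-- Recover counts as n! times the nth coefficient of S(A(x)).
endofunctionCounts :: [Integer]
endofunctionCounts =
  [ numerator (c * fact n)
  | (n, c) <- zip [0 ..] (compose (egf perms) (egf rootedTrees)) ]
```

<p>The resulting counts are 1, 1, 4, 27, 256, ..., i.e. [tex]n^n[/tex], as they should be.</p>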
</div>
<div class="section" id="pointing">
<h3>Pointing</h3>
<p>Picking out a single point in a structure on n points can be done in
precisely n ways.</p>
<div class="line-block">
<div class="line">Thus the corresponding generating function will be</div>
<div class="line-block">
<div class="line">[tex]\sum n\cdot f_n\cdot\frac{x^n}{n!}[/tex]</div>
<div class="line">for f-structures with a single label distinguished.</div>
</div>
</div>
<div class="line-block">
<div class="line">Since we're working with exponential power series, we may notice that</div>
<div class="line-block">
<div class="line">[tex]\frac{\partial}{\partial x} \sum f_n\frac{x^n}{n!} = \sum
f_{n+1}\frac{x^n}{n!}[/tex]</div>
<div class="line">and thus that derivatives are shifts.</div>
<div class="line">Furthermore, [tex]x\cdot\sum f_n\frac{x^n}{n!} = \sum
f_n\frac{x^{n+1}}{n!} = \sum f_{n-1}\cdot n\cdot\frac{x^n}{n!}[/tex]</div>
<div class="line">so that the generating function for F-structures with a single
distinguished label is</div>
<div class="line">[tex]F^\bullet(x) = x\cdot\frac{\partial}{\partial x}F(x)[/tex]</div>
</div>
</div>
<p>In species, this process is called pointing.</p>
<p>In functional programming, Conor McBride related this construction to
Huet's Zipper datatypes.</p>
<p>As it turns out, many of the constructions for species make eminent
sense outside the category [tex]\mathbb B[/tex]. In fact, species in
Hask are known to programming language researchers as <em>container
datatypes</em> and the whole calculus translates relatively cleanly.</p>
<p>Functional equations translate directly to data type definitions
in Haskell.</p>
<div class="line-block">
<div class="line"><strong>Examples</strong>:</div>
<div class="line-block">
<div class="line">L = 1 + X·L</div>
<div class="line"><code>data List a = Nil | Cons a (List a)</code></div>
</div>
</div>
<div class="line-block">
<div class="line">B = 1 + X·B·B</div>
<div class="line-block">
<div class="line"><code>data BinaryTree a = Leaf | Node (BinaryTree a) a (BinaryTree a)</code></div>
</div>
</div>
<div class="line-block">
<div class="line">A = X·E(A)</div>
<div class="line-block">
<div class="line"><code>data RootedTree a = Node a (Set (RootedTree a))</code></div>
<div class="line">Usually, we simulate the Set here by a List. If we need our
rooted trees to be planar, we can in fact impose a Linear Order
structure instead, and get something like</div>
<div class="line">Ap = X·L(Ap)</div>
</div>
</div>
<p>The species interpretation of [tex]\frac{\partial}{\partial x}[/tex]
corresponding to leaving a hole in the structure carries over cleanly,
so that [tex]\frac{\partial}{\partial x}T[/tex] is the type of
T-with-a-hole.</p>
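<p>Concretely, for the binary trees B above, a T-with-a-hole is the familiar zipper context: each step from the hole back to the root records which way we descended, the label at that node, and the sibling subtree we did not enter. A minimal sketch (type and function names are mine):</p>

```haskell
data BTree a = Leaf | Node (BTree a) a (BTree a)
  deriving (Eq, Show)

-- One step of a one-hole context in B = 1 + X·B·B.
data Step a
  = DownLeft a (BTree a)   -- we went left; keep the label and right sibling
  | DownRight (BTree a) a  -- we went right; keep the left sibling and label
  deriving (Eq, Show)

-- A tree-with-a-hole: the path of steps from the hole up to the root.
type Context a = [Step a]

-- Plug a subtree into the hole, rebuilding the whole tree.
plugTree :: BTree a -> Context a -> BTree a
plugTree t []                    = t
plugTree t (DownLeft x r : ctx)  = plugTree (Node t x r) ctx
plugTree t (DownRight l x : ctx) = plugTree (Node l x t) ctx
```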
</div>
</div>
<div class="section" id="two-derivatives-of-lists">
<h2>Two derivatives of lists</h2>
<p>We can deal with [tex]\frac{\partial}{\partial x}L[/tex] in two
different ways:</p>
<div class="line-block">
<div class="line">1.</div>
<div class="line-block">
<div class="line">DL = D(1+X·L) = D1 + D(X·L) = 0 + DX·L+X·DL</div>
<div class="line">and thus</div>
<div class="line">DL-X·DL = 1·L</div>
<div class="line">so (1-X)·DL = L</div>
<div class="line">and thus</div>
<div class="line">DL = L·1/(1-X)</div>
</div>
</div>
<div class="line-block">
<div class="line">Now, from L=1+X·L follows by a similar cavalier use of subtraction and
division — which, by the way, in species theory is captured by the
idea of a <em>virtual species</em>, and dealt with relatively cleanly — that</div>
<div class="line-block">
<div class="line">L = 1+X·L</div>
<div class="line">so</div>
<div class="line">L-X·L = 1</div>
<div class="line">and thus</div>
<div class="line">(1-X)·L = 1</div>
<div class="line">so</div>
<div class="line">L = 1/(1-X)</div>
</div>
</div>
<div class="line-block">
<div class="line">Thus, we can conclude that</div>
<div class="line-block">
<div class="line">DL = L·1/(1-X) = L·L</div>
<div class="line">and thus, a list with a hole is a pair of lists: the stuff before the
hole and the stuff after the hole.</div>
</div>
</div>
<div class="line-block">
<div class="line">2.</div>
<div class="line-block">
<div class="line">We could, instead of using implicit differentiation as in 1,
differentiate the expansion we derived:</div>
<div class="line">L = 1/(1-X) = 1+X+X·X+X·X·X+…</div>
</div>
</div>
<div class="line-block">
<div class="line">Indeed,</div>
<div class="line-block">
<div class="line">DL = D(1/(1-X)) = D(1+X+X·X+…) = 0+1+2X+3X·X+…</div>
<div class="line">which we can observe factors as</div>
<div class="line">= (1+X+X·X+X·X·X+…)·(1+X+X·X+X·X·X+…) = L·L</div>
</div>
</div>
<div class="line-block">
<div class="line">Or we can just use the chain rule, with u = 1-X:</div>
<div class="line-block">
<div class="line">DL = D(1/(1-X)) = D(1/u)·D(1-X) = -1/(u·u)·(-1) = 1/[(1-X)·(1-X)] =
L·L</div>
</div>
</div>
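<p>Either way we arrive at DL = L·L, and this is exactly Huet's list zipper: a list-with-a-hole is the (reversed) prefix before the hole together with the suffix after it. A minimal sketch (names mine):</p>

```haskell
-- A list-with-a-hole, per DL = L·L: the part before the hole (stored
-- reversed, element nearest the hole first) and the part after it.
data ListHole a = ListHole [a] [a]
  deriving (Eq, Show)

-- Fill the hole with a value, recovering an ordinary list.
plugList :: a -> ListHole a -> [a]
plugList x (ListHole before after) = reverse before ++ x : after

-- All ways of making a hole at an element: n of them for a list of
-- length n, matching the pointing operation x·d/dx on power series.
holes :: [a] -> [(a, ListHole a)]
holes xs =
  [ (xs !! i, ListHole (reverse (take i xs)) (drop (i + 1) xs))
  | i <- [0 .. length xs - 1] ]
```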
</div>
<div class="section" id="grobner-bases-for-operads">
<h2>Gröbner bases for operads</h2>
<p>All of this becomes relevant to the implementation of Buchberger's
algorithm on shuffle operads (see Dotsenko—Khoroshkin and
Dotsenko—Vejdemo-Johansson) in the step where the S-polynomials (and
thereby also the reductions) are defined. With a common multiple
defined, we need some way to extend the modifications that take the
initial term to the common multiple to the rest of that term.</p>
<p>For this, it turns out that the derivative of the tree datatype used
provides theoretical guarantees that only partially filled-in trees of
the right size, with holes of the right size, can be introduced; it also
provides an easy and relatively efficient algorithm for constructing the
hole-y trees and later filling in the holes.</p>
</div>
Teaching and seminars up ahead2010-08-23T12:37:00+02:00Michitag:blog.mikael.johanssons.org,2010-08-23:teaching-and-seminars-up-ahead.html<p>I'll be talking quite a bit in the next couple of weeks; here's a “heads
up” for those who might want to come and listen.</p>
<p>25 August, 1pm, Linköpings Universitet. The topology of Politics.</p>
<p>1 September, 10am, KTH. Combinatorial species, Haskell datatypes and
Gröbner bases for operads.</p>
<p>1 September, 1pm, KTH. Topological data analysis and the topology of
politics.</p>
<p>2, 3, 6, 7 September, 11am, KTH. Mini-course on Applied algebraic
topology.</p>
<p>I'm happy to give out details if you'd like to come and listen.</p>
Itinerary for the Summer2010-04-26T04:38:00+02:00Michitag:blog.mikael.johanssons.org,2010-04-26:itinerary-for-the-summer.html<p>A few things that may interest people.</p>
<p>1. I'm going on the job market in the fall. I'm looking for
lectureships, tenure tracks, possibly 2 year postdocs if they are really
interesting.</p>
<div class="line-block">
<div class="line">2. I'm very interested in adding visits, adding seminars, adding
anything interesting to this itinerary. If you want to meet me, send
me a note, and I'm sure we can find time. My email address is easy to
google, and listed in the site info here.</div>
<div class="line-block">
<div class="line">I have current research and survey talks mostly ready to go for:</div>
</div>
</div>
<dl class="docutils">
<dt>Barcodes and the topological analysis of data: an overview</dt>
<dd>In the past decade, the use of topology in explicit applications has
become a growing field of research. One fruitful approach has been
to view a dataset as a point cloud: a finite (but large) subset of a
Euclidean space, and construct filtered simplicial complexes that
capture distances between the data points. Doing this, we can
translate any topological functor into a functor that acts on these
filtered complexes, and reinterpret the results from the functors
into information about the dataset. I'll illustrate the basic
results from the field, and give an overview of where my own
research fits into the emergent paradigm.</dd>
<dt>Persistent cohomology and circular coordinates</dt>
<dd>Using a new variant of the algorithms described by Zomorodian, we
are able to compute cohomology persistently, use persistence to pick
out topologically relevant cocycles, and convert these into
circle-valued coordinates. This allows us to generate topological
coordinatizations for datasets, opening up new approaches to
data analysis.</dd>
<dt>Dualities in persistence</dt>
<dd>Starting out with a point cloud, the use of the barcode of
persistent Betti numbers to characterize properties of the point
cloud is well known. Expanding the amount of information we extract,
we are led to study persistent homology and cohomology, in both an
absolute manner, studying H(X<sub>i</sub>), and in a relative manner,
studying H(X<sub>∞</sub>; X<sub>i</sub>). We are able to show that these
four possible homology functors are related by dualization functors
Hom<sub>k</sub>(-, k) and Hom<sub>k[x]</sub>(-, k[x]), and are able
to use this to translate between all four corners. This way, we can
choose to compute our barcodes in whatever situation an application
motivates, and translate the resulting barcode into whatever
setting we want to interpret it in.</dd>
<dt>Period recognition with circular coordinates</dt>
<dd>In joint work with Vin de Silva and Dmitriy Morozov, we show how
computing circular coordinates for datasets may be used in the
analysis of dynamical systems and in signal processing. We have experimental
results that show a high resistance to spatial noise in
reconstructing the period of a periodic process while avoiding
Fourier transforms and running differences. Ideally, the use of
algebraic topology in time series analysis for dynamical systems and
signals will complement existing methods with more noise-robust
components, and I discuss to some extent how this may be achieved.</dd>
<dt>Implementing Gröbner Bases for Operads</dt>
<dd>In a paper by Dotsenko and Khoroshkin, a Buchberger algorithm is
defined in an equivalent subcategory of the category of symmetric
operads. For this subcategory, they prove a Diamond lemma, and
demonstrate how the existence of a quadratic Gröbner basis amounts
to a demonstration of Koszularity of the corresponding finitely
presented operad. During a conference at CIRM in Luminy, Dotsenko and I
built an implementation of their work. I'll discuss the
implementation work, and what considerations need to be taken in the
implementation of Gröbner bases for operads.</dd>
<dt>Parallelizing multigraded Gröbner bases</dt>
<dd>In work together with Emil Sköldberg and Jason Dusek, we use the
lattice of degrees for a multigraded polynomial ring to parallelize
Gröbner basis computations in the multigraded ring. We show speedups
in implementations in Haskell using Data Parallel Haskell and the
vector package, as well as in Sage using MPI for parallelization and
SQL for abstract data storage and transport.</dd>
<dt>The topology of politics</dt>
<dd>While the use of data analysis in political science is a mature
field, the development of new data analysis methods calls for
updates in the choice, use and interpretation of these methods. We
set out to investigate the use of the topological data analysis
methods worked out at Stanford on political datasets in an ongoing
research project. I'll illustrate initial results, partly
well-known, on the geometry and topology of parliamentary rollcall
datasets.</dd>
<dt>Stepwise computing of diagonals for multiplicative families of polytopes</dt>
<dd>In recent work together with Ron Umble, we demonstrate a way to use
basic linear algebra to compute cellular diagonal maps for families
of polytopes such that each face of each polytope in the family is
given by products of other polytopes in the same family. Using the
algorithm we describe, we are able to recover the Alexander-Whitney
diagonal on simplices, the Serre diagonal on cubes as well as the
Saneblidze-Umble diagonal on associahedra.</dd>
</dl>
<div class="line-block">
<div class="line">3. This summer, I will be present at:</div>
<div class="line-block">
<div class="line"><a class="reference external" href="http://comptop.stanford.edu/atmcs4">ATMCS</a> in Münster, Germany,
June 21-26</div>
<div class="line"><a class="reference external" href="http://nafpaktostopology.math.upatras.gr/">Topology and its
Applications</a> in
Nafpaktos, Greece, June 26-30</div>
<div class="line"><a class="reference external" href="http://delone120.mi.ras.ru/index.html">The International Conference
"GEOMETRY, TOPOLOGY,
ALGEBRA and NUMBER THEORY, APPLICATIONS"
dedicated to the 120th anniversary of Boris
Delone</a> in Moscow, Russia,
August 16-20</div>
<div class="line"><a class="reference external" href="http://www.cs.uu.nl/wiki/bin/view/IFL2010/WebHome">Symposium on Implementation and Application of Functional
Languages</a> near
Amsterdam, Holland, September 1-3</div>
<div class="line"><a class="reference external" href="http://www.math.kobe-u.ac.jp/icms2010/">International Congress on Mathematical
Software</a> in Kobe, Japan,
September 13-17</div>
</div>
</div>
Repeal the nth amendment2010-03-24T20:42:00+01:00Michitag:blog.mikael.johanssons.org,2010-03-24:repeal-the-nth-amendment.html<p>Inspired by <a class="reference external" href="http://nielsenhayden.com/makinglight/archives/012276.html">this
post</a> over
at Making Light, here, have a chart:</p>
<p><img alt="image0" src="http://chart.apis.google.com/chart?cht=bvs&chs=500x400&chd=t:686000,2450000,10,429000,298000,83300,39600,85400,5,203000,43300,6,72200,368000,58000,137000,93700,128000,28900,1,9,34300,9,1,8,5,0|221000,1020000,7540,130000,45000,34100,6,27700,4,238000,4,18700,71300,328000,24800,467000,974000,136000,187000,8,76600,363000,10300,10800,16200,15100,8&chds=0,3500000&chco=00aa33,0033aa&chbh=5,7,2&chxr=1,0,3500000,500000|0,1,27&chxs=1N*e,cc3333,10|0N*f0,cc3333,8&chxt=x,y&chtt=Google+hits+for+Repeal+nth+amendment&chdl=First+Second+...|1st+2nd+..." /></p>
<p>First, Second, ...</p>
<p>1st, 2nd, ...</p>
<p>And, because this chart is kinda tricky to read, here's the log-scaled
version of the same chart:</p>
<p><img alt="image1" src="http://chart.apis.google.com/chart?cht=bvg&chs=600x400&chd=t:5.836324122037575,6.3891660861371626,1.0004340774793186,5.632457302308139,5.4742162786498954,4.9206450535429767,4.5976952955958224,4.9314579215431564,0.69983772586724569,5.3074960593070291,4.6364879966523107,0.77887447200273952,4.8585372577212249,5.5658478304749979,4.7634280684412902,5.1367205988567326,4.9717396372372402,5.1072100035771237,4.4608979930314288,0.0043213737826425782,0.95472479097906293,4.5352942466592188,0.95472479097906293,0.0043213737826425782,0.90363251608423767,0.69983772586724569,-2.0|5.3443922933364441,6.0086001760197068,3.8773719218567688,5.1139433857141032,4.6532126102852178,4.5327545063515648,0.77887447200273952,4.4424799258494314,0.60314437262018228,5.3765769753041788,0.60314437262018228,4.2718418387794754,4.8530895907627283,5.5158738569523642,4.3944518559449239,5.6693168898657795,5.9885589613374908,5.1335389403036338,5.271841629760802,0.90363251608423767,4.8842288263290081,5.5599066370001475,4.0128376463500954,4.0334241576112841,4.2095152826255617,4.1789772349053136,0.90363251608423767&chf=c,ls,90,ffeeee,0.20,FFFFFF,0.80&chg=100,10,1,5&chds=-2,8&chco=00aa33,0033aa&chbh=2,1,5&chxr=1,-2,8,1|0,1,27,1&chxs=1N*f0,cc3333,10|0N*f0,cc3333,8&chxt=x,y&chtt=log10+of+Google+hits+for+Repeal+nth+amendment&chdl=First+Second+...|1st+2nd+..." /></p>
<p>For the log-chart, I stopped stacking the numbers.</p>
<p><em>ETA:</em> Changed the log-chart from a line-chart to a bar-chart after
feedback from the readership of <a class="reference external" href="http://boingboing.org">bOINGbOING</a>.
Hello and welcome!</p>
Another kind of sports reporting2010-02-23T20:08:00+01:00Michitag:blog.mikael.johanssons.org,2010-02-23:another-kind-of-sports-reporting.html<p>Inspired by John Allen Paulos, who just now tweeted</p>
<blockquote>
Obvious, but NBC hasn't said: Canada, Norway, Germany way ahead of
US in Olympic medals per capita. Many ways to rank: Cf. Arrow's
theorem.</blockquote>
<p>I decided to redo the medals list. Here, the number of medals per capita
among the top countries.</p>
<table border="1" class="docutils">
<colgroup>
<col width="20%" />
<col width="20%" />
<col width="20%" />
<col width="20%" />
<col width="20%" />
</colgroup>
<thead valign="bottom">
<tr><th class="head">Country</th>
<th class="head">Gold/capita</th>
<th class="head">Silver/capita</th>
<th class="head">Bronze/capita</th>
<th class="head">Total/capita</th>
</tr>
</thead>
<tbody valign="top">
<tr><td>Norway</td>
<td>1.23E-06</td>
<td>6.17E-07</td>
<td>1.03E-06</td>
<td>2.88E-06</td>
</tr>
<tr><td>Austria</td>
<td>3.58E-07</td>
<td>3.58E-07</td>
<td>3.58E-07</td>
<td>1.07E-06</td>
</tr>
<tr><td>Slovenia</td>
<td>0</td>
<td>4.87E-07</td>
<td>4.87E-07</td>
<td>9.74E-07</td>
</tr>
<tr><td>Switzerland</td>
<td>6.43E-07</td>
<td>0</td>
<td>2.57E-07</td>
<td>9.00E-07</td>
</tr>
<tr><td>Latvia</td>
<td>0</td>
<td>8.90E-07</td>
<td>0</td>
<td>8.90E-07</td>
</tr>
<tr><td>Sweden</td>
<td>3.21E-07</td>
<td>2.14E-07</td>
<td>2.14E-07</td>
<td>7.49E-07</td>
</tr>
<tr><td>Estonia</td>
<td>0</td>
<td>7.46E-07</td>
<td>0</td>
<td>7.46E-07</td>
</tr>
<tr><td>Slovakia</td>
<td>1.84E-07</td>
<td>1.84E-07</td>
<td>1.84E-07</td>
<td>5.53E-07</td>
</tr>
<tr><td>Croatia</td>
<td>0</td>
<td>2.25E-07</td>
<td>2.25E-07</td>
<td>4.51E-07</td>
</tr>
<tr><td>Netherlands</td>
<td>1.81E-07</td>
<td>6.03E-08</td>
<td>6.03E-08</td>
<td>3.01E-07</td>
</tr>
<tr><td>Canada</td>
<td>1.47E-07</td>
<td>1.18E-07</td>
<td>2.94E-08</td>
<td>2.94E-07</td>
</tr>
<tr><td>Czech republic</td>
<td>9.51E-08</td>
<td>0</td>
<td>1.90E-07</td>
<td>2.85E-07</td>
</tr>
<tr><td>Germany</td>
<td>8.56E-08</td>
<td>1.10E-07</td>
<td>6.12E-08</td>
<td>2.57E-07</td>
</tr>
<tr><td>Belarus</td>
<td>0</td>
<td>1.05E-07</td>
<td>1.05E-07</td>
<td>2.11E-07</td>
</tr>
<tr><td>Finland</td>
<td>0</td>
<td>1.87E-07</td>
<td>0</td>
<td>1.87E-07</td>
</tr>
<tr><td>Korea</td>
<td>8.04E-08</td>
<td>8.04E-08</td>
<td>2.01E-08</td>
<td>1.81E-07</td>
</tr>
<tr><td>France</td>
<td>3.06E-08</td>
<td>3.06E-08</td>
<td>6.11E-08</td>
<td>1.22E-07</td>
</tr>
<tr><td>Poland</td>
<td>0</td>
<td>7.87E-08</td>
<td>2.62E-08</td>
<td>1.05E-07</td>
</tr>
<tr><td>Australia</td>
<td>4.51E-08</td>
<td>4.51E-08</td>
<td>0</td>
<td>9.02E-08</td>
</tr>
<tr><td>USA</td>
<td>2.27E-08</td>
<td>2.59E-08</td>
<td>3.24E-08</td>
<td>8.10E-08</td>
</tr>
<tr><td>Russian
Federation</td>
<td>1.41E-08</td>
<td>2.11E-08</td>
<td>4.23E-08</td>
<td>7.75E-08</td>
</tr>
<tr><td>Italy</td>
<td>0</td>
<td>1.66E-08</td>
<td>4.98E-08</td>
<td>6.64E-08</td>
</tr>
<tr><td>Kazakhstan</td>
<td>0</td>
<td>6.34E-08</td>
<td>0</td>
<td>6.34E-08</td>
</tr>
<tr><td>Japan</td>
<td>0</td>
<td>7.85E-09</td>
<td>1.57E-08</td>
<td>2.35E-08</td>
</tr>
<tr><td>Great Britain</td>
<td>1.61E-08</td>
<td>0</td>
<td>0</td>
<td>1.61E-08</td>
</tr>
<tr><td>China</td>
<td>2.25E-09</td>
<td>7.49E-10</td>
<td>7.49E-10</td>
<td>3.74E-09</td>
</tr>
</tbody>
</table>
<p>(Data taken on February 23, from the current state of olympic medals
achieved at that date, and from the <a class="reference external" href="http://en.wikipedia.org/wiki/List_of_countries_by_population">Wikipedia page listing populations
of the nations of the
earth</a>,
taken the same date)</p>
Testing out the wplatex package2010-02-03T05:38:00+01:00Michitag:blog.mikael.johanssons.org,2010-02-03:testing-out-the-wplatex-package.html<p>Eric Finster, over at <a class="reference external" href="http://curiousreasoning.wordpress.com">Curious
Reasoning</a> has built a python
script to allow you to write Wordpress posts entirely in LaTeX, and
upload them. The script parses the LaTeX code and generates HTML that
expresses the same structure.</p>
<p>This, here, is me trying it out. With any luck, the appearance of a new
toy will get me back to actually blogging some more - it's been winding
down a bit much here lately.</p>
Coordinatization with hom complexes2009-12-07T22:55:00+01:00Michitag:blog.mikael.johanssons.org,2009-12-07:coordinatization-with-hom-complexes.html<p>These are notes from a talk given at the Stanford applied topology
seminar by Gunnar Carlsson from 9 Oct 2009. The main function of this
blog post is to get me an easily accessible point of access for the
ideas in that talk.</p>
<div class="section" id="coordinatization">
<h2>Coordinatization</h2>
<p>First off, a few words on what we mean by coordinatization: as in
algebraic geometry, we say that a coordinate function is some
[tex]X\to\mathbb R[/tex] or possibly some [tex]X\to\mathbb C[/tex],
with all the niceness properties we'd expect to see in the context we're
working in.</p>
<p>A particularly good example is Principal Component Analysis, which yields
a split linear automorphism on the ambient space that maximizes spread
of the data points in the initial coordinates.</p>
</div>
<div class="section" id="topological-coordinatization">
<h2>Topological coordinatization</h2>
<div class="line-block">
<div class="line">The core question we're working with right now is this:</div>
<div class="line-block">
<div class="line">Given a space (point cloud) X, and a (persistent) view of
[tex]H_*(X)[/tex], can we use some map [tex]H_*(X)\to
H_*(Y)[/tex] to generate a map [tex]X\to Y[/tex] inducing that map?</div>
</div>
</div>
<p>In topology there are known ways to compute spaces of maps directly
from available homological information. The key phrase here is the
<em>Eilenberg-Moore spectral sequence</em>, by which we compute
[tex]H^*(\Omega X)[/tex] using [tex]H^*(X)[/tex]. We note that
[tex]\Omega X = Top_*(S^1,X)[/tex] with the <em>compact open</em> topology.</p>
<div class="line-block">
<div class="line">We fix a field k of coefficients. We can find a spectral sequence that
computes</div>
<div class="line-block">
<div class="line">[tex]Tor_{H^*(X)}(k,k) \Rightarrow H^*(\Omega X)[/tex]</div>
</div>
</div>
<p>Variations of this spectral sequence allow us to compute
[tex]H^*(Map(X,Y))[/tex]. A homotopy class of maps thus corresponds to
a cohomology element from [tex]H^*(F(X,Y))[/tex].</p>
</div>
<div class="section" id="monad-approximation">
<h2>Monad approximation</h2>
<div class="line-block">
<div class="line">Define</div>
<div class="line-block">
<div class="line">[tex]Sp^2(X) = X \times_{\mathbb Z/2} X[/tex] (unordered pairs)</div>
<div class="line">and similarly unordered tuples</div>
<div class="line">[tex]Sp^n(X) = X^n/\Sigma_n[/tex]</div>
<div class="line">we can get a map, for pointed spaces [tex](X,*)[/tex]</div>
<div class="line">[tex]Sp^n(X) \to Sp^{n+1}(X)[/tex]</div>
<div class="line">[tex](x_1,\dots,x_n)\mapsto(x_1,\dots,x_n,*)[/tex]</div>
</div>
</div>
<p>In the limit, we get the union [tex]Sp^\infty(X)[/tex].</p>
<div class="line-block">
<div class="line">The Dold-Thom theorem tells us:</div>
<div class="line-block">
<div class="line">[tex]\pi_i(Sp^\infty(X)) = H_i(X)[/tex]</div>
</div>
</div>
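<p>A standard example makes the theorem concrete: taking [tex]X = S^n[/tex] (with reduced homology), we get [tex]\pi_i(Sp^\infty(S^n)) = \tilde H_i(S^n)[/tex], which is [tex]\mathbb Z[/tex] for [tex]i = n[/tex] and trivial otherwise, so [tex]Sp^\infty(S^n)[/tex] is an Eilenberg-MacLane space [tex]K(\mathbb Z,n)[/tex].</p>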
<p>Suppose we are really looking for maps [tex]Z\to Sp^\infty(X,*)
\supseteq X = Sp^1(X)[/tex].</p>
<div class="line-block">
<div class="line">We pick chain complexes [tex]C_*, D_*[/tex]. There is a hom chain
complex [tex]Hom(C_*,D_*)[/tex] consisting of graded maps
[tex]C_*\to D_*[/tex] with the homotopy/induced boundary</div>
<div class="line-block">
<div class="line">[tex]\Delta f = \partial_Df\pm f\partial_C[/tex]</div>
</div>
</div>
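<p>To see that [tex]\Delta[/tex] really is a boundary, fix the sign convention [tex]\Delta f = \partial_D f - (-1)^{|f|} f\partial_C[/tex] for [tex]f[/tex] of degree [tex]|f|[/tex]. A direct check gives</p>
<div class="line-block">
<div class="line">[tex]\Delta^2 f = \partial_D^2 f - (-1)^{|f|}\partial_D f\partial_C + (-1)^{|f|}\partial_D f\partial_C - f\partial_C^2 = 0[/tex]</div>
</div>
<p>and the degree-0 cycles, the maps with [tex]\partial_D f = f\partial_C[/tex], are exactly the chain maps.</p>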
<div class="line-block">
<div class="line">For this complex, we get</div>
<div class="line-block">
<div class="line">[tex]H_0(Hom(C_*,D_*))[/tex] is the family of chain homotopy
classes of chain maps [tex]C_*\to D_*[/tex].</div>
</div>
</div>
<p>Recall: [tex]f,g[/tex] are chain homotopic if there is a family
[tex]h_i: C_i\to D_{i+1}[/tex] such that [tex]f-g = \partial_D
h_i + h_{i-1}\partial_C[/tex].</p>
<div class="line-block">
<div class="line">Homotopic maps of spaces yield chain homotopic maps of chain complexes. And thus</div>
<div class="line-block">
<div class="line">[tex]\pi_*(Top_*(Z,Sp^\infty(X))) = H_*(Hom(C_*Z,C_*X),\Delta)[/tex]</div>
</div>
</div>
<p>How, though, do we get from [tex][Z,Sp^\infty(X)][/tex] to
[tex][Z,X][/tex]?</p>
</div>
<div class="section" id="cosimplicial-spaces">
<h2>Cosimplicial spaces</h2>
<div class="line-block">
<div class="line">A simplicial space is a contravariant functor [tex]X:\Delta\to Top[/tex]. It has
a <em>total space</em> or <em>realization</em> given by</div>
<div class="line-block">
<div class="line">[tex]\coprod X_n\times\Delta[n]/\sim[/tex]</div>
</div>
</div>
<p>In the dual setting of cosimplicial spaces, the total space is a
subspace of
[tex]X^0\times(X^1)^{\Delta[1]}\times(X^2)^{\Delta[2]}\times(X^3)^{\Delta[3]}\times\dots[/tex].</p>
<div class="line-block">
<div class="line">There is a cosimplicial space</div>
<div class="line-block">
<div class="line">[tex]Sp^\infty(X) \leftrightarrow Sp^\infty Sp^\infty(X)
\leftrightarrow \dots[/tex]</div>
<div class="line">whose total space is [tex]X[/tex].</div>
</div>
</div>
<div class="line-block">
<div class="line">Furthermore, there is a spectral sequence starting at
[tex]H_*(Sp^\infty(X))[/tex] (well understood space!)</div>
<div class="line-block">
<div class="line">going through</div>
<div class="line">[tex]\pi_*(Top_*(Z,Sp^\infty Sp^\infty Sp^\infty X))
\leftrightarrow \pi_*(Top_*(Z,Sp^\infty Sp^\infty X))
\leftrightarrow \pi_*(Top_*(Z,Sp^\infty X))[/tex]</div>
<div class="line">which recovers [tex]\pi_*(Top_*(Z,X))[/tex] for us.</div>
</div>
</div>
<p>The mapping space [tex]Top_*(Z,X)[/tex] maps into
[tex]Top_*(Z,Sp^\infty X)[/tex], whose homotopy is described by
[tex]H_*(Hom(C_*Z,C_*X))[/tex], that is, by chain maps. The
differentials of the spectral sequence give us conditions for a chain
map to come from an actual map [tex]Z\to X[/tex] on the topological
level.</p>
<p>Suppose now that [tex]X[/tex] is a simplicial complex, and
[tex]H_*X[/tex] is known, say isomorphic to [tex]H_*S^1[/tex].</p>
<p>We pick [tex]S^1[/tex] as a model space, and work in
[tex]Hom(C_*X,C_*S^1)[/tex]. We use conditions from the
spectral sequence above to pick out chain maps that might come from
topological maps.</p>
<p>Suppose we had some chain map
[tex]C_*X\overset{\varphi}{\to}C_*S^1[/tex] that sends
[tex]\sigma\mapsto\sum_{i=0}^n\eta_i[/tex].</p>
<p>We look for a chain homotopic map [tex]\hat\varphi[/tex] such that
[tex]\hat\varphi(\sigma)[/tex] is a simplex (this will be WAY too
optimistic, usually) or at least [tex]\hat\varphi(\sigma) =
\sum_{i=0}^n\eta_i[/tex] for small [tex]n[/tex] and
[tex]\eta_i[/tex] close together in graph distances.</p>
<p>Yi Ding: there are likely many simplicial maps [tex]X\to S^1[/tex]
inducing any given [tex]H_*X\to H_*S^1[/tex]. This makes
optimizations on quality of maps (as measured in smallness properties of
preimages of given simplices) troublesome, since we have a large search
space and are likely to get stuck in local minima.</p>
<p>The topological approach creates very many maps, but we just want ONE -
albeit with quality assurances. The big question is how to get there:</p>
<p>Shrinking the space? Homotopy collapses to reduce the number of
generators?</p>
<p>Ask for more side conditions? If so, which?</p>
<p>Minimize the size of preimages and kernels - smoothing the resulting
map?</p>
<p>Run with one condition until we reach a minimum, then change conditions
to get out of that particular sink, repeat until it all looks stable?</p>
<p>Thus we need to search for strategies to improve convergence to actually
good maps.</p>
<p>The questions we pose are similar to:</p>
<p>Jeff Erickson, et al.: Given X, [tex]x\in H_n(X)[/tex], how do we
find a good representative [tex]\xi\in x[/tex]?</p>
<div class="line-block">
<div class="line">Finding good maps is really about finding good representatives in
[tex]Hom(C_*X,C_*Y)[/tex]. Given simplicial bases
[tex]\{\sigma_i\}\subseteq X, \{\eta_i\}\subseteq Y[/tex] we
can pick out a first basis</div>
<div class="line">[tex]\varphi_{\sigma_i\eta_j}(\sigma_k)=\delta_{ik}\eta_j[/tex]
(Kronecker [tex]\delta[/tex]) for [tex]Hom(C_*X,C_*Y)[/tex].</div>
</div>
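<p>A minimal sketch of this Kronecker-delta basis in Haskell, with chains written as formal sums of named generators (the representation here is illustrative, not from any particular package):</p>

```haskell
-- A chain is a formal sum of named generators with integer coefficients.
type Chain = [(String, Int)]

-- The basis element phi_{sigma, eta} of Hom(C_*X, C_*Y):
-- it sends the generator sigma to eta and every other generator to zero.
phi :: String -> String -> (String -> Chain)
phi sigma eta tau
  | tau == sigma = [(eta, 1)]
  | otherwise    = []

-- Extend a map given on generators linearly to all chains.
extend :: (String -> Chain) -> Chain -> Chain
extend f c = [ (e, k * n) | (s, n) <- c, (e, k) <- f s ]

main :: IO ()
main =
  -- phi_{s1,e2} applied to the chain 3*s1 + 5*s2 picks out 3*e2.
  print (extend (phi "s1" "e2") [("s1", 3), ("s2", 5)])
```

<p>A general element of [tex]Hom(C_*X,C_*Y)[/tex] is then an integer combination of these [tex]\varphi_{\sigma_i\eta_j}[/tex].</p>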
<p>Finally: many problems in data analysis call for analyzing contractible
spaces. So if we pick out the notion of a boundary, doing all of this
relative to a boundary would give us increased power to use the
resulting methods in data analysis.</p>
</div>
[MATH198] Lecture 10 (last lecture) posted2009-12-02T23:17:00+01:00Michitag:blog.mikael.johanssons.org,2009-12-02:math198-lecture-10-last-lecture-posted.html<p>Now up: <a class="reference external" href="http://haskell.org/haskellwiki/User:Michiexile/MATH198/Lecture_10">Lecture
10</a>
with the definition of a topos and a derivation of internal,
intuitionistic logic within a topos.</p>
[MATH198] Lecture 9 posted and lectured2009-11-19T19:44:00+01:00Michitag:blog.mikael.johanssons.org,2009-11-19:math198-lecture-9-posted-and-lectured.html<p><a class="reference external" href="http://haskell.org/haskellwiki/User:Michiexile/MATH198/Lecture_9">Lecture
9</a>:
Catamorphisms, Anamorphisms, more from that zoo; adjunctions, some
properties and some examples.</p>
[MATH198] Multiple lectures posted2009-11-11T20:31:00+01:00Michitag:blog.mikael.johanssons.org,2009-11-11:math198-multiple-lectures-posted.html<div class="line-block">
<div class="line">I have been remiss in updating here. Since the last time I posted, I
have posted:</div>
<div class="line-block">
<div class="line"><a class="reference external" href="http://haskell.org/haskellwiki/User:Michiexile/MATH198/Lecture_6">Lecture
6</a>,
featuring some interesting limits and colimits, culminating in the
introduction of adjoints.</div>
</div>
</div>
<p><a class="reference external" href="http://haskell.org/haskellwiki/User:Michiexile/MATH198/Lecture_7">Lecture
7</a>,
featuring the introduction of monads based in adjoints, with the
connection between the <em>monoid of endofunctors</em> and the Haskellite
specification of monads.</p>
<p><a class="reference external" href="http://haskell.org/haskellwiki/User:Michiexile/MATH198/Lecture_8">Lecture
8</a>,
featuring Eilenberg-Moore algebras, initial algebras for datatype
specification, Lambek's lemma and structural induction and recursion
with endofunctor algebras.</p>
[MATH198] Lecture 5 is up2009-10-23T18:07:00+02:00Michitag:blog.mikael.johanssons.org,2009-10-23:math198-lecture-5-is-up.html<p>And, as it turns out, my logic-fu is lacking. Next time around, it's
likely I'll talk about the CCC = typed λ-calculus correspondence, but won't
try to actually produce the correspondence explicitly.</p>
[MATH 198] Lecture 4 and a question for the community2009-10-15T06:56:00+02:00Michitag:blog.mikael.johanssons.org,2009-10-15:math-198-lecture-4-and-a-question-for-the-community.html<p>Lecture 4 was held, and the notes are up on the wiki: <a class="reference external" href="http://haskell.org/haskellwiki/User:Michiexile/MATH198/Lecture_4">Lecture 4
notes</a></p>
<p>During class, and in unrelated conversations afterwards, though, the
question emerged:</p>
<p>If formally differentiating datatypes gives us zippers, what happens
if we formally integrate datatypes?</p>
[MATH198] Third lecture is up2009-10-09T08:09:00+02:00Michitag:blog.mikael.johanssons.org,2009-10-09:math198-third-lecture-is-up.html<p>The third lecture is up on the <a class="reference external" href="http://haskell.org/haskellwiki/User:Michiexile/MATH198/Lecture_3">haskell
wiki</a>.</p>
[MATH 198] Second lecture2009-10-05T21:25:00+02:00Michitag:blog.mikael.johanssons.org,2009-10-05:math-198-second-lecture.html<p>I've been maddeningly slow lately. With everything.</p>
<p>Since last week Wednesday, the <a class="reference external" href="http://haskell.org/haskellwiki/User:Michiexile/MATH198/Lecture_2">second
lecture</a>
is up on the Haskell wiki.</p>
[MATH198] Lecture 1 now online2009-09-24T21:18:00+02:00Michitag:blog.mikael.johanssons.org,2009-09-24:math198-lecture-1-now-online.html<p>The first lecture has been successfully held. The notes - which may well
be augmented once I get hold of the students' notes - are online on <a class="reference external" href="http://haskell.org/haskellwiki/User:Michiexile/MATH198">the
Haskell Wiki</a></p>
[Stanford] MATH 198: Category Theory and Functional Programming2009-08-29T07:19:00+02:00Michitag:blog.mikael.johanssons.org,2009-08-29:stanford-math-198-category-theory-and-functional-programming.html<p>Category theory, with an origin in algebra and topology, has found use
in recent decades for computer science and logic applications. Possibly
most clearly, this is seen in the design of the programming language
Haskell - where the categorical paradigm suffuses the language design,
and gives rise to several of the language constructs, most prominently
the Monad.</p>
<p>In this course, we will teach category theory from first principles with
an eye towards its applications to and correspondences with Haskell and
the theory of functional programming. We expect students to have taken,
or currently be taking, CS242 and to have some level of mathematical
maturity. We also expect students to have had contact with linear
algebra and discrete mathematics in order to follow the motivating
examples behind the theory expounded.</p>
<p>Wednesdays at 4.15.</p>
<p>Online notes will appear successively on the Haskell wiki on
<a class="reference external" href="http://haskell.org/haskellwiki/User:Michiexile/MATH198">http://haskell.org/haskellwiki/User:Michiexile/MATH198</a></p>
Soliciting advice2009-07-22T15:28:00+02:00Michitag:blog.mikael.johanssons.org,2009-07-22:soliciting-advice.html<p>Dear blogosphere,</p>
<p>come this fall, I shall be teaching. My first lecture course, ever.</p>
<p>The subject shall be on introducing Category Theory from the bottom up,
in a manner digestible for Computer Science Undergraduates who have seen
Haskell and been left wanting more from that contact.</p>
<p>And thus comes my question to you all: what would you like to see in
such a course? Is there any advice you want to give me on how to make
the course awesome?</p>
<p>The obvious bits are obvious. I shall have to discuss categories,
functors, (co)products, (co)limits, monads, monoids, adjoints, natural
transformations, the Curry-Howard isomorphism, the Hom-Tensor
adjunction, categorical interpretation of data types. And all of it with
explicit reference to how all these things influence Haskell, as well as
plenty of mathematical examples.</p>
<p>But what ideas can you give me to make this greater than I'd make it on
my own?</p>
Guess the plots!2009-06-03T17:30:00+02:00Michitag:blog.mikael.johanssons.org,2009-06-03:guess-the-plots.html<p>What do these depict?</p>
<div class="line-block">
<div class="line"><img alt="image0" src="http://blog.mikael.johanssons.org/wp-content/quizplot1.png" /></div>
<div class="line-block">
<div class="line"><img alt="image1" src="http://blog.mikael.johanssons.org/wp-content/quizplot2.png" /></div>
</div>
</div>
<p>Here are two others. Different data source, different point in time, but
what <em>are</em> they?</p>
<div class="line-block">
<div class="line"><img alt="image2" src="http://blog.mikael.johanssons.org/wp-content/quiz3.png" /></div>
<div class="line-block">
<div class="line"><img alt="image3" src="http://blog.mikael.johanssons.org/wp-content/quiz4.png" /></div>
</div>
</div>
<p>They are all linked in pairs - one coloured and one black linked
together. They are not sports related. And they are taken from real
world data. The colours are relevant and constitute a hint in their own
right.</p>
Mapping zipcodes in R2009-05-13T15:51:00+02:00Michitag:blog.mikael.johanssons.org,2009-05-13:mapping-zipcodes-in-r.html<p>I started fiddling around with R again, and ended up playing with a
zipcode database.</p>
<p>So, first I downloaded the zipcode database at <a class="reference external" href="http://www.mappinghacks.com/data">Mapping
Hacks</a>, and unpacked the zipfile in
my working directory.</p>
<p>Then, I loaded the data into R</p>
<pre class="literal-block">
> zips <- read.table("zipcode.csv",sep=",",quote="\"",header=TRUE)
> names(zips)
[1] "zip" "city" "state" "latitude" "longitude"
[6] "timezone" "dst"
</pre>
<p>So, now I have an R data frame containing a lot of US cities, their
geographical coordinates, and their zip codes. So we can start playing
with the plot command! After rooting around a bit, I ended up settling
on the smallest footprint plot dot I could make R produce, by setting
the option pch=20 in the plot options. Hence, I ended up with a command
basically like this:</p>
<pre class="literal-block">
> plot(zips$longitude,zips$latitude,type="p",col=((zips$zip/10000)%%10)+1,pch=20,axes=FALSE,xlab="",ylab="",cex=0.1)
</pre>
<div class="line-block">
<div class="line">where the +1 after the modulus is to make even 0-values plot, and the
cex parameter sets the point size to something small and pretty.</div>
<div class="line-block">
<div class="line"><a class="reference external" href="http://blog.mikael.johanssons.org/wp-content/usZip1.png"><img alt="First digit of the USPS zip code" src="http://blog.mikael.johanssons.org/wp-content/usZip1.png" /></a></div>
</div>
</div>
<p>We can continue this, tweaking the divisor to extract all the other
digits of the zip code, and we end up getting:</p>
<pre class="literal-block">
> plot(zips$longitude,zips$latitude,type="p",col=((zips$zip/1000)%%10)+1,pch=20,axes=FALSE,xlab="",ylab="",cex=0.1)
</pre>
<div class="line-block">
<div class="line">and the result</div>
<div class="line-block">
<div class="line"><a class="reference external" href="http://blog.mikael.johanssons.org/wp-content/usZip2.png"><img alt="Second digit of the USPS zip code" src="http://blog.mikael.johanssons.org/wp-content/usZip2.png" /></a></div>
</div>
</div>
<pre class="literal-block">
> plot(zips$longitude,zips$latitude,type="p",col=((zips$zip/100)%%10)+1,pch=20,axes=FALSE,xlab="",ylab="",cex=0.1)
</pre>
<div class="line-block">
<div class="line">and the result</div>
<div class="line-block">
<div class="line"><a class="reference external" href="http://blog.mikael.johanssons.org/wp-content/usZip3.png"><img alt="Third digit of the USPS zip code" src="http://blog.mikael.johanssons.org/wp-content/usZip3.png" /></a></div>
</div>
</div>
<pre class="literal-block">
> plot(zips$longitude,zips$latitude,type="p",col=((zips$zip/10)%%10)+1,pch=20,axes=FALSE,xlab="",ylab="",cex=0.1)
</pre>
<div class="line-block">
<div class="line">and the result</div>
<div class="line-block">
<div class="line"><a class="reference external" href="http://blog.mikael.johanssons.org/wp-content/usZip4.png"><img alt="Fourth digit of the USPS zip code" src="http://blog.mikael.johanssons.org/wp-content/usZip4.png" /></a></div>
</div>
</div>
<p>and finally</p>
<pre class="literal-block">
> plot(zips$longitude,zips$latitude,type="p",col=((zips$zip/1)%%10)+1,pch=20,axes=FALSE,xlab="",ylab="",cex=0.1)
</pre>
<div class="line-block">
<div class="line">and the result</div>
<div class="line-block">
<div class="line"><a class="reference external" href="http://blog.mikael.johanssons.org/wp-content/usZip5.png"><img alt="Fifth digit of the USPS zip code" src="http://blog.mikael.johanssons.org/wp-content/usZip5.png" /></a></div>
</div>
</div>
<p>And then, of course, we can zoom in on data too. So we can do things
like extracting Californian zip codes</p>
<pre class="literal-block">
> cazips <- zips[zips$state == "CA",]
> plot(cazips$longitude,cazips$latitude,type="p",col=((cazips$zip/1000)%%10)+1,pch=20,axes=FALSE,xlab="",ylab="",cex=0.5)
</pre>
<div class="line-block">
<div class="line">to get</div>
<div class="line-block">
<div class="line"><a class="reference external" href="http://blog.mikael.johanssons.org/wp-content/caZip2.png"><img alt="Second digit of the Californian USPS zip code" src="http://blog.mikael.johanssons.org/wp-content/caZip2.png" /></a></div>
<div class="line">or, we could extract New York zip codes:</div>
</div>
</div>
<pre class="literal-block">
> nyzips <- zips[zips$state == "NY",]
> plot(nyzips$longitude,nyzips$latitude,type="p",col=((nyzips$zip/100)%%10)+1,pch=20,axes=FALSE,xlab="",ylab="",cex=0.5)
</pre>
<div class="line-block">
<div class="line"><a class="reference external" href="http://blog.mikael.johanssons.org/wp-content/nyZip3.png"><img alt="Third digit of the New York USPS zip code" src="http://blog.mikael.johanssons.org/wp-content/nyZip3.png" /></a></div>
<div class="line-block">
<div class="line">or even extract, say, the zip codes starting with 10 or 11, covering
New York City and its surroundings, and take a closer look</div>
</div>
</div>
<pre class="literal-block">
> ny10zips <- nyzips[nyzips$zip<12000,]
> ny10zips <- ny10zips[ny10zips$zip>9999,]
> plot(ny10zips$longitude,ny10zips$latitude,type="p",col=((ny10zips$zip/100)%%10)+1,pch=20,axes=FALSE,xlab="",ylab="",cex=1.0)
</pre>
<p><a class="reference external" href="http://blog.mikael.johanssons.org/wp-content/ny10Zip3.png"><img alt="Third digit of the USPS zip codes 10xxx and 11xxx" src="http://blog.mikael.johanssons.org/wp-content/ny10Zip3.png" /></a></p>
Gröbner bases for Operads — or what I did in my vacation2009-05-08T18:28:00+02:00Michitag:blog.mikael.johanssons.org,2009-05-08:grobner-bases-for-operads-or-what-i-did-in-my-vacation.html<p>This post is to give you all a very swift and breakneck introduction to
Gröbner bases; not trying to be a nice and soft introduction, but much
more leading up to announcing the latest research output from me.</p>
<p>Recall how you would run the Gaussian algorithm on a matrix. You'd take
the leftmost upmost non-zero entry, divide its row by its value, and
then use that row to eliminate anything in the corresponding column.</p>
<p>Once we have the matrix in echelon form, we can then do lots of things
with it. Importantly, we can use substitution using the leading terms to
do equation system solving.</p>
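<p>For concreteness, here is the elimination loop just described as a short Haskell sketch over the rationals (naive, and only meant for small examples):</p>

```haskell
-- Bring a matrix into row echelon form: find the leftmost usable pivot,
-- scale its row so the pivot becomes 1, clear the rest of the column,
-- and recurse on the next column.
echelon :: [[Rational]] -> [[Rational]]
echelon = go 0
  where
    go _ [] = []
    go col rows@(r:_)
      | col >= length r = rows
      | otherwise =
          case break (\row -> row !! col /= 0) rows of
            (_, [])          -> go (col + 1) rows          -- no pivot in this column
            (above, p:below) ->
              let pivot     = map (/ (p !! col)) p         -- normalize pivot row to 1
                  clear row = zipWith (\a b -> a - (row !! col) * b) row pivot
              in pivot : go (col + 1) (map clear (above ++ below))

main :: IO ()
main = mapM_ print (echelon [[2, 4, 6], [1, 1, 1]])
```

<p>The same skeleton - pick a leading term, normalize, reduce the rest against it - is what gets generalized in everything below.</p>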
<p>The starting point for the theory of Gröbner bases was that the same
method could be used - with some modification - to produce something
from a bunch of polynomials that ends up being as useful as a
row-reduced echelon form.</p>
<p>Basically, the central theorem-definition of Gröbner bases (by
Buchberger, named for his thesis advisor Gröbner), says that a Gröbner
basis for some ideal in some polynomial ring is a bunch of generators
for that ideal such that if we do polynomial division of any element of
the polynomial ring by the generators, one after the other,
until no more divisions can be performed, then what we get out of it
all is uniquely determined.</p>
<p>This need not necessarily be the case, even to begin with - we could get
weird loops and various kinds of bad behaviour that screws up the
concept of dividing, with remainder, by a whole bunch of polynomials.
Having a Gröbner basis means this is no longer a problem.</p>
<p>The important bit of the theorem is that we can GET a Gröbner basis by
just iteratively trying out combinations of the generators, trying to
find overlaps of the leading terms, and generating "bad examples". Any
bad example that doesn't get completely reduced to 0 by division by the
generators is also needed to complete the Gröbner basis, and by
adjoining it we grow our generating set, but not the things it
generates. And the method works by growing the generating set until
anything we can build out of two of the generators really will reduce
completely to 0.</p>
<p>And, the central theorem says, if THIS works, then reduction works in
general, and we have a tool as good for solving systems of polynomial
equations as the echelon form is for linear equation systems.</p>
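<p>A tiny commutative example of these "bad examples": take [tex]g_1 = x^2 - y[/tex] and [tex]g_2 = xy - 1[/tex] with the lexicographic order [tex]x > y[/tex]. The leading terms [tex]x^2[/tex] and [tex]xy[/tex] overlap in [tex]x^2y[/tex], which gives the S-polynomial</p>
<div class="line-block">
<div class="line">[tex]S(g_1,g_2) = y\cdot g_1 - x\cdot g_2 = x - y^2[/tex]</div>
</div>
<p>Neither leading term divides any term of [tex]x - y^2[/tex], so it does not reduce to zero; we adjoin it to the generating set and keep going until every such S-polynomial reduces to zero.</p>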
<div class="section" id="losing-commutativity">
<h2>Losing commutativity</h2>
<p>The next interesting step is to look at non-commutative polynomial
rings. Here, we suddenly have a deep theoretic issue popping up. We know
that the <em>word problem</em> for monoids - i.e. whether a specific string can
be reduced with rewriting rules to an empty string - is unsolvable in
general. It hooks up to Turing machines and the limits of theoretical
computer science, but in essence we can encode problems as Gröbner basis
computations for non-commutative algebras that we know cannot possibly
be solved in finite time.</p>
<p>So we cannot hope for the situation to be as good as for commutative
algebras. However, there is a theorem floating around - the Diamond
lemma by Bergman - that says that if we DO get a Gröbner basis
computation that halts in finite time, then all the good things that we
had in the commutative case - such as reductions modulo the generators
being well defined - hold for the things we've computed. In other words,
the only thing that could go wrong would be that the computation of the
Gröbner basis doesn't finish in finite time.</p>
</div>
<div class="section" id="operads">
<h2>Operads</h2>
<p>Now, what I really wanted to talk about was <em>operads</em>.</p>
<p>Here, an operad is a collection {O(n) : n ≥ 1} of vector spaces with
permutations attached to them. Hence, for each n, there is a map
S<sub>n</sub> → Hom(O(n),O(n)), or in other words, we can apply any
permutation of n things to any element in the component O(n) and get
something back out of it.</p>
<div class="line-block">
<div class="line">Furthermore, an operad O = {O(n)} has defined on it structure
operations. These behave like the composition of multilinear functions
- so we have a way of plugging one function into another:</div>
<div class="line-block">
<div class="line">f(a1,a2,...,g(ai,...,aj),...,an)</div>
<div class="line">and this composition is associative, so the order we figure out some
sequence of compositions doesn't matter.</div>
</div>
</div>
<p>These gadgets show up in modern approaches to universal-algebra-like
questions, and also all over the place in topology; some of the earliest
instances were from homotopy theory.</p>
<p>Recently, Dotsenko and Khoroshkin released a paper on <a class="reference external" href="http://arxiv.org/abs/0812.4069">Gröbner bases for
operads</a>. In the paper, they figure
out a way to find the kind of canonical and ordered basis that you need
in order to mimic the whole workflow of Gröbner bases above; and
specified how one should go about producing a Diamond Lemma for operads.
It turns out that Gröbner bases would be useful to prove operads to be
Koszul - something I won't discuss here, but which is important in the
operad theory context.</p>
<p>Basically, instead of working in a polynomial ring on some set of
variables, we start out with our variables in one of these graded sets
{V(n)}. Then we can build rooted trees, whose internal vertices of
degree n+1 are labeled by elements from V(n).</p>
<p>The vector space spanned by all such trees forms an operad where the
composition operation works by taking a tree and attaching the root to
the leaf numbered by whatever position we need to compose at.</p>
<p>The resulting construction is the free operad on the generating set and
takes the role that the polynomial ring had in the previous examples.
And what Dotsenko and Khoroshkin pointed out was that if we restrict the
kinds of actions we allow from permutations on these trees somewhat, we
end up with something that has exactly one representative that fits in
the restricted context for each tree that occurs as a basis element of
the free operad.</p>
<p>So we can impose some sort of ordering on these trees, and use them to
mimic the Diamond lemma.</p>
<p>Indeed, what we end up doing is forming overlaps between leading
terms: we find trees into which the leading-term trees of a pair of
operad elements both embed, and use these overlaps to build the same
kind of bad cases we need to test in Buchberger's algorithm.</p>
<p>And again, it turns out that while we may not always get an answer
within finite time, if we're lucky then the answer we DO get has all the
properties we could dream of. And not only that - all the previous kinds
of Gröbner bases embed as special cases of doing it this way.</p>
<p>I heard of this, and got my hands on the paper, when I first arrived in
CIRM in Luminy, outside Marseille, for 2 weeks of operad theory with a
master's course and a conference on the subject. And I got so excited -
once upon a time this was essentially my proposal for a PhD thesis project
- that I decided to sit down and code the whole thing up right away!</p>
<p>And code away I did. Once the two weeks were gone, with the valuable
help from Vladimir Dotsenko and Eric Hoffbeck, I had ended up with a
working implementation in Haskell of the whole paradigm. It's now
<a class="reference external" href="http://hackage.haskell.org/cgi-bin/hackage-scripts/package/Operads">available from the
HackageDB</a>,
and runs at least in GHC 6.8 and 6.10:</p>
<pre class="literal-block">
ghci -cpp Math.Operad
GHCi, version 6.10.1: http://www.haskell.org/ghc/ :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer ... linking ... done.
Loading package base ... linking ... done.
[1 of 6] Compiling Math.Operad.PPrint ( Math/Operad/PPrint.hs, interpreted )
[2 of 6] Compiling Math.Operad.OrderedTree ( Math/Operad/OrderedTree.hs, interpreted )
[3 of 6] Compiling Math.Operad.Map ( Math/Operad/Map.hs, interpreted )
[4 of 6] Compiling Math.Operad.MapOperad ( Math/Operad/MapOperad.hs, interpreted )
[5 of 6] Compiling Math.Operad.OperadGB ( Math/Operad/OperadGB.hs, interpreted )
[6 of 6] Compiling Math.Operad ( Math/Operad.hs, interpreted )
Ok, modules loaded: Math.Operad, Math.Operad.OperadGB, Math.Operad.OrderedTree, Math.Operad.PPrint, Math.Operad.MapOperad, Math.Operad.Map.
*Math.Operad> let v = corolla 2 [1,2]
Loading package mtl-1.1.0.2 ... linking ... done.
*Math.Operad> let g1t1 = nsCompose 1 v v
Loading package syb ... linking ... done.
Loading package array-0.2.0.0 ... linking ... done.
Loading package containers-0.2.0.0 ... linking ... done.
*Math.Operad> let g1t2 = nsCompose 2 v v
*Math.Operad> let g2t2 = shuffleCompose 1 [1,3,2] v v
*Math.Operad> let g1 = (oet g1t1) + (oet g1t2) :: FreeOperad Integer
*Math.Operad> let g2 = (oet g2t2) - (oet g1t2) :: FreeOperad Integer
*Math.Operad> let ac = [g1,g2]
*Math.Operad> :set +s
*Math.Operad> let acGB = operadicBuchberger ac
(0.00 secs, 524996 bytes)
*Math.Operad> pP acGB
[
+1 % 1*m2(1,m2(2,m2(3,4))),
+1 % 1*m2(m2(1,2),3)
+1 % 1*m2(1,m2(2,3)),
+1 % 1*m2(m2(1,3),2)
+(-1) % 1*m2(1,m2(2,3))]
(0.41 secs, 55184352 bytes)
</pre>
<div class="line-block">
<div class="line">And we see here that with the particular set of generators given,
namely using m2 as the variable, and for a generating set, we pick</div>
<div class="line-block">
<div class="line">m2(m2(x1,x2),x3) + m2(x1,m2(x2,x3))</div>
<div class="line">and</div>
<div class="line">m2(m2(x1,x3),x2) - m2(x1,m2(x2,x3))</div>
</div>
</div>
<div class="line-block">
<div class="line">we get a Gröbner basis almost instantly containing additionally the
generator</div>
<div class="line-block">
<div class="line">m2(x1,m2(x2,m2(x3,x4)))</div>
</div>
</div>
</div>
1-manifolds and curves2009-05-02T12:10:00+02:00Michitag:blog.mikael.johanssons.org,2009-05-02:1-manifolds-and-curves.html<p>I have been painfully remiss in keeping this blog up and running lately.
I wholeheartedly blame the pretty intense travel schedule I've been on
for the last month and a half.</p>
<p>To get back into the game, I start things off with a letter from a
reader. Rodolfo Medina writes:</p>
<blockquote>
<p>Hallo, Michi:</p>
<div class="line-block">
<div class="line">surfing around in internet, looking for an answer to my question,
I fell into</div>
<div class="line-block">
<div class="line">your web site.</div>
</div>
</div>
<p>I'm looking for an answer to the following question:</p>
<div class="line-block">
<div class="line">my intuitive idea is that a one-dimensional connected topological
submanifold</div>
<div class="line-block">
<div class="line">of a topological space S should necessarily be the codomain of a
curve (if we</div>
<div class="line">define a curve to be a continuous map from an R interval into a
topological</div>
<div class="line">space).</div>
</div>
</div>
<div class="line-block">
<div class="line">Conversely, the codomain of an injective curve, defined in an open
R interval,</div>
<div class="line-block">
<div class="line">should necessarily be a one-dimensional topological submanifold
of S.</div>
</div>
</div>
<div class="line-block">
<div class="line">Do you think that's true?, and, if so, how could it be
demonstrated? The</div>
<div class="line-block">
<div class="line">difficulty of the first statement is to paste together all charts
so to create</div>
<div class="line">a unique homeomorfism.</div>
</div>
</div>
<div class="line-block">
<div class="line">Thanks for any reply</div>
<div class="line-block">
<div class="line">Rodolfo</div>
</div>
</div>
</blockquote>
<p>So, let's see if we can assemble an answer. I started writing an email
answer, which started ballooning way out of control; so after having
checked some details with a colleague, I actually have an answer.</p>
<p>The question is in two parts. The first is whether any connected
1-dimensional topological manifold is a curve, viz. an image of an open
interval under a continuous map.</p>
<p>This follows since the manifold is second countable, so we can pick a
basis for the topology where each piece looks like an open interval, and
just glue them together in order to find the curve parametrization.</p>
<p>The second is whether any image of an open interval is a topological
1-manifold.</p>
<div class="line-block">
<div class="line">For this, the answer is no. Consider the map illustrated by the
following picture:</div>
<div class="line"><br /></div>
</div>
<div class="line-block">
<div class="line">Note that this is non-self-intersecting since the loop never really
reaches the curve, it only ever comes infinitesimally close. However,
since it comes so close, any neighbourhood of the corresponding
meeting point will look something like this:</div>
<div class="line"><br /></div>
<div class="line-block">
<div class="line">and hence will never be homeomorphic to an interval.</div>
</div>
</div>
Applied knot theory2009-03-16T02:41:00+01:00Michitag:blog.mikael.johanssons.org,2009-03-16:applied-knot-theory.html<p>Tech note: All figures herewithin are produced in SVG. If you cannot see
them, I recommend you figure out how to view SVGs in your browser.</p>
<p>A few weeks ago, my friend radii was puzzling in his server hall. He
asked if it was possible to prove that what he wanted to do was
impossible, or if he had to remain with his gut feeling. I asked him,
and got the following explanation:</p>
<div class="line-block">
<div class="line">He had two strands of something ropelike, both fixed at large
furnishings at one end, and fixed in a fixed sized loop at the other.
He wanted to take these, and link them fast to each other in this
fashion:</div>
<div class="line"><br /></div>
</div>
<p>I started thinking about the problem, and am now convinced I can prove
the impossibility he asked for by basic techniques of knot theory. That
argument is what this blog post is about.</p>
<p>First observation is that due to the size of the fixtures, we can
essentially consider the endpoints fixed. Not only that, but since we
cannot thread the loops over the fixtures, there's no way to just stick
that loose end through any of the loops. So we can basically extract a
cube of space and require that all of our modifications be contained
completely within this cube, and then stick the endpoints just outside
that cube.</p>
<p>This is something called a <em>framed knot</em>, and a popular object of study.
My argument is going to manage to steer basically entirely clear of
this, though, and I'll just mention it as an extra property to remember.</p>
<div class="line-block">
<div class="line">The second observation is that the size of the loops is basically
irrelevant. So we can make the length of the single strand part as
small as we like, and as close to the fixed end as we like. Hence, for
all purposes, we essentially want to produce a link like this:</div>
<div class="line"><br /></div>
</div>
<p>And this is essentially the problem I will approach. Now, I'm going to
prove the impossibility by a method similar to how we prove that the
<a class="reference external" href="http://en.wikipedia.org/wiki/Trefoil_knot">trefoil</a> and
<a class="reference external" href="http://en.wikipedia.org/wiki/Unknot">unknot</a> are different; using
three-colorability.</p>
<p>Equality of knots is basically equivalent to being able to go from the
drawing of one knot to the drawing of the other by the <a class="reference external" href="http://en.wikipedia.org/wiki/Reidemeister_move">Reidemeister
moves</a>. I warmly
recommend the Wikipedia page for a first glance - these moves turn out
to be absolutely central for constructing knot invariants.</p>
<div class="line-block">
<div class="line">For the trefoil argument, we start by discussing colorations of
knots. Given three colors: red, green and blue, we may color each
strand in a knot diagram with one of the colors. We require that at
every crossing, the three strands involved are either all the same
color or all different colors. Obviously, every knot, including the
unknot, can be colored with all strands the same color. And it turns
out that the Reidemeister moves are compatible with knot coloration.
The proof of this is entirely pictorial; up to a permutation of the
colors, the following are all the options we have:</div>
<div class="line-block">
<div class="line">Reidemeister I:</div>
<div class="line"><br /></div>
</div>
</div>
<div class="line-block">
<div class="line">Reidemeister II:</div>
<div class="line"><br /></div>
</div>
<div class="line-block">
<div class="line">Reidemeister III:</div>
<div class="line"><br /></div>
<div class="line"><br /></div>
<div class="line"><br /></div>
</div>
<p>The take-home message is that for every crossing configuration, with a
fixed frame outside, all Reidemeister moves respect the colors at the
frame.</p>
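<p>The coloring rule can also be checked mechanically. Here is a minimal
sketch in Python (my own illustration, not code from this post): with the
three colors encoded as {0, 1, 2}, the condition "all the same or all
different" at a crossing with over-strand o and under-strands u1, u2 is
exactly 2o ≡ u1 + u2 (mod 3). Counting the admissible colorings of a
diagram then separates the trefoil from the unknot:</p>

```python
from itertools import product

def colorings(n_arcs, crossings):
    """Count colorings of a knot diagram with colors {0, 1, 2}.

    crossings is a list of (over, under1, under2) arc indices; the
    condition "all the same or all different" at a crossing is
    equivalent to 2*over == under1 + under2 (mod 3).
    """
    count = 0
    for col in product(range(3), repeat=n_arcs):
        if all((2 * col[o] - col[u1] - col[u2]) % 3 == 0
               for o, u1, u2 in crossings):
            count += 1
    return count

# Standard trefoil diagram: three arcs, three crossings.
trefoil = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]

print(colorings(3, trefoil))  # 9: the trefoil has non-constant colorings
print(colorings(1, []))       # 3: the unknot only has the constant ones
```

<p>Since the counts differ and Reidemeister moves preserve colorings, the
trefoil and the unknot cannot be the same knot - the same shape of
argument as the one applied to radii's problem below.</p>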
<div class="line-block">
<div class="line">So, how may we apply this to our problem? We still have the
characterization of knot equality by finding sequences of Reidemeister
moves. So we'll start with the unknotted version of radii's problem,
and a chosen coloration:</div>
<div class="line"><br /></div>
</div>
<p>Notice that in the diagrams above, the only way to introduce a new color
is to already have at least two colors present. Hence, any Reidemeister
moves made on this untied configuration will stay entirely green.</p>
<div class="line-block">
<div class="line">However, the target configuration does have this coloration:</div>
<div class="line"><br /></div>
<div class="line-block">
<div class="line">Notice that the fixture coloration is identical to the one in the
unknotted version, but that this version has all colors represented.
However, since the Reidemeister moves cannot possibly introduce a new
color starting in the untied version, they can never arrive at this
particular configuration.</div>
</div>
</div>
Picking fights over religion2009-02-04T10:16:00+01:00Michitag:blog.mikael.johanssons.org,2009-02-04:picking-fights-over-religion.html<p>I suspect this will be a flame war magnet. On the other hand I feel
compelled to write it.</p>
<p>First a bit of backstory. My wife enjoys, often and with engagement,
discussing theology with her new friends. One of them, a pentecostal
christian, gave her the book <em>I don't have enough faith to be an
atheist</em> by Norman Geisler and Frank Turek. I picked it up while
visiting her, looking for some book to read, and have forced myself to
read through most of it since.</p>
<p>The authors try to <em>prove</em> the correctness of Christianity over all
other religious attitudes, but most importantly, prove that Christians
are right and Atheists are wrong. And the way they do this is oftentimes
insulting, very often ignorant of how to deal with the logical tools
they try to use, and constantly reeking of a lack of objectivity in
their purportedly objective exposition.</p>
<p>In this post, I'll point out a few passages which really grated when I
read them, or which came across as just purely obnoxious, and discuss
why I take such issue with them.</p>
<div class="section" id="bad-background-research">
<h2>Bad background research</h2>
<blockquote>
I later learned that my expectations were too high for the modern
university. The term "university" is actually a composite of the
words "unity" and "diversity". When one attends a university, he is
supposed to be guided in the quest to find unity in diversity -
namely how all the diverse fields of knowledge (the arts,
philosophy, the physical sciences, mathematics, etc) fit together to
provide a unified picture of life. A tall task indeed, but one that
the modern university has not only abandoned, but reversed. Instead
of <em>uni</em>versities, we now have <em>plura</em>versities, institutions
that deem every viewpoint, no matter how ridiculous, just as valid
as any other - that is, except the viewpoint that just one religion
or worldview could be true. That's the one viewpoint considered
intolerant and bigoted on most college campuses.</blockquote>
<p>(emphasis as printed, G&T p 19)</p>
<p>First off, the stated etymology is false. The <a class="reference external" href="http://www.etymonline.com/index.php">Online Etymology
Dictionary</a> states</p>
<blockquote>
c.1300, "institution of higher learning," also "body of persons
constituting a university," from Anglo-Fr. université</blockquote>
</div>
Homological Inclusion-Exclusion and the Mayer-Vietoris sequence2009-01-09T22:44:00+01:00Michitag:blog.mikael.johanssons.org,2009-01-09:homological-inclusion-exclusion-and-the-mayer-vietoris-sequence.html<p>This blogpost is inspired to a large part by comments made by Rob
Ghrist, in connection to his talks on using the Euler characteristic
integration theory to count targets detected by sensor networks.</p>
<div class="line-block">
<div class="line">He pointed out that the underlying principle inducing the rule</div>
<div class="line-block">
<div class="line">[tex]\chi(A\cup B) = \chi(A)+\chi(B)-\chi(A\cap B)[/tex]</div>
<div class="line">goes under many names, among those \emph{Inclusion-Exclusion},
favoured among computer scientists (and combinatoricists). He also
pointed out that the origin of this principle is the Mayer-Vietoris
long exact sequence</div>
<div class="line">[tex]\cdots\to H_{n}(A\cap B)\to H_{n}(A)\oplus H_{n}(B)\to
H_{n}(A\cup b)\to\cdots[/tex]</div>
</div>
</div>
<p>In this blog post, I'd like to give more meat to this assertion as well
as point out how the general principle of Inclusion-Exclusion for finite
sets follows immediately from Mayer-Vietoris.</p>
<div class="section" id="inclusion-exclusion-and-the-passage-from-two-sets-to-many">
<h2>Inclusion-Exclusion, and the passage from two sets to many</h2>
<div class="line-block">
<div class="line">The basic principle of Inclusion-Exclusion says that if we have two
sets, [tex]A[/tex] and [tex]B[/tex], then the following relationship
of cardinalities holds:</div>
<div class="line-block">
<div class="line">[tex]|A\cup B| = |A| + |B| - |A\cap B|[/tex]</div>
</div>
</div>
<p>A first proof would be performed by referring to an appropriate Venn
diagram, or by pointing out that if we try to count all the elements
of [tex]A\cup B[/tex] by adding the counts for [tex]A[/tex] and
[tex]B[/tex], then we've overcounted the elements of [tex]A\cap
B[/tex], and thus need to subtract these.</p>
<div class="line-block">
<div class="line">Now, this is good and convincing for the situation where we have just
two sets, but what if we had a whole family? What if we had
[tex]A_{1},\dots,A_{n}[/tex]? Well, our old friend the complete
induction rides to our rescue. Suppose we had already proven that for
[tex]n-1[/tex] sets [tex]A_{i}[/tex], the following formula holds:</div>
<div class="line-block">
<div class="line">[tex]\left|\bigcup_{i\in\{1,\dots,n-1\}} A_{i}\right| =</div>
<div class="line">\sum_{k=1}^{n-1} (-1)^{1+k}\sum_{S\subseteq\{1,\dots,n-1\},
|S|=k}\left|\bigcap_{s\in S} A_{s}\right|[/tex]</div>
</div>
</div>
<p>If you think this formula is imposing, let me show you the first few
ways it works out, so that you can get a feeling for it:</p>
<p>[tex]|A_{1}\cup A_{2}| = |A_{1}| + |A_{2}| - |A_{1}\cap
A_{2}|[/tex]</p>
<div class="line-block">
<div class="line">[tex]</div>
<div class="line-block">
<div class="line">\begin{multline*}</div>
<div class="line">|A_{1}\cup A_{2}\cup A_{3}| = \\</div>
<div class="line">|A_{1}|+|A_{2}|+|A_{3}|-|A_{1}\cap A_{2}|-|A_{1}\cap
A_{3}|-|A_{2}\cap A_{3}|+\\</div>
<div class="line">|A_{1}\cap A_{2}\cap A_{3}|</div>
<div class="line">\end{multline*}[/tex]</div>
</div>
</div>
<p>Here it starts becoming apparent why it is called the
Inclusion-Exclusion formula: we include all elements from all sets, but
then we've counted the elements of the pairwise intersections too many
times, so we exclude those; but then we've counted the elements of the
triple intersections too few times, so we re-include those; but then
we've counted the elements of the quadruple intersections too many
times - and on it goes.</p>
<p>If our assumption holds, we can consider what the appropriate size of
the union of [tex]n[/tex] different sets should be. We can pick one set,
say [tex]A_{n}[/tex], and treat it separately. Then, certainly, the
union of the rest has size given by the formula, by our induction
assumption. So, consider what happens when we add in the elements in
[tex]A_{n}[/tex]. We then overcount all elements in any of the
intersections [tex]A_{i}\cap A_{n}[/tex], so we have to subtract
those. But we then have undercounted all elements in any intersection on
the form [tex]A_{i}\cap A_{j}\cap A_{n}[/tex], so we need to add
those back in. And so we go on, and for each intersection that occurred
in the formula for [tex]n-1[/tex] sets, we find ourselves forced to add
one new term, of the opposite sign, that counts the intersection of that
old term with [tex]A_{n}[/tex].</p>
<p>Thus, the new formula takes on the exact guise of the general formula we
wanted to prove, since the terms involving [tex]A_{n}[/tex] are
precisely indexed, in that formula, by the terms excluding
[tex]A_{n}[/tex], but with opposite signs.</p>
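<p>The general formula proved by this induction can also be checked
mechanically. Here is a minimal sketch in Python (my own illustration,
not from the original post) that evaluates the alternating sum over
non-empty index subsets and compares it against the actual size of the
union:</p>

```python
from itertools import combinations

def inclusion_exclusion(sets):
    """Evaluate the alternating sum over non-empty subfamilies:
    sum over k of (-1)^(k+1) * sum of |intersection| over k-subsets."""
    total = 0
    for k in range(1, len(sets) + 1):
        for subfamily in combinations(sets, k):
            total += (-1) ** (k + 1) * len(set.intersection(*subfamily))
    return total

sets = [{1, 2, 3, 4}, {3, 4, 5}, {1, 4, 5, 6}, {2, 6, 7}]
print(inclusion_exclusion(sets))  # 7
print(len(set().union(*sets)))    # 7, the same count
```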
</div>
<div class="section" id="encoding-set-sizes-homologically">
<h2>Encoding set sizes homologically</h2>
<p>So, now that we know that pairwise Inclusion-Exclusion implies the
general Inclusion-Exclusion, let us take a look on how we can express
Inclusion-Exclusion with homology. We note first of all that
[tex]H_{0}(X)[/tex] counts the number of path-connected components of a
space [tex]X[/tex]. This follows easily from a few properties. First, a
basis for the [tex]0[/tex]-chains of [tex]X[/tex] is given by the points
in [tex]X[/tex]. Second, two [tex]0[/tex]-chains are homologous
precisely if there is a [tex]1[/tex]-chain whose boundary is the
difference between those [tex]0[/tex]-chains. The boundary of a path is
the difference of its endpoints, so we end up with two basis elements
being homologous precisely if there is a path connecting them.</p>
<p>Thus, two [tex]0[/tex]-chain elements are homologous precisely if
they're in the same path-component of [tex]X[/tex]. So, the homology
classes of [tex]0[/tex]-chains are precisely the path-connected
components.</p>
<p>So, the [tex]0[/tex]-degree homology classes count path-components. How
do we use that? Well ... if we consider a finite set [tex]X[/tex] with
the discrete topology - i.e. open sets are single points and any unions
of single point sets, then a path is a continuous function
[tex][0,1]\to X[/tex], i.e. one where the preimage of any open set is
open. Thus, the preimage of any point is an open set. Suppose a path
contains more than one point. Then we can partition the interval
[tex][0,1][/tex] into two disjoint open sets, by taking one set to be
the preimage of one of the points, and the other set to be the preimage
of everything else. However, the interval [tex][0,1][/tex] is connected,
which means that if there are two such open sets, one of them has to be
empty.</p>
<p>Thus, the path-connected components of a discrete topological space are
the points themselves. So if [tex]X[/tex] is discrete, then
[tex]H_{0}(X)[/tex] is free of rank [tex]|X|[/tex]; that is,
[tex]H_{0}(X)\cong\mathbb{Z}^{|X|}[/tex].</p>
</div>
<div class="section" id="long-exact-sequences-in-homology-and-the-mayer-vietoris-sequence">
<h2>Long exact sequences in homology and the Mayer-Vietoris sequence</h2>
<div class="line-block">
<div class="line">It is a theorem from homological algebra that if we have a short exact
sequence of chain complexes</div>
<div class="line-block">
<div class="line">[tex]0\to A_{*}\to B_{*}\to C_{*}\to 0[/tex]</div>
<div class="line">then there is a long exact sequence in homology given by</div>
<div class="line">[tex]\cdots\to H_{n}(A_{*})\to H_{n}(B_{*})\to
H_{n}(C_{*})\to H_{n-1}(A_{*})\to\cdots[/tex]</div>
</div>
</div>
<p>Almost all interesting long exact sequences in topology are special
cases of this particular fact. We will derive one interesting sequence:
the Mayer-Vietoris sequence from this.</p>
<p>Take a topological space [tex]X[/tex]. The singular chain complex
[tex]C_{*}(X)[/tex] forms a chain complex with [tex]C_{n}(X)[/tex]
the free abelian group of [tex]n[/tex]-chains and the differential given
by the usual formula.</p>
<div class="line-block">
<div class="line">Suppose we have two spaces: [tex]X[/tex] and [tex]Y[/tex]. Then any
[tex]n[/tex]-chain on [tex]X\cap Y[/tex] is an [tex]n[/tex]-chain on
[tex]X[/tex] and an [tex]n[/tex]-chain on [tex]Y[/tex]. So we get an
inclusion map [tex]C_{n}(X\cap Y)\to C_{n}(X)\oplus
C_{n}(Y)[/tex] by [tex]\sigma\mapsto(\sigma,-\sigma)[/tex]. We
choose this sign instead of the image [tex](\sigma,\sigma)[/tex] to
make the next step cleaner. It is an injection because
[tex]C_{n}[/tex] is a free abelian group, and the injections</div>
<div class="line-block">
<div class="line">[tex]X\leftarrow X\cap Y\rightarrow Y[/tex] are set maps on bases
for the corresponding chain groups.</div>
</div>
</div>
<p>Compatibility of this map with the differential is almost as easy. If we
first apply the differential and then map into the direct sum, we get
the map [tex]\sigma\mapsto d\sigma\mapsto(d\sigma,-d\sigma)[/tex].
If we instead first map into the direct sum, and then apply the
differentials, we get the map
[tex]\sigma\mapsto(\sigma,-\sigma)\mapsto(d\sigma,-d\sigma)[/tex].
These are equal, and thus the map we've constructed is a chain map.</p>
<p>So, [tex]C_{*}(X\cap Y)\to C_{*}(X)\oplus C_{*}(Y)[/tex] is an
injective map of chain complexes.</p>
<p>If we now have a pair of chains, we can construct a chain on the union
of the two spaces. An [tex]n[/tex]-chain on [tex]X[/tex] is
automatically an [tex]n[/tex]-chain on [tex]X\cup Y[/tex]. Similarly,
an [tex]n[/tex]-chain on [tex]Y[/tex] is automatically an
[tex]n[/tex]-chain on [tex]X\cup Y[/tex]. So both [tex]C_{n}(X)[/tex]
and [tex]C_{n}(Y)[/tex] inject into [tex]C_{n}(X\cup Y)[/tex] by the
same kind of argument as with the injection [tex]C_{n}(X\cap Y)\to
C_{n}(X)[/tex].</p>
<p>The injections we get this way can be put together to a map
[tex]C_{n}(X)\oplus C_{n}(Y)\to C_{n}(X\cup Y)[/tex] by, for
instance, [tex](\sigma,\tau)\mapsto\sigma+\tau[/tex]. I am going to
fudge over the issue of whether the [tex]n[/tex]-chains on [tex]X[/tex]
and [tex]Y[/tex] form a generating set for the [tex]n[/tex]-chains on
[tex]X\cup Y[/tex], and just take it on faith that this works. The
details are not particularly difficult - they are just obnoxiously long
and annoying.</p>
<p>Now, given that we believe we can get a generating set by taking the
union of the bases for the [tex]n[/tex]-chains on [tex]X[/tex] and
[tex]Y[/tex], then this means that the map I gave above is surjective.
Indeed, a basis element [tex]\sigma\in C_{n}(X)[/tex] is hit by the
element [tex](\sigma,0)[/tex], and a basis element [tex]\tau\in
C_{n}(Y)[/tex] is hit by the element [tex](0,\tau)[/tex].</p>
<div class="line-block">
<div class="line">This argument proves exactness at the first and last term of the short
exact sequence</div>
<div class="line-block">
<div class="line">[tex]0\to C_{*}(X\cap Y)\to C_{*}(X)\oplus C_{*}(Y)\to
C_{*}(X\cup Y)\to 0[/tex]</div>
<div class="line">and it remains to prove exactness at the middle term. So we need to
prove that</div>
<div class="line">[tex]\mathrm{ker}((\sigma,\tau)\mapsto\sigma+\tau) =
\mathrm{im}(\sigma\mapsto(\sigma,-\sigma))[/tex]</div>
</div>
</div>
<div class="line-block">
<div class="line">Now, since [tex]\sigma+(-\sigma)=0[/tex], the inclusion of the image
in the kernel is obvious. So, suppose now that [tex]\sigma\in
C_{n}(X)[/tex], [tex]\tau\in C_{n}(Y)[/tex] and
[tex]\sigma+\tau=0[/tex]. Then, in [tex]C_{n}(X\cup Y)[/tex], we
know that [tex]\sigma=-\tau[/tex]. But then [tex]\tau\in
C_{n}(X)[/tex] since [tex]\sigma\in C_{n}(X)[/tex], and similarly
[tex]\sigma\in C_{n}(Y)[/tex] since [tex]\tau\in C_{n}(Y)[/tex]. So
[tex]\sigma,\tau\in C_{n}(X\cap Y)[/tex] and
[tex]\tau=-\sigma[/tex]. But then this kernel element really is in
the image of the inclusion we gave.</div>
</div>
<div class="line-block">
<div class="line">So this is a short exact sequence of chain complexes. It follows from
the theorem of long exact sequences in homology that there is an
associated long exact sequence, called the Mayer-Vietoris sequence:</div>
<div class="line-block">
<div class="line">[tex]\cdots\to H_{n}(X\cap Y)\to H_{n}(X)\oplus H_{n}(Y)\to
H_{n}(X\cup Y)\to H_{n-1}(X\cap Y)\to\cdots[/tex]</div>
</div>
</div>
</div>
<div class="section" id="putting-it-all-together">
<h2>Putting it all together</h2>
<p>Suppose [tex]X[/tex] is a discrete topological space. Then any
continuous map from a [tex]k[/tex]-simplex [tex]\Delta^{k}[/tex] to
[tex]X[/tex] is by necessity constant at one point. So all the singular
simplices in [tex]X[/tex] are constant maps, and thus a basis for any
[tex]n[/tex]-chain group is given by just the points in [tex]X[/tex] as
such. The differential acts on such a constant map by sending it to an
alternating sum of [tex]n+1[/tex] copies of the same constant map, one
dimension lower; this sum is zero when [tex]n[/tex] is odd and a single
copy when [tex]n[/tex] is even. So the differential vanishes at every
other point in the chain complex, and is an isomorphism on the rest. But
then we have exactly two possibilities for a homology computation:</p>
<p>Case 1: [tex]C_{n}(X)\overset1\to C_{n-1}(X)\overset0\to
C_{n-2}(X)[/tex]</p>
<p>Here, the kernel is the entire chain group. But so is the image. Thus
the homology group is trivial.</p>
<p>Case 2: [tex]C_{n}(X)\overset0\to C_{n-1}(X)\overset1\to
C_{n-2}(X)[/tex]</p>
<p>Here, the kernel is trivial, but then again, so is the image. Thus,
again, so is the homology group.</p>
<p>The only part not really covered by this is the homology group
[tex]H_{0}(X)[/tex], since this is computed from a diagram on the shape
[tex]C_{1}(X)\overset0\to C_{0}(X)\to0[/tex]. So
[tex]H_{0}(X)[/tex] behaves like we showed above, and is the only
non-trivial homology group of the discrete space.</p>
<div class="line-block">
<div class="line">But then, the only bit that remains non-zero in the Mayer-Vietoris
sequence is</div>
<div class="line-block">
<div class="line">[tex]0\to H_{0}(X\cap Y)\to H_{0}(X)\oplus H_{0}(Y)\to
H_{0}(X\cup Y)\to 0[/tex]</div>
</div>
</div>
<p>If we have a short exact sequence of free abelian groups, then this
relates the number of basis elements in each of the groups with the
structure of the sequence. Let our short exact sequence be given by
[tex]0\to A\to B\to C\to 0[/tex]. then, by injectivity of the map
[tex]A\to B[/tex], the cardinality of a basis of the image in
[tex]B[/tex] is equal to the cardinality of a basis for [tex]A[/tex]. By
surjectivity of the map [tex]B\to C[/tex], and by Noether's theorem,
the cardinality of a basis of [tex]B[/tex] is equal to the cardinality
of a basis of the kernel of this map, plus the cardinality of a basis of
[tex]C[/tex]. By exactness in the middle, this kernel basis is equal to
the image basis from A. So, we have, writing [tex]|G|[/tex] for the
cardinality of a basis of the free abelian group [tex]G[/tex], that
[tex]|B| = |A|+|C|[/tex]. Thus, [tex]|A|-|B|+|C|=0[/tex].</p>
<div class="line-block">
<div class="line">And if we apply this to the Mayer-Vietoris sequence fragment we
acquired, it follows that</div>
<div class="line-block">
<div class="line">[tex]|X\cap Y| - |X| - |Y| + |X\cup Y| = 0[/tex]</div>
<div class="line">and thus</div>
<div class="line">[tex]|X\cup Y| = |X| + |Y| - |X\cap Y|[/tex]</div>
</div>
</div>
</div>
J, or how I learned to stop worrying and love the matrix2008-12-10T08:50:00+01:00Michitag:blog.mikael.johanssons.org,2008-12-10:j-or-how-i-learned-to-stop-worrying-and-love-the-matrix.html<p>Or actually, I haven't quite yet.</p>
<p>But, out of a whim, I downloaded <a class="reference external" href="http://www.jsoftware.com">J</a> and
started to play with it while reading
<a class="reference external" href="http://www.cs.trinity.edu/~jhowland/math-talk/functional1/">this</a>
set of neat notes on Functional Programming and J.</p>
<p>And ... well ... my reaction so far is kinda "Buh!? What the *** just
happened there?"</p>
<p>The first example I ran across, tried to read, and finally managed to is
the following:</p>
<pre class="literal-block">
+/ , 5 = q: >: i. 100
</pre>
<p>This snippet is supposed to tell us how many zeros 100! ends in. To
get at this, we first need to figure out what, exactly, is being done
here. The first observation is that 100! is the product of all integers
from 1 to 100. The second is that the number of trailing zeros is the
smaller of the orders of 2 and 5, respectively, dividing 100!. This, in
turn, is the smaller of the numbers of 2s and 5s occurring in the
totality of all prime decompositions of the integers from 1 to 100.</p>
<p>Now, if [tex]k\cdot 5^n<100[/tex], then certainly [tex]k\cdot
2^n<100[/tex], so the sum of orders of 2 is guaranteed to be larger than
the sum of orders of 5, so it is enough to count the number of 5s
dividing any of the numbers along the way. And THIS is what the program
above computes.</p>
<div class="line-block">
<div class="line">Next thing to figure out is how it does this. It turns out, after some
experimenting, that the best way of reading this is to go through from
right to left and figure out what it all does. All the functions we're
looking at end up being applied in a <em>monadic</em> fashion - which means
something completely different in J than in Haskell - here it means
that the function gets applied to a single argument. So, here goes:</div>
<div class="line-block">
<div class="line"><tt class="docutils literal">i. n</tt> lists the first n integers, starting at 0. So <tt class="docutils literal">i. 100</tt> is
the 1x100 matrix containing 0, 1, 2, ..., 99.</div>
<div class="line"><tt class="docutils literal">>:</tt>, when applied monadically, increments each element it sees by
one. So <tt class="docutils literal">>: i. 100</tt> is the 1x100 matrix containing 1, 2, 3, ...,
100.</div>
<div class="line"><tt class="docutils literal">q:</tt> yields a prime factorization of the integers it sees. So from
<tt class="docutils literal">q: >: i. 100</tt> we get a matrix where each row carries the primes
dividing the row number, with 0 padding at the end.</div>
<div class="line">Then we see the one <em>dyadic</em> (as opposed to monadic) application in
this program. <tt class="docutils literal">5 = mtx</tt> yields a matrix of the same shape as mtx,
but with a 1 whenever the corresponding entry is a 5, and 0 otherwise.
So <tt class="docutils literal">5 = q: >: i. 100</tt> picks out the entries in all the prime
factorizations that are equal to 5.</div>
<div class="line">Then <tt class="docutils literal">,</tt>. This is one of two complementary operators to reshape
matrices. <tt class="docutils literal">,</tt> gets us an 1xn matrix with the same entries, read row
by row, from left to right, as we started with - whereas <tt class="docutils literal">n m $ mtx</tt>
takes the matrix in mtx and builds an nxm matrix out of it. Thus,
<tt class="docutils literal">, 5 = q: >: i. 100</tt> flattens out the matrix with the 0 and 1 we got
out above.</div>
<div class="line">And finally, <tt class="docutils literal">+/</tt> is an example of a J adverb being used. <tt class="docutils literal">+</tt> is
a dyadic function corresponding to, as expected, usual addition. <tt class="docutils literal">/</tt>
is a monadic adverb that takes a dyadic function, and inserts it
between all the elements in the following list. This is essentially
the same as Haskell's <tt class="docutils literal">foldr</tt>. So <tt class="docutils literal">+/</tt> is more commonly
known as "sum". And thus, <tt class="docutils literal">+/ , 5 = q: >: i. 100</tt> sums up the 0s and
1s produced by picking out all the entries in the prime factorization
matrix that are equal to 5, after flattening the matrix. Or, in other
words, counts the 1 entries. Which is the same as counting the number
of 5s in the prime factorizations of all the integers between 1 and
100.</div>
<div class="line">Note that if we skipped the flattening step given by <tt class="docutils literal">,</tt> then the
application of <tt class="docutils literal">+/</tt> would have just summed each column in the
matrix. So, we would have needed to apply it again to get the total
tally.</div>
</div>
</div>
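<p>For comparison, here is what the J one-liner computes, spelled out in
Python (my translation, not from the post); the second function is the
classical shortcut of summing [tex]\lfloor n/5^k\rfloor[/tex] instead of
factoring each integer:</p>

```python
from math import factorial

def count_fives(n):
    """Count factors of 5 across the prime factorizations of 1..n,
    mirroring what the J snippet +/ , 5 = q: >: i. 100 does."""
    total = 0
    for m in range(1, n + 1):
        while m % 5 == 0:
            total += 1
            m //= 5
    return total

def trailing_zeros(n):
    """Legendre-style shortcut: number of trailing zeros of n!."""
    total, power = 0, 5
    while power <= n:
        total += n // power
        power *= 5
    return total

digits = str(factorial(100))
print(count_fives(100))                              # 24
print(trailing_zeros(100))                           # 24
print(len(digits) - len(digits.rstrip("0")))         # 24, direct check
```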
<p>So far, my impression is essentially: WHOA! That language is compact
to the point of insanity. Once you master the vocabulary, it kinda gives
the feeling that the system has insane amounts of power - but the
vocabulary is kinda imposing, as is the shift in the way I think about
programming. I foresee learning J giving me the same kinds of epiphanies
and major brain twists as Haskell did - so it might well be an
appropriate next challenge.</p>
So that must mean I've been a mathematician since 2005?2008-12-08T04:03:00+01:00Michitag:blog.mikael.johanssons.org,2008-12-08:so-that-must-mean-ive-been-a-mathematician-since-2005.html<p><a class="reference external" href="http://www.thesun.co.uk/sol/homepage/features/article2011061.ece">http://www.thesun.co.uk/sol/homepage/features/article2011061.ece</a></p>
<p>This is a rather atrocious article giving yet another ad hoc "formula"
to compute some numeric measurement of something-or-other. In this
particular case, it's about cleavage, and how to avoid showing too much
of it, but these "formulae" plague us every time some journalist wants
to math up their reporting.</p>
<p>What caught my eye in this particular case was the people they lined up
to back up the story.</p>
<blockquote>
Mathematician William Hartston, who holds an MA in Maths from
Cambridge University, reckons this will save a lot of showbiz
blushes on the red carpet.</blockquote>
Site tweaks and travel plans2008-11-12T08:33:00+01:00Michitag:blog.mikael.johanssons.org,2008-11-12:site-tweaks-and-travel-plans.html<p>I've tweaked the layout of my blog a little bit. Among the more notable
additions to it is the little box with a list of the major travel plans
in my future.</p>
<p>This box is connected to a Google Calendar, public, and maintained from
my normal calendar program, in which I plan to announce travel dates for
any major trips I make as they come up.</p>
<p>Note that currently stored in this calendar are:</p>
<ul>
<li><p>Ypsilanti, MI, November 13-24 2008</p>
</li>
<li><p>Millersville University, Lancaster PA, December 10-19 2008</p>
</li>
<li><p>Christmas, Stockholm, Sweden, December 19 2008-January 4 2009</p>
</li>
<li><p>AMS MAA Joint Mathematics Meeting, Washington DC, January 4-9 2009</p>
</li>
<li><p>DARPA Topological Data Analysis, Santa Barbara CA, January 21-23 2009</p>
</li>
<li><p>AMS Southeastern Regional Meeting, Raleigh NC, April 4-5 2009</p>
</li>
<li><p>Operads thematic school, Luminy, France, April 20-25 2009</p>
</li>
<li><p>Operads international conference, Luminy, France, April 27-30 2009</p>
</li>
<li><p>ACM SOCG, Aarhus, Denmark, June 8-10 2009</p>
</li>
<li><div class="line-block">
<div class="line">De Br</div>
</div>
More on Lichtenstein2008-10-28T23:03:00+01:00Michitag:blog.mikael.johanssons.org,2008-10-28:more-on-lichtenstein.html<div class="line-block">
<div class="line">It turns out that there is even more to say on <a class="reference external" href="http://blog.mikael.johanssons.org/archive/2008/10/on-the-chromatic-number-of-lichtenstein/">the communes of
Lichtenstein.</a></div>
<div class="line-block">
<div class="line">First of all, there is a 5-clique in the communal graph, as Brian
Hayes pointed out. But there are two different forbidden subgraphs for
planarity - K5 and K3,3, by Kuratowski's theorem - so if we aren't
looking specifically for the chromatic number, but rather at how this
graph fails to be a "normal" land map, we might want to see whether it
realizes BOTH.</div>
</div>
</div>
<p>It turns out that it does.</p>
<p>The following are two highlighted versions of the Liechtenstein communal
graph.</p>
<div class="line-block">
<div class="line"><img alt="image0" src="http://blog.mikael.johanssons.org/wp-content/liechtenstein-k5.png" /></div>
<div class="line-block">
<div class="line">The embedded K5 with edges in blue.</div>
</div>
</div>
<div class="line-block">
<div class="line"><img alt="image1" src="http://blog.mikael.johanssons.org/wp-content/liechtenstein-k33.png" /></div>
<div class="line-block">
<div class="line">The embedded K3,3 with blue and red vertices.</div>
</div>
</div>
On the chromatic number of Lichtenstein2008-10-28T19:41:00+01:00Michitag:blog.mikael.johanssons.org,2008-10-28:on-the-chromatic-number-of-lichtenstein.html<p>Following the featuring of the <a class="reference external" href="http://strangemaps.wordpress.com/2008/10/24/322-the-claves-of-liechtenstein">internal political structure of
Lichtenstein</a>
on the <a class="reference external" href="http://strangemaps.wordpress.com">Strange Maps</a> blog, Brian
Hayes asks for the <a class="reference external" href="http://bit-player.org/2008/the-chromatic-number-of-liechtenstein">chromatic number of
Lichtenstein</a>.</p>
<p><em>Rahul pointed out that I made errors in transferring the map to a
graph. Specifically, I missed the borders Schellenberg-Eschen and
Vaduz-Triesen. The post below changes accordingly.</em></p>
<p>Warning: This post DOES contain spoilers to Brian's question. If you do
want to investigate it yourself, you'll need to stop reading now.
Apologies to those on my planet feeds.</p>
<p>As a first step, we need to build a graph out of the map. I labeled
each region in turn, with the exclaves numbered higher than the "main"
region of each organizational unit. And then I built a .dot file to
capture them all:</p>
<pre class="literal-block">
graph Lichtenstein {
  Ruggell -- Schellenberg
  Ruggell -- Gamprin1
  Schellenberg -- Mauren
  Schellenberg -- Eschen1
  Mauren -- Eschen1
  Gamprin1 -- Eschen2
  Gamprin1 -- Vaduz2
  Gamprin1 -- Schaan1
  Gamprin1 -- Planken3
  Gamprin1 -- Eschen1
  Eschen1 -- Gamprin2
  Eschen1 -- Planken1
  Eschen2 -- Schaan1
  Vaduz3 -- Schaan1
  Vaduz2 -- Schaan1
  Planken3 -- Schaan1
  Planken2 -- Schaan1
  Schaan1 -- Planken1
  Schaan1 -- Planken4
  Schaan1 -- Vaduz1
  Gamprin2 -- Eschen3
  Eschen3 -- Vaduz4
  Eschen3 -- Schaan2
  Vaduz4 -- Schaan2
  Vaduz4 -- Planken1
  Schaan2 -- Planken1
  Planken1 -- Schaan3
  Vaduz1 -- Triesenberg1
  Vaduz1 -- Triesen
  Planken4 -- Triesenberg1
  Planken4 -- Balzers2
  Balzers2 -- Vaduz5
  Balzers2 -- Schaan4
  Vaduz5 -- Schaan4
  Schaan4 -- Triesenberg1
  Schaan4 -- Vaduz6
  Schaan4 -- Triesenberg2
  Triesenberg1 -- Vaduz6
  Triesenberg1 -- Triesen
  Triesenberg1 -- Balzers3
  Triesen -- Balzers3
  Triesen -- Balzers1
  Triesen -- Schaan5
  Vaduz6 -- Schaan5
  Triesenberg2 -- Schaan5
}
</pre>
<div class="line-block">
<div class="line">Compiling this, we get the graph</div>
<div class="line-block">
<div class="line"><img alt="image0" src="http://blog.mikael.johanssons.org/wp-content/liechtenstein-claves.png" /></div>
</div>
</div>
<div class="line-block">
<div class="line">Now, with this it is an easy job to get the adjacency graph for the
communes. We can just delete all numbers in the above dot file.
Graphing that, we get instead</div>
<div class="line-block">
<div class="line"><img alt="image1" src="http://blog.mikael.johanssons.org/wp-content/liechtenstein-compacted.png" /></div>
</div>
</div>
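<p>The de-numbering step is mechanical; here is a small Python sketch of it
(the helper name is mine - the original workflow just edited the dot file
by hand):</p>

```python
import re

def commune_edges(exclave_edges):
    # Strip the trailing exclave numbers and drop duplicate
    # and self edges, leaving one edge per pair of communes.
    strip = lambda name: re.sub(r'\d+$', '', name)
    seen = set()
    for a, b in exclave_edges:
        e = tuple(sorted((strip(a), strip(b))))
        if e[0] != e[1]:
            seen.add(e)
    return sorted(seen)

print(commune_edges([("Ruggell", "Gamprin1"),
                     ("Gamprin1", "Eschen1"),
                     ("Eschen1", "Gamprin2")]))
# [('Eschen', 'Gamprin'), ('Gamprin', 'Ruggell')]
```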
<div class="line-block">
<div class="line">Note, however, that the massive duplication of adjacency edges makes
this rather hard to read. So we amalgamate each bundle of parallel
edges into a single edge. In the end, the Liechtenstein graph
turns out to be</div>
<div class="line-block">
<div class="line"><img alt="image2" src="http://blog.mikael.johanssons.org/wp-content/liechtenstein-sorted.png" /></div>
</div>
</div>
<p>Now, as Brian points out, there is a 5-clique in this map, given by
Schaan, Balzers, Vaduz, Planken, Triesenberg.</p>
<p><em>Edited</em>: Michael Lugo pointed out in the comments that my exclusion
criterion for 6-cliques is obviously and trivially false. The
discussion below has been changed accordingly.</p>
<div class="line-block">
<div class="line">If there were a 6-clique in the map, there would have to be 6 communes,
each of which has at least 5 edges connected to it. We can
construct the table of degrees for all communes to see whether this
helps:</div>
<div class="line-block">
<div class="line">Balzers: 5</div>
<div class="line">Eschen: 6</div>
<div class="line">Gamprin: 5</div>
<div class="line">Mauren: 2</div>
<div class="line">Planken: 6</div>
<div class="line">Ruggell: 2</div>
<div class="line">Schaan: 7</div>
<div class="line">Schellenberg: 3</div>
<div class="line">Triesen: 4</div>
<div class="line">Triesenberg: 5</div>
<div class="line">Vaduz: 7</div>
</div>
</div>
<p>So, we have enough edges in Balzers, Eschen, Gamprin, Planken, Schaan,
Triesenberg and Vaduz. This gives us 7 candidates for a 6-clique. Thus,
if two of these turn out to have too many of their edges outside this
group, only five candidates remain, and there cannot be a 6-clique in
the commune graph.</p>
<p>Now note that Triesenberg borders Triesen, which is not a candidate, so
at most 4 of its 5 edges can lie within a 6-clique. Likewise, Gamprin
borders Ruggell, so at most 4 of Gamprin's 5 edges can lie within a
6-clique. That removes two of the seven candidates, so the graph
contains no 6-cliques.</p>
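<p>The whole argument can be double-checked by brute force in a few lines
of Python (the edge list is my hand transcription of the de-numbered dot
file above, so treat the transcription itself as an assumption):</p>

```python
from itertools import combinations

# Commune adjacencies, read off the de-numbered dot file.
edges = {frozenset(e) for e in [
    ("Ruggell", "Schellenberg"), ("Ruggell", "Gamprin"),
    ("Schellenberg", "Mauren"), ("Schellenberg", "Eschen"),
    ("Mauren", "Eschen"),
    ("Gamprin", "Eschen"), ("Gamprin", "Vaduz"),
    ("Gamprin", "Schaan"), ("Gamprin", "Planken"),
    ("Eschen", "Planken"), ("Eschen", "Schaan"), ("Eschen", "Vaduz"),
    ("Vaduz", "Schaan"), ("Vaduz", "Planken"), ("Vaduz", "Triesenberg"),
    ("Vaduz", "Triesen"), ("Vaduz", "Balzers"),
    ("Planken", "Schaan"), ("Planken", "Triesenberg"), ("Planken", "Balzers"),
    ("Schaan", "Balzers"), ("Schaan", "Triesenberg"), ("Schaan", "Triesen"),
    ("Triesenberg", "Triesen"), ("Triesenberg", "Balzers"),
    ("Triesen", "Balzers"),
]}
nodes = sorted({v for e in edges for v in e})

def is_clique(vs):
    return all(frozenset(p) in edges for p in combinations(vs, 2))

degrees = {v: sum(frozenset((v, w)) in edges for w in nodes) for v in nodes}
print(degrees["Schaan"], degrees["Vaduz"])  # 7 7, matching the table
print(is_clique(("Schaan", "Balzers", "Vaduz", "Planken", "Triesenberg")))  # True
print(any(is_clique(c) for c in combinations(nodes, 6)))  # False: no 6-clique
```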
<div class="line-block">
<div class="line">Does this, though, mean that we can construct a 5-coloring of the
graph? Try this on:</div>
<div class="line-block">
<div class="line"><img alt="image3" src="http://blog.mikael.johanssons.org/wp-content/liechtenstein-colored.png" /></div>
</div>
</div>
<p>I would recolor the Liechtenstein map using this color choice, but I
haven't gotten around to installing a decent picture editing program
just yet. That'll have to wait.</p>
Cup products in simplicial cohomology2008-09-12T01:12:00+02:00Michitag:blog.mikael.johanssons.org,2008-09-12:cup-products-in-simplicial-cohomology.html<p>This post is a walkthrough of a computation I just did - and one of
the main reasons I post it is for you to find and tell me what I've done
wrong. I have a nagging feeling that the cup product just plain doesn't
work the way I tried to make it work, and since I'm trying to understand
cup products, I'd appreciate any help anyone has.</p>
<p>I've picked out the examples I have in order to have two spaces with the
same Betti numbers, but with different cohomological ring structure.</p>
<div class="section" id="sphere-with-two-handles">
<h2>Sphere with two handles</h2>
<p>I choose a triangulation of the sphere with two handles given by the
boundary of a tetrahedron spanned by the nodes a,b,c,d, together with
the edges be, ef, bf and cg, ch, gh spanning two triangles.</p>
<div class="line-block">
<div class="line">We get a cochain complex of the form</div>
<div class="line-block">
<div class="line">[tex]0 \to \mathbb{Z}^8 \to \mathbb{Z}^{12} \to \mathbb{Z}^4
\to 0[/tex]</div>
<div class="line">with the codifferential given as</div>
<div class="line">[tex]</div>
<div class="line">\begin{pmatrix}</div>
<div class="line">1 & -1 & 0 & 0 & 0 & 0 & 0 & 0\\</div>
<div class="line">1 & 0 & -1 & 0 & 0 & 0 & 0 & 0\\</div>
<div class="line">1 & 0 & 0 & -1 & 0 & 0 & 0 & 0\\</div>
<div class="line">0 & 1 & -1 & 0 & 0 & 0 & 0 & 0\\</div>
<div class="line">0 & 1 & 0 & -1 & 0 & 0 & 0 & 0\\</div>
<div class="line">0 & 1 & 0 & 0 & -1 & 0 & 0 & 0\\</div>
<div class="line">0 & 1 & 0 & 0 & 0 & -1 & 0 & 0\\</div>
<div class="line">0 & 0 & 1 & -1 & 0 & 0 & 0 & 0\\</div>
<div class="line">0 & 0 & 1 & 0 & 0 & 0 & -1 & 0\\</div>
<div class="line">0 & 0 & 1 & 0 & 0 & 0 & 0 & -1\\</div>
<div class="line">0 & 0 & 0 & 0 & 1 & -1 & 0 & 0\\</div>
<div class="line">0 & 0 & 0 & 0 & 0 & 0 & 1 & -1\\</div>
<div class="line">\end{pmatrix}</div>
<div class="line">[/tex]</div>
<div class="line">and</div>
<div class="line">[tex]</div>
<div class="line">\begin{pmatrix}</div>
<div class="line">1 & -1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\</div>
<div class="line">1 & 0 & -1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\</div>
<div class="line">0 & 1 & -1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\</div>
<div class="line">0 & 0 & 0 & 1 & -1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\</div>
<div class="line">\end{pmatrix}</div>
<div class="line">[/tex]</div>
</div>
</div>
<p>Computing nullspaces and images, we get a one-dimensional
[tex]H^0[/tex], generated by
[tex]a^*+b^*+c^*+d^*+e^*+f^*+g^*+h^*[/tex], a two-dimensional
[tex]H^1[/tex] with one basis given by [tex]ef^*[/tex] and
[tex]gh^*[/tex], and a one-dimensional [tex]H^2[/tex] with the linear
dual of any of the four 2-cells as an acceptable basis choice, for
instance [tex]bcd^*[/tex]. Throughout, given a simplex
[tex]\sigma[/tex] I write [tex]\sigma^*[/tex] for the functional
[tex]C_*(X)\to\mathbb Z[/tex] that takes [tex]\sigma\mapsto
1[/tex] and [tex]\tau\mapsto 0[/tex] for all other simplices
[tex]\tau[/tex].</p>
<div class="line-block">
<div class="line">Now, the Encyclopedia of Topology, volume II has a paper by Viro and
Fuchs on homology and cohomology. They state a direct construction of
the cup product as defined by the following:</div>
<div class="line-block">
<div class="line">[tex][c_1\cup c_2](\sigma) = c_1(\sigma_{0\ldots
q_1})c_2(\sigma_{q_1\ldots q_1+q_2})[/tex] where the q:s are
the degrees of the c:s. They also state that (at least if X is
connected) any degree 0 coclass acts as identity under the cup
product.</div>
</div>
</div>
<p>So, any product of something in [tex]H^0[/tex] with anything else is
already clear. And any product of a degree 1 or 2 coclass with a degree
2 coclass will have to vanish - since there isn't anything for it to
possibly hit. There remain the three possible products of two coclasses,
both of degree 1.</p>
<div class="line-block">
<div class="line">Now, by the definition of the coclasses, such a product would have to be
an equivalence class of linear duals of 2-cells that splits into pieces
operating only on the two handles. However, the handles are
geometrically disjoint - so any cell we could feed into such a product
would vanish on one of the two factors. For instance,</div>
<div class="line-block">
<div class="line">[tex](ef^*)\cup(ef^*)(bcd)=(ef^*)(bc)\cdot(ef^*)(cd)=0[/tex]</div>
</div>
</div>
<p>So all the higher degree cohomology products have to vanish.</p>
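<p>The Viro-Fuchs formula is mechanical enough to script. Here is a quick
executable sketch (the encoding is my own: simplices as ordered vertex
tuples, and <tt class="docutils literal">dual</tt> and <tt class="docutils literal">cup</tt> are names I made up):</p>

```python
def dual(cell):
    # sigma^*: the cochain sending cell to 1 and every other simplex to 0
    return lambda sigma: 1 if tuple(sigma) == tuple(cell) else 0

def cup(c1, q1, c2):
    # [c1 cup c2](sigma) = c1(front q1-face of sigma) * c2(back face)
    return lambda sigma: c1(sigma[:q1 + 1]) * c2(sigma[q1:])

ef = dual(("e", "f"))
bc = dual(("b", "c"))
cd = dual(("c", "d"))

print(cup(ef, 1, ef)(("b", "c", "d")))  # (ef*)(bc) * (ef*)(cd) = 0
print(cup(bc, 1, cd)(("b", "c", "d")))  # (bc*)(bc) * (cd*)(cd) = 1
```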
</div>
<div class="section" id="torus">
<h2>Torus</h2>
<div class="line-block">
<div class="line">We pick a triangulation of the torus with 9 vertices, 27 1-cells and
18 2-cells, given as the identification space of a square, in the usual
manner. It's going to be obnoxious to write down boundary maps, and
list all cells, so I'll just refer you to the following picture
instead:</div>
<div class="line-block">
<div class="line"><img alt="Torus triangulation" src="http://blog.mikael.johanssons.org/wp-content/torus.png" /></div>
</div>
</div>
<p>Now, setting up the same computations to get the cohomology classes, we
arrive at a one-dimensional class in degree zero, represented by the sum
of all duals of all vertices, two classes in degree one, with
representatives given, for instance, by
[tex]bc^*-ce^*-ci^*-di^*+ef^*-fh^*-fi^*-gi^*[/tex] and
[tex]dg^*+di^*+eg^*+eh^*+fh^*+fi^*[/tex], and one class in degree
two, represented by the dual of any 2-cell - for instance
[tex]fhi^*[/tex].</p>
<div class="line-block">
<div class="line">Now is where things get tricky. Again, we get the degree 0
class acting as, essentially, an identity. And the only products that
could possibly be nontrivial now are products of two classes of
degree 1. So let's call those classes [tex][u][/tex] and [tex][v][/tex],
and let's call the degree 2 class [tex][w][/tex].</div>
<div class="line-block">
<div class="line">Using the Viro-Fuchs construction of the cup product, I should be
able to say that if we consider [tex][u]\cup[u](bce) = u(bc)\cdot
u(ce) = 1\cdot(-1) = -1[/tex], then
[tex][u]\cup[u]=[w][/tex] (there is some reference to the
codifferential matrices hidden in the signs here), and similarly, by
considering the simplex idg (= dgi with a cyclic permutation) twice, we
would get [tex][v]\cup[v]=[w][/tex].</div>
</div>
</div>
<p>This, somehow, feels fishy. Could anyone check my reasoning for me,
please?</p>
<div class="line-block">
<div class="line"><em>Edited to add:</em> indeed this was fishy. The kind of blatant reordering
I did to compute with dgi isn't permissible. Also, we know that the
square of a degree 1 coclass has to vanish, since the cohomology ring
is graded commutative. However, we're not far from the truth: if we
want to compute [tex][u]\cup[v][/tex], we should look at all pairs of
coclasses in the expressions for each, and look for pairs that
"connect" to form a valid 2-cell. So, from u we can immediately
discard bc, ci, di, fh, fi and gi, since nothing in v starts with c, h
or i. It remains to check whether the cells ce and ef connect to
anything at all. We get, in the end, the pairings</div>
<div class="line-block">
<div class="line">ce,eg</div>
<div class="line">ce,eh</div>
<div class="line">ef,fh</div>
<div class="line">ef,fi</div>
<div class="line">corresponding to the resulting potential 2-coclasses
[tex]ceg^*[/tex], [tex]ceh^*[/tex], [tex]efh^*[/tex] and
[tex]efi^*[/tex]. However, looking over our sketch again, we see that
the cells ceg, ceh and efi are absent from our triangulation of the
torus. Hence, the only potential 2-coclass representative surviving is
[tex]efh^*[/tex], which represents the actual cup product, yielding
[tex][u]\cup[v]=[efh^*][/tex]</div>
</div>
</div>
</div>
R and topological data analysis2008-08-23T16:38:00+02:00Michitag:blog.mikael.johanssons.org,2008-08-23:r-and-topological-data-analysis.html<p>This is extremely early playing around. It touches on things I'm going
to be working with in Stanford, but at this point, I'm not even at
toy level.</p>
<p>We'll start by generating a dataset. Essentially, I'll take the
trifolium, sample points on the curve, and then perturb each point ever
so slightly.</p>
<pre class="literal-block">
idx <- 1:2000
theta <- idx*2*pi/2000
a <- cos(3*theta)
x <- a*cos(theta)
y <- a*sin(theta)
xper <- rnorm(2000)
yper <- rnorm(2000)
xd <- x + xper/100
yd <- y + yper/100
cd <- cbind(xd,yd)
</pre>
<div class="line-block">
<div class="line">As a result, we get a dataset that looks like this:</div>
<div class="line-block">
<div class="line"><img alt="Trifolium data" src="http://blog.mikael.johanssons.org/wp-content/uploads/2008/08/trifolium_data.png" /></div>
</div>
</div>
<p>So, let's pick a sample from the dataset. What I'd really want to do now
would be to do the witness complex construction, but I haven't figured
out enough about how R ticks to do quite that. So we'll pick a sample
and then build the 1-skeleton of the Rips-Vietoris complex using the
Euclidean distance between points. This means we'll draw a graph on the
dataset with an edge between two sample points whenever they lie within
2ε of each other - that is, whenever the balls of radius ε around them
intersect.</p>
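<p>Abstractly the construction is tiny - here as a standalone Python sketch
(the function name is mine), since all it needs is a pairwise distance
test:</p>

```python
from itertools import combinations
from math import dist  # Python 3.8+

def rips_1_skeleton(points, eps):
    # One edge per pair of points whose eps-balls intersect,
    # i.e. whose distance is below 2*eps.
    return [(i, j) for i, j in combinations(range(len(points)), 2)
            if dist(points[i], points[j]) < 2 * eps]

# Corners of the unit square: at eps = 0.6 the four sides appear,
# but the diagonals (length sqrt(2) > 1.2) do not.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(rips_1_skeleton(square, 0.6))  # [(0, 1), (0, 3), (1, 2), (2, 3)]
```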
<p>So we pick a sample from this dataset: every 31st point might be good
(a number arrived at by guessing wildly, and redrawing the resulting
images until they looked pretty enough).</p>
<pre class="literal-block">
csamp <- cd[seq(1,dim(cd)[1],31),]
</pre>
<div class="line-block">
<div class="line">We'd get, from this, the following result:</div>
<div class="line-block">
<div class="line"><img alt="Trifolium sample" src="http://blog.mikael.johanssons.org/wp-content/uploads/2008/08/trifolium_sample.png" /></div>
</div>
</div>
<p>Now, we'll want to build the corresponding skeleton. Let's do it for a
few different εs to demonstrate the difference.</p>
<pre class="literal-block">
d <- function(x,y,z,w) { sqrt((x-z)^2+(y-w)^2) }
par(mfrow=c(2,2))
eps <- c(0.05,0.1,0.15,0.2)
cols <- c("cyan","green","yellow","red")
for (ei in 1:length(eps)) {
  plot(cd,col="gray")
  points(csamp,col="blue")
  title(eps[ei])
  e <- eps[ei]
  for (i in 1:dim(csamp)[1]) {
    for (j in 1:dim(csamp)[1]) {
      x <- csamp[i,1]; y <- csamp[i,2]
      z <- csamp[j,1]; w <- csamp[j,2]
      if (d(x,y,z,w) < 2*e) {
        segments(x,y,z,w,col=cols[ei])
      }
    }
  }
}
</pre>
<div class="line-block">
<div class="line">The result is:</div>
<div class="line-block">
<div class="line"><img alt="Trifolium skeleta" src="http://blog.mikael.johanssons.org/wp-content/uploads/2008/08/trifolium_skeleta.png" /></div>
</div>
</div>
<p>We notice that as the radius grows, we connect all the loops; but by
the time the loops are completely connected, there are also cross
connections forming toward the middle. However, with any luck, these
cross connections will be so short-lived, in terms of the radii we use,
that the homology classes we extract from the Rips-Vietoris complexes
are noticeably more persistent.</p>
<p>Doing the corresponding computation relies on me figuring enough out to
use the Plex software suite, or writing my own, and thus will be subject
of a much later blog post. This one was mainly "Look - I use R" and
"Look - pretty pictures". Enjoy.</p>
RIP Henri Cartan2008-08-22T18:29:00+02:00Michitag:blog.mikael.johanssons.org,2008-08-22:rip-henri-cartan.html<p>As <a class="reference external" href="http://unapologetic.wordpress.com/2008/08/22/he-was-still-alive/">John
Armstrong</a>
said, I didn't know he was still alive!</p>
<p>On the algebraic topology mailing list, the announcement came today that
Henri Cartan, a co-founder of Nicolas Bourbaki, died August 13,
2008:</p>
<blockquote>
La Société …</blockquote>
The end of the line2008-08-04T18:15:00+02:00Michitag:blog.mikael.johanssons.org,2008-08-04:the-end-of-the-line.html<p>And a new beginning.</p>
<p>We seem to be a whole crowd finishing our PhDs all at the same time:
<a class="reference external" href="http://pozorvlak.livejournal.com">pozorvlak</a>,
<a class="reference external" href="http://gooseania.blogspot.com">Gooseania</a> and I. While my blog
<em>started</em> as inspired by Gooseania, I won't close it just because I'm
done. I'll continue blogging my Postdoc years, and hopefully all the way
through my academic career.</p>
Junior Mathematical Congress 20082008-07-30T16:59:00+02:00Michitag:blog.mikael.johanssons.org,2008-07-30:junior-mathematical-congress-2008.html<p><a class="reference external" href="http://jmc2008.org">Is one reason for my current inactivity. It seems to be shaping up to be a
REALLY GOOD event.</a></p>
Blackbox computing of A-infinity algebras2008-07-25T18:18:00+02:00Michitag:blog.mikael.johanssons.org,2008-07-25:blackbox-computing-of-a-infinity-algebras.html<p>The last of my thesis results has reached article form. The paper
<a class="reference external" href="http://arxiv.org/abs/0807.3869">Blackbox computation of A-infinity
algebras</a> has now hit the arXiv and
been submitted to the Kadeishvili Festschrift issue of the Georgian
Mathematics Journal.</p>
Dr rer nat, Magna cum laude2008-07-17T14:43:00+02:00Michitag:blog.mikael.johanssons.org,2008-07-17:dr-rer-nat-magna-cum-laude.html<p>After about 5 semesters, <a class="reference external" href="http://www.emis.de/journals/JHRS/volumes/2008/volume3-1.htm">one
paper</a>,
one erratum (submitted to JHRS) and <a class="reference external" href="http://www.minet.uni-jena.de/~mik/thesis.pdf">one
thesis</a>, and after
taking two oral exams and delivering one 30-minute talk on my research,
I am now (modulo the week or two it takes to produce my certificate)
entitled to the title of <em>doctor rerum naturalium</em>.</p>
<p>Next stop is the topology in computer science workgroup at Stanford,
where I have accepted an offer for a postdoc research position of up to
3 years (conditional on my good behaviour :-).</p>
A vision for collaborative mathematics platforms2008-06-19T08:53:00+02:00Michitag:blog.mikael.johanssons.org,2008-06-19:a-vision-for-collaborative-mathematics-platforms.html<p>Based on the <a class="reference external" href="http://sbseminar.wordpress.com/2008/06/16/request-long-distance-collaboration/#comments">extensive
discussion</a>
at the Secret Blogging Seminar on <a class="reference external" href="http://sbseminar.wordpress.com/2008/06/16/request-long-distance-collaboration/">tools for long-distance
collaborations</a>,
Scott Morrison writes an <a class="reference external" href="http://sbseminar.wordpress.com/2008/06/18/subverting-the-system/">introduction to source control with subversion
for research
collaborators</a>.</p>
<p>In this post, Scott also offers, quite magnanimously, to set up and host
subversion repositories for any mathematician who happens to want to
start collaborating using subversion.</p>
<p>Which, to my mind, immediately prompts the question: why stop there?
I've had ideas before about setting up a free and easy-to-use platform
for modern communication in the mathematical community; but they were
along the lines of duplicating
<a class="reference external" href="http://wordpress.com">wordpress.com</a>'s efforts, which isn't really
an investment that pays off. Reading this, though, raised a
new idea.</p>
<p>Why not set up a server - preferably with a university data center as
backing - which dispenses free platforms with the following contents:</p>
<ul class="simple">
<li>Source control. Preferably a choice of subversion, git, mercurial, or
some similar selection of modern and widespread systems.</li>
<li>Wiki, Blog, Ticket system - a wordpress installation, with LaTeX, and
a Trac installation connected to the source control system in use
would do quite well.</li>
<li>Heavy access control: one of the worries I hear pronounced often is
that running a blog with your mathematical ideas displays those ideas to
the world before you get to publish them, thus risking you getting
scooped on your research. This worry would be, to some extent,
ameliorated by serving things optionally over https, by having a
decent and robust access control system, and by having draconian
privacy statements for the site administration.</li>
<li>LaTeX compile farm - why not set this thing up so that it can build
your papers for you? That way we really end up building the
mathematician's <a class="reference external" href="http://sourceforge.net">sourceforge</a>!</li>
</ul>
<p>So, I guess this post is a call for volunteer implementers. I'll launch
the ideas in the fora I have access to - I'm headed for a faculty
retreat tomorrow discussing a research project which seems to include
Web2.0 for research communication as a topic, and will see if I can
propose the ideas there; and it's just the thing to discuss with the
workgroup <em>Information und Kommunikation</em> of the DMV too. But just
because I'm looking for people who want to run this in no way implies
anyone reading this shouldn't.</p>
FRA-Lagen och falska positiver2008-06-18T09:56:00+02:00Michitag:blog.mikael.johanssons.org,2008-06-18:fra-lagen-och-falska-positiver.html<p>I have been a good citizen. I have …</p>
On purity and essence of mathematics2008-06-15T12:06:00+02:00Michitag:blog.mikael.johanssons.org,2008-06-15:on-purity-and-essence-of-mathematics.html<p>I seem, lately, to be so densely planned that all I can do for my blog
is to react on blog posts from Ben Webster at the <a class="reference external" href="http://sbseminar.wordpress.com">Secret Blogging
Seminar</a>.</p>
<p>He has, recently, written <a class="reference external" href="http://sbseminar.wordpress.com/2008/06/14/what-is-purity/">a
post</a>
inspired by the <a class="reference external" href="http://xkcd.com">xkcd</a> comic on <a class="reference external" href="http://xkcd.com/435/">purity in the
sciences</a>. The comic is funny, and rings true,
but Ben brings up a severe criticism of the premises of the comic that
rings back to my own years as a hotheaded undergraduate.</p>
<p>You should read all of Ben's post, but if you don't, you should at least
read the following:</p>
<blockquote>
And I think one of the key points here is this: mathematics is not
science. Mathematics is often lumped in with science, and is often
used by scientists. Mathematicians often know more science than
normal people, and certainly scientists know more mathematics. But
mathematics and science are fundamentally different activities, as
different as making a gun and fighting in a battle. I mean, no one
would claim there are no links between those occupations, or that
gun-makers don't …</blockquote>
AMS and mathjobs.org are made of awesome2008-06-11T11:36:00+02:00Michitag:blog.mikael.johanssons.org,2008-06-11:ams-and-mathjobsorg-are-made-of-awesome.html<p>I like the <a class="reference external" href="http://mathjobs.org">Mathjobs</a> website that AMS are
running. It's a good source for math jobs, and seems to have just the
right selection for me to get interesting stuff out of reading it.</p>
<p>Now, in a post just a day or two ago, Ben Webster of the Secret Blogging
Seminar <a class="reference external" href="http://sbseminar.wordpress.com/2008/06/09/an-open-letter-to-mathjobsorg/">called for RSS feeds for the Mathjobs
listings.</a></p>
<p>Imagine my surprise - and probably that of most of the readers of the
Secret Blogging seminar - to see, the day after posting, the following
reply from Diane Boumenot at the AMS:</p>
<blockquote>
<p>Hello all. First of all let me say, thank you for the kind words.
Also, if you want to send suggestions to Mathjobs.Org, that can be
easily done through the web site. However, thanks to Google Alerts
and a willing programmer, your request has been received and acted
on. As of this morning you can get an RSS feed through the View Jobs
page of the Mathjobs website.</p>
<p>Thanks for the suggestion. Thoughts/ideas are always welcome. I will
pass the one about Current Publications along to the publications
division.</p>
</blockquote>
Restarting high school topology2008-05-21T17:41:00+02:00Michitag:blog.mikael.johanssons.org,2008-05-21:restarting-high-school-topology.html<p>My two high-school kids came by today. We've been trying to get a new
teaching session together since early February, but they had a hell of a
time all through February, and all our appointments ended up canceled
with little or no notice; and then I spent March and April on tour.</p>
<p>We pressed on with knot theory. Today, we discussed knot sums, prime
knots, knot tabulation, behavior of the one invariant (n-colorability)
we know so far under knot sums, Dowker codes, and we got started on
Conway codes for knots. Next week, I plan for us to finish up talking
about the Conway knot notation, get the connection between rational
knots and continued fractions down pat, and start looking into new
invariants.</p>
<p>Anyone have a favorite invariant that you'd like me to talk about? I'm
hoping (in my wildest most bizarre dreams) to get around to the
Alexander polynomial and possibly even talk about Khovanov homology, but
that depends a LOT on whether they're prepared to continue through their
summer holidays or not - and even then I doubt we'll make it up to
Khovanov.</p>
Parallel and cluster computing with MPI4Py2008-05-18T11:46:00+02:00Michitag:blog.mikael.johanssons.org,2008-05-18:parallell-and-cluster-mpi4py.html<p>First off, I'd ask your pardon for the lull in postings - this spring
has been insane. It has been very much fun - traveling the world,
talking about my research and meeting people I only knew electronically
- and also very intense.</p>
<p>To break the lull, I thought I'd try to pick up what I did last summer:
parallel computing on clusters. There's been a bit of <a class="reference external" href="http://vnoel.wordpress.com/2008/05/03/bye-matlab-hello-python-thanks-sage/">blog
chatter</a>
about <a class="reference external" href="http://www.sagemath.org">SAGE</a> and how SAGE suddenly has
transformed from a Really Good Idea to something that starts to crowd
out most other systems in usability and flexibility.</p>
<div class="line-block">
<div class="line">Matlab? SciPy bundled with SAGE and the Python integration seems to be
at least as good, if not better.</div>
<div class="line-block">
<div class="line">Maple? Mathematica? Maxima? Singular? GAP? SAGE interfaces with all
those that it doesn't emulate.</div>
</div>
</div>
<p>And also, it allows you to install and interface with almost anything
that has a Python module written. Specifically, a SAGE installation
comes complete with OpenMPI and MPI4Py, allowing for multi-processor
parallelism - either on an SMP machine, or on a cluster that runs some
sort of MPI system. So, using the Python and
<a class="reference external" href="http://mpi4py.scipy.org/">MPI4Py</a> that arrives with SAGE, it should
be possible to start experimenting with parallel programming; and the
rest of the SAGE integration should make it easy to start playing with,
say, parallel algorithms for commutative or homological algebra.</p>
<p>But as a first step, I thought I'd do a "Hello world" for mathematics:
computing the <a class="reference external" href="http://en.wikipedia.org/wiki/Mandelbrot_set">Mandelbrot
fractal</a>.</p>
<p>Now, a Mandelbrot fractal is reasonably easy to compute: we step through
all pixels in our image, and we repeat the transformation [tex]z_{n+1}
= z_n^2+c[/tex] for c the complex plane point corresponding to the
pixel, and [tex]z_0=0[/tex]. We see whether this iteration diverges
within a limited number of steps.</p>
<p>I'm going to, for benchmarking reasons, fix some parameters. I'll have a
600x600 picture, spanning the area [-2.5,1.5]x[-2.0,2.0] of
[tex]\mathbb R^2 = \mathbb C[/tex]. I'll allow 80 iterations, and
after that, I want to see whether the square of the magnitude is larger
than 4.0.</p>
<p>At the core of the iteration lies the function</p>
<pre class="literal-block">
def step(z,c):
    return z**2+c
</pre>
<p>and for a specific point [tex]c=x+yi[/tex], we can compute the number of
iterations needed with the function</p>
<pre class="literal-block">
def point(c,n,d2):
    zn = 0.0
    i = 0
    while abs(zn)**2 < d2 and i < n:
        zn = step(zn,c)
        i += 1
    return i
</pre>
<p>Repeating this for each point in the span we look at, modified for the
granularity our pixels force, and choosing a color for each iteration
number, we'll get the familiar Mandelbrot fractal. I choose to assign
colors using the hue in the HSV color space. This gives us rich, nice
colors with a continuous and neat progression through them. So, given
the number of iterations at a point, and the maximal number of
iterations allowed, we compute the corresponding color by</p>
<pre class="literal-block">
def colnorm((r,g,b)):
    return (int(255*r),int(255*g),int(255*b))
def col(n,max):
    if n == max:
        return (0,0,0)
    return colnorm(colorsys.hsv_to_rgb(1.0-float(n)/max,1.0,1.0))
</pre>
<p>the function <tt class="docutils literal">colnorm</tt> is in there to convert RGB triples with values
in [0,1] to RGB triples with values in [0,255].</p>
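<p>The escape-time iteration in <tt class="docutils literal">point</tt> is easy to sanity-check in
isolation; here is a standalone sketch (<tt class="docutils literal">escape_count</tt> is my own
name for the same iteration):</p>

```python
def escape_count(c, n=80, d2=4.0):
    # Iterate z -> z**2 + c from z = 0 and count the steps taken
    # before |z|**2 reaches the bailout d2, capped at n iterations.
    z = 0.0
    i = 0
    while abs(z)**2 < d2 and i < n:
        z = z*z + c
        i += 1
    return i

print(escape_count(0))   # 80: c = 0 never escapes, so we hit the cap
print(escape_count(2))   # 1: c = 2 escapes after a single step
```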
<p>And from here it's just a matter of interfacing with the right kind of
image writing library - I chose the <a class="reference external" href="http://pythonware.com/products/pil/">PIL
library</a> - and then go through
all pixels, computing the color implied at each pixel, and placing it in
the image.</p>
<p>Run like this, the benchmark fractal takes some 12 seconds on our
workgroup's workhorse, and 8 seconds on a single node of the Jena
Beowulf cluster.</p>
<div class="line-block">
<div class="line">The resulting fractal is</div>
<div class="line-block">
<div class="line"><img alt="Mandelbrot fractal" src="http://blog.mikael.johanssons.org/wp-content/mandel.png" /></div>
</div>
</div>
<p>As for parallelization, the Mandelbrot fractal is a typical example of
what's called an embarrassingly parallel problem: each pixel in an image
is completely independent from all other pixels, and so we could just
let one single processor loose on each pixel, and get things done a lot
faster than if we'd work through the pixels one after the other.</p>
<p>So, to parallelize it, we'll need some way of distributing the work
among the processors, and some way of gathering the pieces up once we're
done. In <a class="reference external" href="http://mpi4py.scipy.org/">MPI4Py</a>, initialization and
finalization of the MPI library is taken care of under the hood, in the
import of the library and in the closing of the program.</p>
<p>My first idea was to use the builtin Scatter and Gather functions. The
idea was something like the following:</p>
<pre class="literal-block">
def main():
    comm = MPI.COMM_WORLD
    rowids = range(h)
    ids = comm.Scatter(rowids)
    pixs = compute_mandel(ids)
    pixels = comm.Gather(pixs)
    img = generate_image(pixels)
    img.save("filename.png")
</pre>
<p>but this fails already at the Scatter line. The thing is that you can't
just hand Scatter a list of things you want distributed, you'll need to
feed comm.Scatter with a list of lists, one for each recipient. This
particular property of comm.Scatter is not actually described anywhere I
could find on the web - the documentation for MPI4Py is minuscule at
best, and this is specific to the Python interface setup, so the MPI
standard documentation doesn't give it away either.</p>
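<p>The shape comm.Scatter expects can be illustrated without MPI at all;
here is a sketch of just the chunking step (the helper name is mine):</p>

```python
def scatter_payload(items, nprocs):
    # comm.Scatter wants one sublist per recipient, not a flat
    # list: chunk the work into nprocs contiguous pieces.
    k = -(-len(items) // nprocs)  # ceiling division
    return [items[i*k:(i+1)*k] for i in range(nprocs)]

print(scatter_payload(list(range(10)), 3))
# [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```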
<p>The next problem that appears is that if we just divide the rows of the
picture evenly - first processor gets rows 1,2,3,...,n; second gets
n+1,...,2n and so on - then some processor will get the bulk of the
actual Mandelbrot set - which is by definition the pixels that take the
most iterations to compute - so the speedup by pouring more processing
power on the problem won't be as impressive as it could be. The
processors that take care of regions outside the set will finish quickly
and then just idle, and the processors that take care of most of the set
will churn through as if they had computed the set alone.</p>
<p>There are very many ways to deal with this issue. The approach I'm
taking is to use an interlacing pattern to distribute the rows: with n
processors, each processor computes every nth row of the picture, so
easy and difficult rows end up reasonably evenly distributed. Also,
with a distribution scheme this easy, there's no reason to compute the
row lists centrally and distribute them with the MPI communication scheme.</p>
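<p>The interlaced scheme amounts to a one-liner per process - a sketch,
where the function name <tt class="docutils literal">my_rows</tt> is mine:</p>

```python
# Under the interlaced scheme, process `rank` (of `size` processes)
# computes every size-th row, starting at its own rank.
def my_rows(rank, size, h):
    return list(range(rank, h, size))

# rank 1 of 3 processes, on a 10-row image:
print(my_rows(1, 3, 10))  # -> [1, 4, 7]
```

Every row is computed by exactly one process, and no communication is
needed to decide who does what.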
<p>Hence, the complete program ends up with the following source code:</p>
<pre class="literal-block">
from mpi4py import MPI
import Image
import colorsys
from math import ceil
w = 600
h = 600
its = 80
d2 = 4.0
xmax = 1.5
xmin = -2.5
ymax = 2.0
ymin = -2.0
def step(z, c):
    return z**2 + c
def point(c, n, d2):
    zo = 0.0
    zn = zo
    i = 0
    while abs(zn)**2 < d2 and i < n:
        zn = step(zn, c)
        i = i + 1
    return i
</pre>
<p>We can run this program on a single-processor machine by issuing</p>
<pre class="literal-block">
$SAGE_ROOT/local/bin/mpirun -np 1 $SAGE_ROOT/local/bin/python mandel.py
</pre>
<p>and on a multiprocessor machine by replacing <tt class="docutils literal"><span class="pre">-np</span> 1</tt> with the number
of processors we'd want to utilize.</p>
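<p>The per-row worker and the merge step can be sketched - and
sanity-checked - without MPI at all. This is not the program's exact
code, just my reconstruction of its shape, on a scaled-down grid; the
name <tt class="docutils literal">compute_row</tt> is illustrative:</p>

```python
# Escape-time iteration count for one point, as in `point` above.
def point(c, n, d2):
    zn, i = 0j, 0
    while abs(zn) ** 2 < d2 and i < n:
        zn = zn ** 2 + c
        i += 1
    return i

def compute_row(y, w=60, h=60, its=80, d2=4.0,
                xmin=-2.5, xmax=1.5, ymin=-2.0, ymax=2.0):
    """Iteration counts for row y of a (scaled-down) w-by-h image."""
    im = ymin + (ymax - ymin) * y / (h - 1)
    return [point(complex(xmin + (xmax - xmin) * x / (w - 1), im), its, d2)
            for x in range(w)]

# Serial stand-in for the interlaced MPI run: each "rank" does every
# size-th row; gathering is just merging the per-rank dictionaries.
size = 4
pieces = [{y: compute_row(y) for y in range(rank, 60, size)}
          for rank in range(size)]
pixels = {}
for piece in pieces:
    pixels.update(piece)
```

In the real program the list comprehension over ranks is replaced by
each MPI process computing its own piece, with a single gather at the end.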
<p>On the quad-core workhorse my workgroup uses, which currently has two
cores fully utilized by group cohomology computations, I get the
following (approximate) results:</p>
<table border="1" class="docutils">
<colgroup>
<col width="26%" />
<col width="45%" />
<col width="29%" />
</colgroup>
<thead valign="bottom">
<tr><th class="head"># proc</th>
<th class="head">mean walltime</th>
<th class="head">std.dev</th>
</tr>
</thead>
<tbody valign="top">
<tr><td>1</td>
<td>12s</td>
<td>0.7</td>
</tr>
<tr><td>2</td>
<td>9s</td>
<td>0.7</td>
</tr>
<tr><td>3</td>
<td>11s</td>
<td>2</td>
</tr>
</tbody>
</table>
<p>So - once I used up the free processors, the wall time shoots up, and
so does the spread. Working on an already-utilized SMP machine might not
be the way to go.</p>
<p>But I do have access to the Jena Beowulf cluster, since the lecture
course I audited on cluster computing. And I'm getting much better
results there. Over there, I run jobs by writing a job script</p>
<pre class="literal-block">
#$ -j y
#$ -l hostshare=1.0
#$ -q fast.q
mpiexec python mandel.py
</pre>
<p>and submit it to the job scheduler, to work on - say - 3 processors with</p>
<pre class="literal-block">
qsub -pe lam7 3 mandel.job
</pre>
<p>I added commands to time the computation part - stripping out the
book-keeping and image writing once the fractal is computed, and thus
can get timings for the parallel computation.</p>
<p>As a result, I get the following:</p>
<table border="1" class="docutils">
<colgroup>
<col width="26%" />
<col width="45%" />
<col width="29%" />
</colgroup>
<thead valign="bottom">
<tr><th class="head"># proc</th>
<th class="head">mean walltime</th>
<th class="head">std.dev</th>
</tr>
</thead>
<tbody valign="top">
<tr><td>1</td>
<td>8.33</td>
<td>-</td>
</tr>
<tr><td>2</td>
<td>4.34</td>
<td>0.020</td>
</tr>
<tr><td>3</td>
<td>2.96</td>
<td>0.033</td>
</tr>
<tr><td>4</td>
<td>2.32</td>
<td>0.031</td>
</tr>
<tr><td>5</td>
<td>2.00</td>
<td>0.060</td>
</tr>
<tr><td>6</td>
<td>1.54</td>
<td>0.019</td>
</tr>
<tr><td>7</td>
<td>1.29</td>
<td>0.040</td>
</tr>
<tr><td>8</td>
<td>1.22</td>
<td>0.028</td>
</tr>
<tr><td>9</td>
<td>1.04</td>
<td>0.017</td>
</tr>
<tr><td>10</td>
<td>1.00</td>
<td>0.030</td>
</tr>
</tbody>
</table>
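<p>For what it's worth, the implied speedup and parallel efficiency can be
read off the table directly:</p>

```python
# Walltimes (seconds) from the cluster table above.
walltimes = {1: 8.33, 2: 4.34, 3: 2.96, 4: 2.32, 5: 2.00,
             6: 1.54, 7: 1.29, 8: 1.22, 9: 1.04, 10: 1.00}
speedup = {p: walltimes[1] / t for p, t in walltimes.items()}
efficiency = {p: speedup[p] / p for p in walltimes}

print(round(speedup[10], 2), round(efficiency[10], 2))  # -> 8.33 0.83
```

So at ten processors we still retain over 80% parallel efficiency.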
<div class="line-block">
<div class="line"><img alt="image1" src="http://chart.apis.google.com/chart?chs=250x100&cht=lc&chd=t:83.3,43.3,29.5,23.2,20.0,15.4,12.9,12.2,10.4,10.0" /></div>
<div class="line-block">
<div class="line">Walltimes</div>
</div>
</div>
Tour dates2008-03-06T22:57:00+01:00Michitag:blog.mikael.johanssons.org,2008-03-06:tour-dates.html<p><em>Edited to add Galway</em></p>
<p>I'll be doing a "US tour" in March / April. For the people who might be
interested - here are my whereabouts, and my speaking engagements.</p>
<p>I'm booked at several different seminars to do the following:</p>
<div class="line-block">
<div class="line">Title: On the computation of A-infinity algebras and Ext-algebras</div>
<div class="line-block">
<div class="line">Abstract:</div>
</div>
</div>
<blockquote>
<p>For a ring R, the Ext algebra [tex]Ext_R^*(k,k)[/tex] carries rich
information about the ring and its module category. The algebra
[tex]Ext_R^*(k,k)[/tex] is a finitely presented k-algebra for most
nice enough rings. Computation of this ring is done by constructing
a projective resolution P of k and either constructing the complex
[tex]Hom(P_n,k)[/tex] or equivalently constructing the complex
[tex]Hom(P,P)[/tex]. By diligent choice of computational route, the
computation can be framed as essentially computing the homology of
the differential graded algebra [tex]Hom(P,P)[/tex].</p>
<div class="line-block">
<div class="line">Being the homology of a dg-algebra, [tex]Ext_R^*(k,k)[/tex] has
an induced A-infinity structure. This structure, as has been shown by
Keller and by Lu-Palmieri-Wu-Zhang, can be used to reconstruct R
from</div>
<div class="line-block">
<div class="line">[tex]Ext_R^{\leq 2}(k,k)[/tex].</div>
</div>
</div>
<p>In this talk, we shall discuss the computation of
[tex]Ext_R^*(k,k)[/tex] and methods for computing an A-infinity
structure on the Ext algebra. Examples will be drawn from group
cohomology, where the computation of the Ext algebra has conditions
from Benson and Carlson for recognizing whether a partial
computation has the entire structure.</p>
</blockquote>
<div class="line-block">
<div class="line">Talk dates are:</div>
<div class="line-block">
<div class="line">March:</div>
<div class="line">19 - Stockholm University</div>
<div class="line">26 - UPenn</div>
<div class="line">27 - Millersville</div>
<div class="line">29/30 - Graduate Student Topology Conference, UIUC</div>
<div class="line">April:</div>
<div class="line">8 - Stanford</div>
<div class="line">11 - U Washington, Seattle</div>
<div class="line">May:</div>
<div class="line">8 - NUI Galway</div>
</div>
</div>
<p>I will be in Millersville, with occasional visits to UPenn during March
25-28 and April 10-20. I'll be somewhere in Illinois most of March 28,
and at UIUC for the GSTC 29-30. I'll be at MSRI March 31-April 4, and
then at Stanford April 4-10.</p>
Introduction to Algebraic Geometry (3 in a series)2008-03-04T15:37:00+01:00Michitag:blog.mikael.johanssons.org,2008-03-04:introduction-to-algebraic-geometry-3-in-a-series.html<p>I'm going to move on with the identification of geometric objects with
functions from these objects down to a field soon enough, but I'd like
to spend a little time nailing down the categorical language of this
association. Basically, we have two functors I and V going back and
forth between two categories. And the essential statement of the last
post is that these two functors form an equivalence of categories.</p>
<p>Now, first off in this categorical language, I want to nail down exactly
what the objects are. In the category [tex]\mathcal{AV}ar_k[/tex] the
objects are solution sets of systems of polynomial equations. And in the
category [tex]\mathcal{RA}lg_k[/tex], the objects are finitely
presented Noetherian reduced k-algebras.</p>
<p>The functor [tex]V:\mathcal{RA}lg_k\to \mathcal{AV}ar_k[/tex] acts
on objects by sending an algebra R to the solution set of the polynomial
equations generating the ideal in a presentation of the algebra.</p>
<p>And the functor [tex]I:\mathcal{AV}ar_k\to \mathcal{RA}lg_k[/tex]
takes a variety to the algebra of polynomial functions on the variety.
This is a slight modification of the way I've introduced I in the
previous posts - but the good news is that this I(V) is just the
quotient of the appropriate polynomial ring by the previously defined
ideal I(V).</p>
<div class="section" id="morphisms-on-varieties">
<h2>Morphisms on varieties</h2>
<p>In order to define a category, the objects alone are not enough - we
want morphisms as well. Since everything else is defined by polynomials,
we're going to define a morphism of varieties [tex]V\to W[/tex] to be a
map [tex]\mathbb A^n\to\mathbb A^m[/tex], polynomial in each
coordinate, such that the image of the restriction to the variety
[tex]V\subseteq\mathbb A^n[/tex] is contained in
[tex]W\subseteq\mathbb A^m[/tex].</p>
<p>In other words, it is a map
[tex](x_1,\dots,x_n)\mapsto(f_1(x_1,\dots,x_n),\dots,f_m(x_1,\dots,x_n))[/tex]
defined by polynomials [tex]f_1,\dots,f_m[/tex].</p>
<p>An isomorphism of varieties is exactly what we expect it to be - it is a
morphism that has an inverse.</p>
<p>One specific kind of highly interesting isomorphisms are the linear
automorphisms of [tex]\mathbb A^n[/tex]. These are given, essentially,
by invertible change-of-coordinate matrices in the way we are used to
from linear algebra.</p>
<div class="section" id="examples">
<h3>Examples</h3>
<p>Recall from the earlier posts the twisted cubic curve
[tex]V(y-x^2,z-x^3)[/tex]. Points on it have the form
[tex](s,s^2,s^3)[/tex] - and this, incidentally, gives us precisely a
map [tex]\mathbb A^1\to\mathbb A^3[/tex] displaying an isomorphism
between the twisted cubic curve and the affine line. The inverse is
given by [tex](x,y,z)\mapsto x[/tex].</p>
<p>Consider the parabola [tex]V(y-x^2)[/tex]. This is also isomorphic to
the affine line, via the maps [tex]t\mapsto(t,t^2)[/tex] and
[tex](x,y)\mapsto x[/tex].</p>
<p>On the other hand, the affine line is not isomorphic to the nodal curve
[tex]V(y^2-x^2-x^3)[/tex]. The easiest way to show this is to go over
smoothness of curves and singular points - which I hope to deal with
later at some point. Essentially, smoothness is an invariant of
varieties under isomorphisms, and since the point (0,0) is singular on
the nodal curve, and the affine line has no singular points, the two
varieties can not possibly be isomorphic.</p>
<p>Note that images of varieties need not be affine algebraic varieties -
they will, however, always be <em>quasi-projective</em> varieties. We'll see if
I get into this later on.</p>
</div>
</div>
<div class="section" id="morphisms-of-algebras-and-functoriality-of-v-and-i">
<h2>Morphisms of algebras and functoriality of V and I</h2>
<p>We really do already know what morphisms look like in
[tex]\mathcal{RA}lg_k[/tex]. This category is the full subcategory of
the category of k-algebras - by which we mean that it picks out objects
among k-algebras, and have all k-algebra maps between objects as
morphisms.</p>
<p>The really awesome bit happens when we start considering the morphisms
we've defined. Given a morphism [tex]F:V\to W[/tex], we define the
pullback [tex]F^\#:k[W]\to k[V][/tex] by [tex]F^\#(f)=f\circ
F[/tex]. This takes a map [tex]f:W\to k[/tex] and makes a map
[tex]F^\#(f):V\to k[/tex]. Since this is a composition of polynomials,
it is also a polynomial function. If [tex]f\in I(W)[/tex], then
[tex]F^\#(f)(p)=f(F(p))[/tex], and since [tex]F(p)\in W[/tex], it
follows that [tex]f(F(p))=0[/tex], and thus [tex]F^\#(f)\in
I(V)[/tex].</p>
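<p>As a concrete instance: take the parametrization
[tex]t\mapsto(t,t^2)[/tex] of the parabola from the examples above,
viewed as a morphism F from the affine line. The pullback then sends
coordinate functions to their composites with F:</p>

```latex
F \colon \mathbb{A}^1 \to \mathbb{A}^2, \quad t \mapsto (t, t^2)
\qquad\Longrightarrow\qquad
F^{\#}(x) = x \circ F = t, \quad F^{\#}(y) = y \circ F = t^2 .
```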
<p>In the other direction, suppose that R and S are reduced finitely
generated k-algebras. Then [tex]R=k[x_1,\dots,x_r]/I[/tex], and
[tex]S=k[y_1,\dots,y_s]/J[/tex]. We fix a homomorphism
[tex]\sigma:R\to S[/tex], and we wish to construct a variety morphism
[tex]F[/tex] such that [tex]F^\#=\sigma[/tex].</p>
<div class="line-block">
<div class="line">Let [tex]F_i\in k[y_1,\dots,y_s][/tex] be a representative of
[tex]\sigma(x_i)[/tex], and define [tex]F:\mathbb A^s\to \mathbb
A^r[/tex] by [tex]a\mapsto(F_1(a),\dots,F_r(a))[/tex]. We need to
verify that F maps W to V. This follows if we can only show that for
every [tex]a\in W[/tex], all polynomials in I vanish at F(a). Let
[tex]g\in I[/tex]. Then</div>
<div class="line">[tex]g(F(a))=g(F_1(a),\dots,F_r(a))=g(\sigma(x_1)(a),\dots,\sigma(x_r)(a))=\sigma(g)(a)[/tex]</div>
<div class="line-block">
<div class="line">and since [tex]g\in I[/tex], it represents the zero class of
[tex]k[x_1,\dots,x_r]/I[/tex], so [tex]\sigma(g)=0[/tex] in S, and hence
any representative of [tex]\sigma(g)[/tex] lies in J. But J is the ideal
of all functions vanishing on W, so [tex]\sigma(g)(a)=0[/tex]. Hence
[tex]F(a)\in V[/tex], and the proof is complete.</div>
</div>
</div>
<p>In essence, what this proves to us is that the operations V and I form a
contravariant equivalence of categories between
[tex]\mathcal{RA}lg_k[/tex] and [tex]\mathcal{AV}ar_k[/tex].</p>
</div>
Introduction to Algebraic Geometry (2 in a series)2008-02-21T22:43:00+01:00Michitag:blog.mikael.johanssons.org,2008-02-21:introduction-to-algebraic-geometry-2-in-a-series.html<p>I want to lead this sequence to the point where I am having trouble
understanding algebraic geometry. Hence, I won't take the usual course
such an introduction would take, but rather set the stage reasonably
quickly to make the transit to the more abstract themes clear.</p>
<p>But that's all a few posts away. For now, recall that we recognized
already that any variety is defined by an ideal, and that intersections
and unions of varieties are given by sums and intersections or products
of ideals.</p>
<p>This is the first page of what is known as the Algebra-Geometry
dictionary. The dictionary is made complete by a pair of reasonably
famous theorems. I won't bother proving them - the proofs are a good
chunk of any decent commutative algebra course - but I'll quote the
theorems and discuss why they matter.</p>
<p>We call a ring Noetherian if all its ideals are finitely generated. If a
ring R is Noetherian, then so are its quotients.</p>
<p><strong>Hilbert's Basis Theorem:</strong> If R is Noetherian, then so is R[x].</p>
<p>We define the radical [tex]\sqrt I[/tex] of an ideal I to be the ideal
consisting of all elements a such that some power of a is actually in I.
We call an ideal I <em>radical</em> if [tex]I=\sqrt I[/tex]. This concept is
relevant for our considerations since if for a point p the function
f^n(p) vanishes, then f(p) also vanishes. Thus, the set of points such
that f^n vanishes is the same set as the set of points where f vanishes.
The relevancy of this is captured in:</p>
<div class="line-block">
<div class="line"><strong>Hilbert's Nullstellensatz:</strong> Let k be an algebraically closed field.
For any ideal I in [tex]k[x_1,\dots,x_n][/tex], there is an
equality of ideals</div>
<div class="line-block">
<div class="line">[tex]I(V(I))=\sqrt I[/tex]</div>
</div>
</div>
<p>Note, for the statement of this theorem, that we write V(I) for the
variety defined by the simultaneous vanishing of all elements of I, and we
write I(V) for the ideal of all polynomials in
[tex]k[x_1,\dots,x_n][/tex] that vanish on all of V.</p>
<p>So - and here is the beautiful part - affine algebraic varieties
correspond bijectively to radical ideals in polynomial rings. For every
radical ideal, there is a variety, and for every variety, there is a
radical ideal. But we can push this further.</p>
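<p>A quick sanity check of the correspondence on a non-radical ideal, over
an algebraically closed field such as [tex]\mathbb C[/tex]:</p>

```latex
% I = (x^2) \subseteq \mathbb{C}[x] is not radical; its variety is just
% the origin, and taking I(V(I)) recovers the radical, as the
% Nullstellensatz promises:
V\big((x^2)\big) = \{0\}, \qquad
I\big(\{0\}\big) = (x) = \sqrt{(x^2)} .
```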
<div class="section" id="coordinate-rings">
<h2>Coordinate rings</h2>
<p>Let's consider polynomial functions from [tex]\mathbb A^n[/tex] to k.
These are precisely the polynomials in [tex]k[x_1,\dots,x_n][/tex].
Given a variety V, we can take a polynomial [tex]f\in
k[x_1,\dots,x_n][/tex] and restrict it to a function
[tex]f|_V:V\to k[/tex].</p>
<p>Two different polynomials give the same restricted function precisely
when their difference vanishes on all of V. So polynomial functions on V
are precisely the equivalence classes in the quotient ring
[tex]k[x_1,\dots,x_n]/I(V)[/tex]. We call the resulting ring the
<em>coordinate ring</em> and denote it by k[V].</p>
<p>Conversely, if R is a Noetherian k-algebra such that there are no
nilpotent elements in R, then R is a quotient of some polynomial ring
with some radical ideal. Hence it is the coordinate ring of some variety
in some affine space somewhere. We call a ring lacking nilpotents
<em>reduced</em>.</p>
<div class="line-block">
<div class="line">We get, out of all this, a bijective correspondence</div>
<div class="line-block">
<div class="line">{ Noetherian reduced k-algebras } [tex]\leftrightarrow[/tex] {
Affine algebraic varieties }</div>
</div>
</div>
<p>The really beautiful part will come in my next post. We can introduce
homomorphisms of varieties in a reasonably natural way so that this
bijective correspondence ends up being <em>functorial</em> - i.e. any
homomorphisms on one side gives rise to a corresponding homomorphism on
the other side. Thus, the categories of Noetherian reduced k-algebras
and of affine algebraic varieties are equivalent.</p>
</div>
Introduction to Algebraic Geometry (1 in a series)2008-02-21T12:33:00+01:00Michitag:blog.mikael.johanssons.org,2008-02-21:introduction-to-algebraic-geometry-1-in-a-series.html<p>I'm growing embarrassed by my lack of understanding for the
sheaf-theoretic approaches to algebraic (and differential) geometry.
I've tried to deal with it several times before, and I'm currently
reading up on Algebraic Geometry again to fill the void that the
finished thesis, soon arriving travels and non-existent job application
responses produce.</p>
<p>So, why not learn by teaching? It's an approach that has been pretty
darn good in the past. So I thought I'd write a sequence of posts on
algebraic geometry, introducing what it's supposed to be about and how
the main viewpoints develop more or less naturally from the approaches
taken.</p>
<div class="section" id="varieties">
<h2>Varieties</h2>
<div class="line-block">
<div class="line">The basic objective of algebraic geometry is to study solution sets to
systems of polynomial equations. That is, we take some set
[tex]f_1,\dots,f_r[/tex] of polynomials in some polynomial ring
[tex]k[x_1,\dots,x_n][/tex] over some field [tex]k[/tex]. And we
write [tex]V(f_1,\dots,f_r)[/tex] for the set of all simultaneous
roots to all these polynomials:</div>
<div class="line-block">
<div class="line">[tex]V(f_1,\dots,f_r)=\{p\in k^n:f_1(p)=0, \dots,
f_r(p)=0\}[/tex]</div>
</div>
</div>
<p>If we write our polynomials as coming from the ring
[tex]k[x_1,\dots,x_n][/tex], then the corresponding solution points
will be points in the vector space [tex]k^n[/tex]. In order to emphasize
that we do not care for the vector space structure of this space, we
shall denote it with [tex]\mathbb A^n[/tex], or if we want to emphasize
the field, with [tex]\mathbb A^n_k[/tex].</p>
<p>The first observation at this point is that if we take the polynomial
[tex]x^2+1[/tex], then the solution set over [tex]\mathbb R[/tex] is
empty, while the solution set over [tex]\mathbb C[/tex] is not. So, in
order to set all solution sets on an equal footing - and also to make
the later occurring correspondences work out - we shall require
[tex]k[/tex] to be an algebraically closed field - in other words, a field
in which every non-constant polynomial has a root.</p>
<p>We call the solution sets <em>varieties</em> (or - in order to distinguish from
everything else we might encounter, we shall call them <em>affine algebraic
varieties</em>).</p>
<p>So, the study of solutions to systems of polynomial equations is the
study of varieties. And hence geometry. This neatly expands on the
classical linear algebra viewpoint - where we study systems of linear
equations as intersections of planes. It turns out that the main
computational approach - Gröbner bases - generalizes the Gaussian
elimination we know from that linear setting.</p>
</div>
Scripting Games in Haskell2008-02-21T01:15:00+01:00Michitag:blog.mikael.johanssons.org,2008-02-21:scripting-games-in-haskell.html<p>I saw <a class="reference external" href="http://feeds.feedburner.com/~r/cerebratescontemplations/~3/238385247/scripting-games-event-1beginne.html">the Cerebrate
solve</a>
the first <a class="reference external" href="http://www.microsoft.com/technet/scriptcenter/funzone/games/games08/bevent1.mspx">Scripting Games challenge: Pairing
off</a>.
And immediately thought "I can do that in Haskell too".</p>
<p>So, here it is.</p>
<pre class="literal-block">
import Data.List
cards = [(1,7),(0,5),(3,7),(2,7),(2,13)]
countpairs [] = 0
countpairs [a] = 0
countpairs (a:as) = length . filter (((snd a)==) . snd) $ as
pairingOff = sum . map countpairs . tails
</pre>
<p>And that's that. Alas, the actual competition only takes Perl, VBScript
and PowerShell, so I won't be submitting this.</p>
PROPs and patches2008-02-15T19:59:00+01:00Michitag:blog.mikael.johanssons.org,2008-02-15:props-and-patches.html<p><a class="reference external" href="http://byorgey.wordpress.com">Brent Yorgey</a> wrote a post on <a class="reference external" href="http://byorgey.wordpress.com/2008/02/13/patch-theory-part-ii-some-basics/">using
category theory to formalize patch
theory</a>.
In the middle of it, he talks about the need to commute a patch to the
end of a patch series, in order to apply a patch undoing it. He suggests
a necessary condition to do this is that, given patches P and Q, we need
to be able to find patches Q' and P' such that PQ=Q'P', and preferably
such that Q' and P' capture some of the info in P and Q.</p>
<p>However, as such, this is not enough to solve the issue. For one thing,
we can set Q'=P and P'=Q, and things are the way he asks for.</p>
<p>Now, I wonder whether we can solve this by using PROPs (or possibly
di-operads or something like that). Let's represent a document as a list
of some sort of tokens. We'll set [tex]D_n[/tex] the set of all lists
of length [tex]n[/tex], and we'll set [tex]P_n^m[/tex] to denote
operations that take a list of length n and return a list of length m.</p>
<p>One operation here is obvious - the identity operation. So we'll take
that into the mix. And we'll also want to have some manner of composing
operations. So, set [tex]\circ_i:P_n^m\times P_r^s\to
P_n^{m+s-r}[/tex] to be the operation that applies the second argument
to the elements i,i+1,...,i+r-1 of the list outputted by the first
argument.</p>
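<p>A toy model of this in Python, representing an element of
[tex]P_n^m[/tex] as a triple (n, m, f) - the encoding and the patch names
are mine, purely for illustration:</p>

```python
# An "operation" is (arity_in, arity_out, function on lists).
identity = lambda n: (n, n, lambda xs: list(xs))

def compose_at(op1, op2, i):
    """The composition \\circ_i: run op1, then apply op2 to the r
    consecutive entries of its output starting at position i."""
    (n, m, f), (r, s, g) = op1, op2
    def h(xs):
        ys = f(xs)  # length m
        return ys[:i] + g(ys[i:i + r]) + ys[i + r:]  # length m + s - r
    return (n, m + s - r, h)

# A "duplication patch" applied at position 1 of a 3-token document:
dup = (1, 2, lambda xs: xs + xs)
op = compose_at(identity(3), dup, 1)
print(op[2](['a', 'b', 'c']))  # -> ['a', 'b', 'b', 'c']
```

Identities fill out the positions a patch doesn't touch, exactly as in the
patch-tree picture above.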
<p>This way, we can make patch trees - using the identities to fill out
when a patch doesn't influence everything; and have composition of
operations represent composition of patches.</p>
<p>Now, the commutativity that Brent asks for would manifest as an
additional relation - on top of those inherent in the definition of a
PROP - to rebuild trees. One obvious one pops out immediately - as long
as the trees don't overlap, in other words, as long as the subtree we
want to reorganize is contractible (in the topological sense), we can
commute patches freely.</p>
<p>When trees -do- overlap, however, the undo operation Brent asks for is
much more tricky. Consider the following sequence of edits:</p>
<p>[] -> [a] -> [b] -> [cbd]</p>
<p>Now, undo the insertion of [a].</p>
<p>Sure, taken this way, we cheat a little. Maybe we want to restrict edit
descriptions to explicit deletions and additions. So we would have</p>
<p>[] -> [a] -> [] -> [b] -> [cb] -> [cbd]</p>
<p>and here we could probably move things around a bit easier. I don't
quite see how to do it, right now, though.</p>
Thesis written2008-02-15T14:13:00+01:00Michitag:blog.mikael.johanssons.org,2008-02-15:thesis-written.html<p>In a mean push, these last two weeks my advisor has read three different
drafts of my thesis. And I've worked on getting the corrections in
quickly. The last push started yesterday, when I got a bunch of
corrections in the morning, had the last draft ready at 4pm, and then
sat reading it myself until 1am.</p>
<p>My advisor took it home with him, spent the evening on it, and had his
batch of corrections in the morning.</p>
<p>Hence, today at 10-ish when I got myself in to the office, I had two
batches of corrections in front of me, and a printer closing at 2pm. So
I worked - and now, well, it's done.</p>
<p>That's it.</p>
<p>It'll get printed.</p>
<p>Then read.</p>
<p>In May, we should get all the comments back from the external examiners.</p>
<p>In July, I'll be up for two oral exams - one on homological algebra and
its uses as invariants for commutative algebra; and one on parallel
programming in cluster environments.</p>
<p>And on July 17th I plan to defend.</p>
<p>Damn - this is one heck of an adrenaline rush now.</p>
My topology students move into knot theory2008-02-01T14:27:00+01:00Michitag:blog.mikael.johanssons.org,2008-02-01:my-topology-students-move-into-knot-theory.html<p>So, here's the plan for my 10th grade topology students.</p>
<p>Today, we'll abandon algebraic topology completely, and instead go into
knot theory. I'll want to discuss what we mean by a knot (embedding of
[tex]S^1[/tex] in [tex]S^3[/tex]), what we mean by a knot deformation
(thus introducing isotopies while we're at it) and the Reidemeister
moves. Also we'll discuss knot invariants - and their use analogous to
topological invariants.</p>
<p>Later on, we'll continue with other invariants; definitely including the
Jones polynomial, and possibly even covering Khovanov homology. One
possible end report would be to explain a bunch of knot invariants and
show using examples how these have different coarseness.</p>
<p><em>Edited to add:</em> I got myself some damn smart students. They figured out
the Reidemeister moves on their own - as well as minimal crossing number
in a projection being highly relevant - with basically no prompting from
me. I'm impressed.</p>
Algebraic surface toys!2008-01-25T17:58:00+01:00Michitag:blog.mikael.johanssons.org,2008-01-25:algebraic-surface-toys.html<p>At the start of the <a class="reference external" href="http://www.jahr-der-mathematik.de/">German Year of
Mathematics</a>, the <a class="reference external" href="http://www.mfo.de">Oberwolfach
research institute</a> has released an exhibition and
the software they used to produce it. The software,
<a class="reference external" href="http://imaginary2008.de/surfer.php">surfer</a>, is a really nice GUI
that sits on top of <a class="reference external" href="http://surf.sourceforge.net">surf</a> and lets you
rotate and zoom your algebraic surfaces as well as pick colours very
comfortably.</p>
<p>They have a whole bunch of Really Pretty Images at <a class="reference external" href="http://imaginary2008.de">the exhibition
website</a>, and I warmly recommend a visit. If
you can get hold of the exhibition, they also have produced real models
- with a 3d-printer - of some of the snazzier surfaces, so that one
could have a REALLY close encounter with them.</p>
<p>But also, I'd really like to show you some of my own minor experiments
with the program.</p>
<div class="line-block">
<div class="line"><img alt="Tuba Mirum - the innards of a Klein Bottle" src="http://mikael.johanssons.org/surfer/tubamirum.png" /></div>
<div class="line-block">
<div class="line">This is the interior of a Klein Bottle, using the "standard"
realization as an algebraic surface given by Mathworld. In other
words, I'm using</div>
<div class="line">(x^2+y^2+z^2+2*y-1)*((x^2+y^2+z^2-2*y-1)^2-8*z^2)+16*x*z*(x^2+y^2+z^2-2*y-1)=0</div>
<div class="line">for the defining equation. It kinda looks a bit like a Sousaphone in
my opinion.</div>
</div>
</div>
<div class="line-block">
<div class="line"><img alt="Roman's surface - immersion of the real projective plane." src="http://mikael.johanssons.org/surfer/roman.png" /></div>
<div class="line-block">
<div class="line">Roman's surface, an immersion of the real projective plane into
3-dimensional euclidean space. It is given by the equation</div>
<div class="line">(x^2+y^2+z^2-9)^2-((z-3)^2-2*x^2)*((z+3)^2-2*y^2)=0</div>
<div class="line">and is one of the Steiner projections of the Veronese surface,
embedding the real projective plane into projective 5-dimensional
space by the homogenous parametrization (x^2,y^2,z^2,xy,xz,yz).</div>
</div>
</div>
<div class="line-block">
<div class="line"><img alt="Steiner's surface type 2 - immersion of the real projective plane." src="http://mikael.johanssons.org/surfer/steiner.png" /></div>
<div class="line-block">
<div class="line">With the defining equation</div>
<div class="line">x^2*y^2-x^2*z^2+y^2*z^2-x*y*z=0</div>
<div class="line">this Steiner surface can be transformed into the Roman surface above
if (and only if) you're allowed to take shortcuts over the points at
infinity. As it is, it has two pinches (both visible) and three lines
of self-intersections (also all visible, kinda sorta). It's also
unbounded - one of the reasons that you cannot get to the bounded
Roman surface easily.</div>
</div>
</div>
<p>With this as inspiration - go forth and draw surfaces. And when you do,
please show them to me too.</p>
Building my academic persona2008-01-18T16:26:00+01:00Michitag:blog.mikael.johanssons.org,2008-01-18:building-my-academic-persona.html<div class="line-block">
<div class="line"><a class="reference external" href="http://arxiv.org/abs/0707.1637">http://arxiv.org/abs/0707.1637</a></div>
<div class="line-block">
<div class="line">Just got accepted for publication in the Journal of Homotopy and
Related Structures.</div>
</div>
</div>
<p>Damn, this feels good!</p>
AMS-MAA JMM 2008 Liveblogging, day 4 - final day2008-01-10T08:41:00+01:00Michitag:blog.mikael.johanssons.org,2008-01-10:ams-maa-jmm-2008-liveblogging-day-4-final-day.html<p>Today, the congress ended.</p>
<p>I bought one book - Adams' Knot book, with free shipping, for $22.</p>
<p>And I drooled over one more - Kozlov's Combinatorial Algebraic Topology.
The hardcover was down from $99 to $70 at the congress stand, but still
was WAY outside my own budget capabilities.</p>
<p>Now, this book does algebraic topology on simplicial complexes. It does
everything I've wanted a reference for with simplicial complexes. And at
some point, I'll REALLY need to get it.</p>
<p>I listened to a bunch of talks on Mathematics and Arts - including one
on knitting hyperbolic pant crotches for toddlers - and one on an
analysis of a combinatorial game on graphs: "Flee from the Zombies" -
very entertaining.</p>
<p>I also spent an hour talking about the historical background of
[tex]A_\infty[/tex]-algebras and bialgebras with one of Ron Umble's
students.</p>
<p>Once all talks I wanted to hear were done, I went off to San Diego old
town, visited a yarn shop, and bought a skein of a deep navy blue wool
yarn and a pair of bamboo knitting needles. I've started a project where
I intend to knit myself a scarf with garter knit in alternating
directions. At each row of prime index, I change the direction of the
garter knit. We'll see how it turns out.</p>
<p>Tomorrow, I'll just be touristing; and possibly doing some thesis- or
application-work while I'm at it. I'd go see Balboa Park, the Museum of
Unnatural History, and possibly go down and look across the border on
Mexico. After all, it's less than 20min by trolley from my hotel to
Mexico...</p>
AMS-MAA JMM 2008 Liveblogging, day 32008-01-09T08:35:00+01:00Michitag:blog.mikael.johanssons.org,2008-01-09:ams-maa-jmm-2008-liveblogging-day-3.html<p>The day started bad. I overslept, went to the convention center, and
realized that I had forgotten my badge. Back to the hotel, and then back
to the convention center. By the time I got there, the first talk I
wanted to hear - one on a generalization of Kuratowski's theorem to
simplicial complexes - was already over by the time I got there.</p>
<p>So instead, I learned beading. I did two prototype versions of small and
neat little Borromean rings in golden seed beads and blue, shimmering
bugle beads. The SF fan / knitter / crafter who taught me was busy doing
earrings in the shape of torus knots. Gorgeous. She has a plan for doing
triple torus knots (solid spirals with bugle - seed - bugle - seed -
bugle - seed) interlinked like Borromean rings.</p>
<p>I held my own talk just after two talks on using computer-assisted proof
techniques for game theoretical analyses. The guy talking argued for
beautiful programs being more important than beautiful proof, which gave
me a great hook-in for my own talk.</p>
<p>And at least one guy was seriously interested in Haskell, partially due
to my own talk.</p>
<p>I also met one of Ron Umble's students. Gonna sit down with him tomorrow
and talk computation of A-infinity with him.</p>
<p>I had a plan for meeting bit-player, and maybe, maybe virtualcourtney,
in the evening. However, bit-player had to be present for the premier
showing of a new documentary about the US IMO team. So I went there too,
watched the movie, but couldn't find bit-player.</p>
<p>So I drifted in the general area, and got pulled into a town meeting for
the youngmath.com network. They were collecting issues young
mathematicians need addressed, and ended up pushing me for issues as
well.</p>
<p>After all this, I was set to go home and crash in bed.</p>
<p>However, as I stood waiting for a red light to change, someone next to
me asked, "Are you a mathematician?" and then promptly invited me to
dinner. Irish pub. Guinness. Talisker. And really good Guinness beef
stew rolled in a pancake. And a marvelous chat. The two guys who
invited me out are running a tenure-track job search, and had a LOT of
really cool things to talk about. They dug a lot of my various
thoughts, and offered marvelous feedback on my CV and on how to read the
reactions I've had to my application material so far.</p>
<p>All in all, the day has been a blast.</p>
AMS-MAA JMM 2008 Liveblogging, day 22008-01-08T08:42:00+01:00Michitag:blog.mikael.johanssons.org,2008-01-08:ams-maa-jmm-2008-liveblogging-day-2.html<p>This was a packed day.</p>
<p>And yet, I had trouble finding anything in the talks I wanted to hear.</p>
<p>I woke up, went down to the employment center, and fetched my
interview schedule. Then I went to Frank's pancake house and ate their
World Famous Apple Pancake. The thing was 20cm high, covered a full
plate, and was incredibly delicious. It also cost more than I expected to
spend on breakfasts, but splurging once is alright.</p>
<p>Then I walked around, doing nothing much, and checked out the
universities I was assigned to interview with on the web. Small.
Teaching oriented. And in small towns. Both of them.</p>
<p>First interview went well enough, though I doubt I'll want to go there
and I doubt they'll want me either. I'm not convinced that a university
whose main claim to desirability is their pre-veterinary and equestrian
programs will agree with my severe horse allergy.</p>
<p>The second went even better. They were seriously enthusiastic about me,
insisted I send in a real application (they don't accept email - they
require paper applications), and hinted at wanting me to settle
down in a tenure-track position. They also made a big deal about
handling mathematics-linguistics dual-career couples well.</p>
<p>These interviews made me realize one thing I've done wrong. My cover
letters. I sent them out reading basically</p>
<blockquote>
<p>Dear $COMMITTEE.</p>
<p>I would like to submit my application for $POSITION.</p>
<p>I believe I would be a good fit since my own research is tangential
to your departmental research, especially considering the areas
@AREA.</p>
<p>I have a 2-body problem, and you would be solving it.</p>
<p>I look forward to teaching.</p>
</blockquote>
<p>with variations as appropriate.</p>
<p>What I should have written would be more along the lines of</p>
<blockquote>
<p>Dear $COMMITTEE.</p>
<p>I would like to submit my application for $POSITION.</p>
<p>I believe I would be a good fit since my own research is tangential
to your departmental research, especially considering the areas
@AREA.</p>
<p>I have a 2-body problem, and you would be solving it.</p>
<p>I look forward to teaching.</p>
<p>Oh, and also, I have a kick-ass idea I've been working with for
several years, and that you should copy, with me at the helm. We
take a bunch of high schoolers, and we do undergraduate research
programs with them. Then, if we get insane amounts of NSF funding,
we'll send them off to the Junior Mathematical Congresses that the
Europeans keep on doing, or just organize one of those ourselves.</p>
<p>Furthermore, I believe it to be my sacred duty to spread my passion
about mathematics to the next generation.</p>
</blockquote>
<p>Lunch was had with <a class="reference external" href="http://learningcurves.blogspot.com">Rudbeckia
Hirta</a> at a really neat Mexican
restaurant. Lunch was good. Meeting Becky was kinda like a fanboi
interview session - she'd seen me in her comments section, but she
doesn't trawl the blogosphere like some of us do, and hadn't read
anything of mine. So many of my questions had natural answers along the
lines of "Well, you blogged this" and there was a clear imbalance in
familiarity between us.</p>
<p>My talk went OK. Afterwards, I chatted a bit with a couple of guys
loosely associated to the UPenn A-infinity workgroup, who know Ron Umble
well. Great fun.</p>
<p>Dinner was had with Mary Williams, a Phoenix based SF-fandom knitter
mathematician (why do those seem to reinforce each other?), who
convinced me to come along to the knitting circle, and on whom I bestowed
waaaay too many good design ideas. Putting us together started some sort
of catalysis that probably wasn't healthy.</p>
<p>We watched a talk about using sonification and music as a teaching tool.
And then went to the knitting circle.</p>
<p>I learned to knit again, and started on a small projectlet embedding the
Pascal Triangle Sierpinski model in knitting by just flipping the
stitches. I didn't have time to make it appear clearly, but it was fun.</p>
<p>I returned from the knitting circle completely exhausted at just before
midnight. Goodnight.</p>
JMM Blogger Meetup!2008-01-07T16:30:00+01:00Michitag:blog.mikael.johanssons.org,2008-01-07:jmm-blogger-meetup.html<p>There's a bunch of us math bloggers on site in San Diego. Hence, here,
the call for a blogger meetup. We'll convene by the entrance to Hall B
(the one with the registration and the exhibitions) at 6pm on Tuesday
8th.</p>
<p>I'll be there, and so will <a class="reference external" href="http://bit-player.org">bit-player</a>. Join
in, you too!</p>
AMS-MAA JMM 2008 Liveblogging, day 12008-01-07T05:04:00+01:00Michitag:blog.mikael.johanssons.org,2008-01-07:ams-maa-jmm-2008-liveblogging-day-1.html<p>I'm exhausted.</p>
<p>I'm completely exhausted.</p>
<p>And I just got through the first day.</p>
<p>However, I also managed to meet up with S from the university interested
in me. We had a really nice chat, and I feel rather good about it.</p>
<p>Other things done today - listened to an interesting talk generalizing
Koszul algebras based on the highest degree ring generator of the Ext
algebra. Listened to bits and pieces of a talk on Koszul and Verdier
duality. Saw Flatland - The Movie (with Martin Sheen playing the main
character, Arthur Square).</p>
<p>I also chatted with Cliff Stoll - whose sales pitch for the Klein
Bottles is immensely entertaining; NSA - who don't want me; Maplesoft -
who are interested in me; Mathematica - who pointed me to their website;
various e-Learning companies; and many many other exhibitors.</p>
<p>Also, got tired, hungry and WET. It's bloody raining here.</p>
AMS-MAA JMM 2008 Liveblogging, day 02008-01-06T07:47:00+01:00Michitag:blog.mikael.johanssons.org,2008-01-06:ams-maa-jmm-2008-liveblogging-day-0.html<p>The participation in the AMS-MAA Joint Mathematics Meeting sure got off
to a smashing start. If nothing else, the storm that hit the Californian
seaboard on January 4th ensured that.</p>
<p>I get out of bed at 3.30am, CET, not having been able to sleep
particularly well at all. At 4am, I drag myself out to the taxi; which
charges more to get me to the airport shuttle than the airport shuttle
itself does to get me out to the airport. I don't care that much - I
need the coddling at that ungodly hour.</p>
<div class="line-block">
<div class="line">Checking in and going down to Frankfurt is uneventful. Every single
passenger is transferring to either Eritrea or the US, and all but two
have managed to check their luggage through as well.</div>
<div class="line-block">
<div class="line">The remaining two - me and one more poor bastard - have our luggage
rerouted to another conveyor belt. And no information about it. When,
after an hour, the "Stockholm pending" turns into "Stockholm
finished", I go to lost baggage, where they scan my slip and direct me
to the right conveyor belt. Where the bag happily reposes.</div>
</div>
</div>
<p>One hour lost.</p>
<p>As I get up to the United checkin desks, I'm met by a slightly harried
guy who asks me where I'm going, and then says that the flight has been
cancelled. I get to stand in another line to get this taken care of.</p>
<p>Half an hour later, I am rebooked. Instead of United Airlines FRA - SFO,
SFO - SAN, I fly Air India FRA - LAX, and then United LAX - SAN.</p>
<p>I check in, eat lunch and go through security without anything
remarkable happening. The relevant corner of Frankfurt Flughafen rates
among the most boring I have ever seen.</p>
<p>And then the flight arrives. Empties. And sits there.</p>
<p>An hour late, we get to board. And sit around waiting for another
undefinable while. And then, at last, we take off.</p>
<p>Inflight entertainment is geared to the Indian market, and distributed
with individual headphones and roof-mounted TV screens. I have seen a
bunch of weird stuff, including stalkerish Bollywood music videos and
an amazing movie featuring a rebellious Punjabi girl with
traditionalist parents, who recruits a depressed, stone-rich kid with an
inherited industrial empire as her aide in getting her big love. The
movie ends happily, and predictably, but has an astounding amount of
entertaining wackiness on the way there. I only wish I knew what it's
called. The female lead character is called Geet, at any rate...</p>
<p>About thrice bored-out-of-my-head later we arrive in Los Angeles. Late.
I get out of the plane, through immigration (uneventful), get my luggage
(another hour lost) and through customs. Heaps of kudos to the
marvelously nice border police who rerouted an entire line with me at
its head to the unused and undetected open customs checkers. Generosity
and friendliness that warms the cockles of my heart as well as the
sub-cockle-area.</p>
<p>Then the farce begins. I get out of the terminal to the transit desks.
"Yeah, that departs too soon for us. You need to go to the terminal.
Outside and then right."</p>
<p>I run like mad to terminal 7. Halfway there, I keel over from asthma,
sleep deprivation and malnutrition. I arrive and am advised to stand in
the hedged-in line of hopeless cases. Which is being served by "Our two
competent agents who are doing their best" - sorry, the empty desk.</p>
<p>And I stand there for an hour. I arrive in the hall in time to get on
the (heavily delayed) flight I was supposed to go on. But nobody wants
to take care of getting me the boarding card. I only get to speak to an
agent five minutes after that flight left.</p>
<p>She puts me on standby for the 10.30pm to San Diego. Which would have me
arrive in San Diego after over 29h of constant (and awake) travel. IF I
make the flight at all, that is. I try to talk backup plans and other
options, citing over and over again that United put me on Air India in
the first place and that I need to be in San Diego by tomorrow - having
both informal faculty interviews and workshops planned for that day. AND
that I need to get to my hotel before they void my reservation.</p>
<p>She just shrugs. And gives me the phone number to the closed customer
service hotlines.</p>
<p>I go through security on the verge of hysteria from malnutrition, sleep
deprivation and a mounting panic. A TSA agent asks me what's up, and
then tells me that I need to hold United to their obligations. Kudos to
law enforcement in general - throughout this trip they have been a
source of uplifting, supportive comments and actions.</p>
<p>I get through security without any problem. Dive into McDonalds. And
take care of the malnutrition part. With a filled stomach, my mind works
better and less panicky.</p>
<p>At the gate, they nod sympathetically and then give me a boarding pass.
I'm IN!</p>
<p>And shortly after that, they tell me that 10.30pm got turned into 12.15am,
since the aircraft needs to go to Santa Barbara and back before we can
ride it.</p>
<p>Here's hoping that 500 West has all-night check-in. And that restful
evening recharging my batteries for the congress week has kinda
evaporated. Meh.</p>
Calling all San Diego participants and Californians2007-12-20T13:51:00+01:00Michitag:blog.mikael.johanssons.org,2007-12-20:calling-all-san-diego-participants-and-californians.html<p>I'll be in San Diego for the <a class="reference external" href="http://ams.org/amsmtgs/2109_intro.html">AMS-MAA Joint Mathematics
Meetings</a>, January 5-11. I
would be happy to meet up with cool people, blog readers, blog writers,
and whatnot - regardless of whether you will actually participate in
the meeting or not. Drop me an email (contact data on the [about] page
here) and we'll coordinate something.</p>
<p>Also, I'll be
<a class="reference external" href="http://www.ams.org/amsmtgs/2109_program_monday.html#2109:AMSCP22">speaking</a>
<a class="reference external" href="http://www.ams.org/amsmtgs/2109_program_tuesday.html#2109:SS39A">twice</a>.
Come listen - if you dare. ;)</p>
The year 2007 in review2007-12-19T12:57:00+01:00Michitag:blog.mikael.johanssons.org,2007-12-19:the-year-2007-in-review.html<p>From each month, the first sentence of the first post.</p>
<p>January: I decided on a whim to look in at the Dilbertblog, where the
top post at the moment has Scott Adams calling all atheists that discuss
on the net irrational, using a rather neat strawman carbon copy of most
discussions of faith between believers (i.e. mostly Christians) and
atheists he has seen on the web.</p>
<p>February: The second carnival of mathematics is up over at Good Math,
Bad Math.</p>
<p>March: I just met up with the workgroup in the Deutsche
Mathematikervereinigung (German Association of Mathematicians) with
interest spanning</p>
Papers status2007-12-16T10:34:00+01:00Michitag:blog.mikael.johanssons.org,2007-12-16:papers-status.html<p>I just received my first ever referee's report. Yikes!</p>
<p>Suffice to say, the report did not, as some I've seen blogged about,
tear me a new one. Far from it - it was civil, kind, and pointed out
several areas where my article text overlapped known arguments from
other people and was generally superfluous as well as several areas
where my article was too curt and didn't actually spell out the new
ideas sticking in it.</p>
<p>Also requested: making explicit the relation of my results, and of those
I rely on, to the results of the Grand Old Man of applying
[tex]A_\infty[/tex]-techniques in group cohomology, and discussing
these in more detail.</p>
<p>I know I couldn't expect to write The Perfect Article as my first
submission ever. And it's not a flat out denial. And it brings
constructive comments about how to make this a better article. Still, I
think my ego needs a little bit of training to learn to cope with this
part of the review process.</p>
Planning the future2007-12-14T16:31:00+01:00Michitag:blog.mikael.johanssons.org,2007-12-14:planning-the-future.html<p>The last meeting with my 10th grade topology kids this year just
finished. We introduced singular homology, calculated the singular
homology of a point and discussed homeomorphism invariance.</p>
<p>Next term, we'll want to show homotopy invariance and that the singular
and simplicial homology coincide when applicable. After that, we'll
change directions slightly.</p>
<p>The future after that holds knot theory, it was decided today. We'll want
to introduce knots, look at Reidemeister moves and basic knot
invariants. I use basic here in a pretty wide sense - we'll probably do
the Jones polynomial and we might even end up doing Khovanov homology if
I feel particularly insane late spring.</p>
Riding the Google Charts API bandwagon2007-12-13T15:05:00+01:00Michitag:blog.mikael.johanssons.org,2007-12-13:riding-the-google-charts-api-bandwagon.html<p>Last week, the news hit the blogosphere that Google had released a <a class="reference external" href="http://code.google.com/apis/chart/">beta
API for generating graphs</a> using
a reasonably easy and transparent GET parametrisation.</p>
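To give a flavour of that GET parametrisation, here is a minimal Ruby sketch of the kind of URL the API expects. The helper name and the scaling are my own inventions, not the plugin's API; only the <tt class="docutils literal">cht</tt> (chart type), <tt class="docutils literal">chs</tt> (size), <tt class="docutils literal">chd</tt> (data) and <tt class="docutils literal">chtt</tt> (title) parameters come from the Charts documentation.

```ruby
require "cgi"

# Hypothetical helper (not the plugin's actual API): builds a Chart API URL
# by hand. The simple text encoding wants data scaled into 0..100.
def google_chart_url(data, type: "lc", size: "250x100", title: nil)
  max = data.max.to_f
  scaled = data.map { |v| (100 * v / max).round }
  params = { "cht" => type, "chs" => size, "chd" => "t:#{scaled.join(',')}" }
  params["chtt"] = CGI.escape(title) if title
  "http://chart.apis.google.com/chart?" + params.map { |k, v| "#{k}=#{v}" }.join("&")
end

url = google_chart_url([3, 7, 10], title: "Demo")
# e.g. http://chart.apis.google.com/chart?cht=lc&chs=250x100&chd=t:30,70,100&chtt=Demo
```

Dropping such a URL into an IMG tag is all it takes, which is why wrapping it as a Rails view helper is so natural.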
<p>Inspired by this, and by my early playing around with Ruby on
Rails, I decided to whack together a Rails plugin that takes care of
building the Google Charts IMG tag using what I hope is a reasonably
easy-to-use syntax.</p>
<p>I have a <a class="reference external" href="http://www.mikael.johanssons.org/testground">test-site using random
data</a> up for playing
around with it.</p>
<p>The test-site as such runs on Ruby on Rails. The controller does some
parsing and setting up of relevant arrays, and primarily generates
random data for plotting.</p>
<p>The view has the following source code:</p>
<pre class="literal-block">
<%= google_chart(@data, @alt_text, @options) %>
<% form_tag "" do %>
Type
<%= select_tag "options[type]", options_for_select(@typeopts,@options["type"]) %>
Title
<%= text_field_tag "options[title]", @options["title"] %>
Granularity
<%= select_tag "options[encoding]", options_for_select(@encopts,@options["encoding"]) %>
<%= hidden_field_tag "datalength", 3 %>
<%= hidden_field_tag "dslengthmin", 5 %>
<%= hidden_field_tag "dslengthmax", 50 %>
<%= hidden_field_tag "dsmin", 0 %>
<%= hidden_field_tag "dsmax", 10000 %>
<%= submit_tag %>
<% end %>
</pre>
<p>where the <tt class="docutils literal">@options</tt> hash is populated pretty much
straight off from the options parameter of the request.</p>
<p>The whole plugin resides at a <a class="reference external" href="http://rubyforge.org/projects/googlechart/">project of its own at
rubyforge</a>. So far, I
have not yet implemented color handling, grid lines or marker setting.
I'm happy to share code responsibility or to get suggestions for how to
handle the outstanding issues or improve on the code. Just get in touch
with me if you feel like contributing.</p>
Pluralization in Rails2007-12-09T22:13:00+01:00Michitag:blog.mikael.johanssons.org,2007-12-09:pluralization-in-rails.html<p>So, there is this one big and neat framework called Rails, building on
top of this one neat new programming language called Ruby.</p>
<p>And one of the things that makes Rails so Damn Neat is that if you only
set things up the right way around, it guesses almost everything you
need it to guess for you.</p>
<p>One of the ways it does this is by <em>pluralization</em>. Basically, the model
<tt class="docutils literal">Foo</tt> is defined in <tt class="docutils literal">app/models/foo.rb</tt> and it accesses the
database table <tt class="docutils literal">foos</tt>.</p>
<p>So, when talking a good friend through the basics, we created the table
<tt class="docutils literal">persons</tt> and generated the model <tt class="docutils literal">Person</tt>. And promptly got an
error from the framework.</p>
<p>It turns out that the pluralization of <em>person</em> is <em>people</em>. I wonder
what other irregularities they built into the system. If I have a model
called <tt class="docutils literal">Index</tt>, does Rails expect it to read from the database table
<tt class="docutils literal">indexes</tt> or from <tt class="docutils literal">indices</tt>?</p>
<p><em>Edited to add:</em></p>
<p>So, I found out that in the Rails console, I can fiddle around with
pluralization myself.</p>
<p>Which led to this experiment:</p>
<pre class="literal-block">
>> "index".pluralize
=> "indices"
>> "simplex".pluralize
=> "simplices"
>> "matrix".pluralize
=> "matrices"
>> "complex".pluralize
=> "complices"
</pre>
<p>And at this point, the chosen pluralization engine simply Does Not Work.
The pluralization of complex, at least as used in mathematics, is under
no circumstances complices. We'd say complexes. Matrix, on the other
hand, pluralizes correctly. And simplex pluralizes correctly or
incorrectly depending on which mathematician you ask.</p>
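For flavour, here is a toy rule-based pluralizer in the spirit of what an inflection engine does, with the mathematicians' exceptions baked in up front. This is an entirely hypothetical sketch of mine, not Rails' actual inflector; the point is just that the whole mechanism is a first-match list of regex rules.

```ruby
# Toy pluralizer: scan an ordered rule list, apply the first match.
# All rules and exceptions here are my own, illustrative choices.
RULES = [
  [/(matr)ix$/i,           '\1ices'],
  [/(ind|vert|simpl)ex$/i, '\1ices'],
  [/x$/i,                  'xes'],   # "complex" -> "complexes"
  [/$/,                    's'],     # default: just tack on an s
].freeze

def pluralize(word)
  pattern, replacement = RULES.find { |p, _| word.match?(p) }
  word.sub(pattern, replacement)
end

pluralize("complex")  # => "complexes"
pluralize("matrix")   # => "matrices"
```

With this ordering, complexes and matrices both come out the way a mathematician would want; which shows the disagreement is just a question of which exception rules the engine ships with.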
Reaching for the postdoc world doorknocker2007-12-05T12:13:00+01:00Michitag:blog.mikael.johanssons.org,2007-12-05:reaching-for-the-postdoc-world-doorknocker.html<p>The last postdoc carnival for 2007 is coming to town, and given my
current position in my career, I thought I'd try to slowly edge into
that arena as well.</p>
<p>A short background blurb for those who haven't read this blog before -
and for those who haven't heard the story: I'm a mathematics PhD student
from Sweden in Germany, living apart from my wife for about 2</p>
Synaesthesia and cognition2007-12-03T08:52:00+01:00Michitag:blog.mikael.johanssons.org,2007-12-03:synaesthesia-and-cognition.html<p>So, there is this one condition called
<a class="reference external" href="http://en.wikipedia.org/wiki/Synaesthesia">synaesthesia</a>, where
basically perception gets crosslinked. Most commonly, numbers, letters,
and words get colours coupled to them. I have a few friends who I know
have this variety.</p>
<p>The more exotic varieties couple more or other senses to each other.</p>
<p>The whole thing gets Really Interesting, and ties in to quite a bit of
philosophy as well, when you start coming near the really odd cases.
<a class="reference external" href="http://en.wikipedia.org/wiki/Qualia">Qualia</a> is the philosophical
term for "how things are perceived by us". Basically, it boils down to
the following: if I see something red, is this intrinsic to the object,
or something existing in my perceptive neurons only?</p>
<p>And so far, arguing about it has been more or less all there was. At
least known to me.</p>
<p>Then I stumbled across one of the <a class="reference external" href="http://nielsenhayden.com/makinglight/archives/009679.html">latest
post</a> at
<a class="reference external" href="http://nielsenhayden.com/makinglight/">Making Light</a>. They link to
the story of a <a class="reference external" href="http://cosmicvariance.com/2007/11/05/martian-colors/">colourblind
synaesthete</a>.
He perceives, through his synaesthesia, colours that he DOES NOT perceive
in the real world.</p>
<p>I don't really have much of a philosophical point of my own - I seem to
get entangled whenever I try to start writing one - more than Go! And
Read! The Making Light thread is a good starting point for this, and the
comments are - as is normal over there - absolutely brilliant.</p>
It's carnival time!2007-12-01T22:54:00+01:00Michitag:blog.mikael.johanssons.org,2007-12-01:its-carnival-time.html<p><a class="reference external" href="http://sbseminar.wordpress.com/2007/12/01/carnival-of-mathematics-21-bar-hopping-at-last/">Issue #
21</a>
in the <a class="reference external" href="http://carnivalofmathematics.wordpress.com">Carnival of Mathematics
series</a> is up now over at
the (not so) <a class="reference external" href="http://sbseminar.wordpress.com">Secret Blogging
Seminar</a>.</p>
<p>The resulting discussion there amuses at least me.</p>
Checking email 4000 times a day2007-11-20T18:12:00+01:00Michitag:blog.mikael.johanssons.org,2007-11-20:checking-email-4000-times-a-day.html<p>In <a class="reference external" href="http://chronicle.com/jobs/news/2007/11/2007112001c/careers.html">a recent
column</a>
at <a class="reference external" href="http://chronicle.com">The Chronicle of Higher Education</a>, the
columnist writes</p>
<blockquote>
I'm a latecomer to it, in part because I have a very hit-or-miss
interest in new technologies. (I still don't own a cell phone, for
example, though I check my e-mail 4,000 times a day.)</blockquote>
<p>Now. There are 24 hours in a day. 1 440 minutes. 86 400 seconds. Thus,
checking e-mail 4 000 times in a day would require you to check your
inbox every 21.6 seconds. Day and night.</p>
<p>Either the author is innumerate or hyperactive.</p>
Comments temporarily disabled2007-11-19T16:41:00+01:00Michitag:blog.mikael.johanssons.org,2007-11-19:comments-temporary-disabled.html<p>Due to a spectacular spam storm, incited by akismet.com being unreachable
from the webserver, I have decided to globally shut off commenting for
the time being.</p>
<p>This should be a temporary state, and I hope that the akismet issue
resolves itself soon.</p>
High school topology restarting2007-11-16T16:34:00+01:00Michitag:blog.mikael.johanssons.org,2007-11-16:high-school-topology-restarting.html<p>Today, I told my two bright students about abstract and geometric
simplicial complexes, about the boundary map and the chain complex over
a ring R associated with a simplicial complex Δ, and assigned them
reading out of Hatcher's Algebraic Topology.</p>
<p>The next couple of weeks will be spent doing homology of simplicial
complexes, singular homology, equivalence of the two, neat things you
can do with them; and then we'll start moving towards a Borsuk-Ulam-y
topological combinatorics direction.</p>
<p>I might end up pulling combinatorics papers from my old "gang" in
Stockholm on graph complexes, and graph property complexes, and poke
around those with them.</p>
Beef sous vide2007-11-12T12:34:00+01:00Michitag:blog.mikael.johanssons.org,2007-11-12:beef-sous-vide.html<p>I tried out an idea from
<a class="reference external" href="http://blog.khymos.org/2007/01/21/perfect-steak-with-diy-sous-vide-cooking/">Khymos</a>
recently when inviting a bunch of friends over for a party. We took six
slices, about 1100g, of beef entrecôte.</p>
Wreath products2007-10-29T22:11:00+01:00Michitag:blog.mikael.johanssons.org,2007-10-29:wreath-products.html<p>In a conversation on IRC, I started prodding at low-order wreath
products. It turned out to be quite a lot of fun doing it, so I thought
I'd try to expand it into a blog post.</p>
<p>First off, we'll start with a <strong>definition</strong>:</p>
<div class="line-block">
<div class="line">The wreath product [tex]H \wr_X G[/tex] is defined for groups G,H
and a G-set X by the following data. The elements of [tex]H \wr_X
G[/tex] are tuples [tex](h_{x_1},h_{x_2},\dots,h_{x_r};g)\in
H^{|X|}\times G[/tex]. The trick is in the group product. We define</div>
<div class="line-block">
<div class="line">[tex](h_{x_1},h_{x_2},\dots,h_{x_r};g)\cdot</div>
<div class="line">(h'_{x_1},h'_{x_2},\dots,h'_{x_r};g')= \\</div>
<div class="line">(h_{x_1}h'_{gx_1},h_{x_2}h'_{gx_2},\dots,h_{x_r}h'_{gx_r};gg')[/tex]</div>
<div class="line">(or possibly with a lot of inverses sprinkled into those indices)</div>
</div>
</div>
<p>Consider, first, the case of [tex]G=H=X=C_2[/tex] with the nontrivial
G-action defined by gx=1, g1=x. We get 8 elements in the wreath product
[tex]H \wr_X G[/tex]. Thus, the group is one of the groups with 8
elements - [tex]C_8, C_4\times C_2, C_2^3, Q, D_4[/tex]. We shall
try to identify the group in question using orders of elements as the
primary way of recognizing things. Consider an element ((x,y),z).</p>
<p>If z=1, then this squares to ((x<sup>2</sup>,y<sup>2</sup>),1) = ((1,1),1), so
all these elements have order 2 (except for ((1,1),1), which has order
1).</p>
<p>If z=g, then the element squares to ((xy,yx),1). So if x=y, then the
element squares to the identity, and thus has order 2. That leaves two
elements where this is nontrivial - namely ((1,h),g) and ((h,1),g) -
both of which square to ((h,h),1), which in turn squares to the identity.
Thus these two elements have order 4.</p>
<div class="line-block">
<div class="line">Thus, no element of order 8 - and thus the group is not
[tex]C_8[/tex]. It has elements of order 4, and thus it's also not
[tex]C_2^3[/tex]. A reasonable question at this point is whether it's
abelian. Consider the following products.</div>
<div class="line-block">
<div class="line">((1,h),g) * ((1,h),1) = ((h,h),g)</div>
<div class="line">((1,h),1) * ((1,h),g) = ((1,1),g)</div>
<div class="line">thus eliminating [tex]C_4\times C_2[/tex].</div>
</div>
</div>
<p>This leaves [tex]Q,D_4[/tex]. At this point, I'm basically going to
make a lucky guess and give an isomorphism [tex]H\wr_X G = D_4[/tex].
Let [tex]D_4=\langle a,b\mid a^4=b^2=abab=1\rangle[/tex].</p>
<div class="line-block">
<div class="line">We send ((1,h),g) = a, giving ((h,h),1) = a<sup>2</sup> and ((h,1),g) =
a<sup>3</sup> = a<sup>-1</sup>. We then pick any one of the remaining
nontrivial elements, say ((1,h),1) = b. We need to verify the
relations in the group presentation. a<sup>4</sup>=1 and
b<sup>2</sup>=1 are already reasonably obvious. So we compute</div>
<div class="line-block">
<div class="line">((1,h),g)*((1,h),1)*((1,h),g)*((1,h),1) =</div>
<div class="line">((h,h),g)*((h,h),g) = ((1,1),1). Which concludes the identification.</div>
</div>
</div>
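The whole identification can also be double-checked by brute force. The following sketch is my own (not part of the original derivation): it writes C<sub>2</sub> additively as {0, 1}, multiplies tuples by the wreath-product rule with the swapping action, and tallies element orders.

```ruby
# Brute-force order profile of C2 wr_X C2 with the swapping G-action.
# C2 is written additively as {0, 1}; the identity is ((0,0),0).
H = [0, 1]
swap = ->(g, pair) { g == 1 ? pair.reverse : pair }

# (h; g) * (h'; g') = (h . g(h'); g g'), multiplying componentwise mod 2.
mul = lambda do |(h, g), (h2, g2)|
  moved = swap.call(g, h2)
  [[(h[0] + moved[0]) % 2, (h[1] + moved[1]) % 2], (g + g2) % 2]
end

identity = [[0, 0], 0]
order = lambda do |e|
  acc, n = e, 1
  until acc == identity
    acc = mul.call(acc, e)
    n += 1
  end
  n
end

elements = H.product(H, H).map { |x, y, g| [[x, y], g] }
orders = elements.map(&order).sort
# D_4's profile: one identity, five elements of order 2, two of order 4.
p orders  # => [1, 2, 2, 2, 2, 2, 4, 4]
```

One identity, five involutions, and two elements of order 4 is exactly the order profile of D<sub>4</sub>, matching the hand computation above.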
<div class="line-block">
<div class="line">For a second take, we consider X to have trivial G-action. Then, given
[tex]x,y,z,a,b,c\in C_2[/tex], we get the multiplication</div>
<div class="line-block">
<div class="line">((x,y),z)*((a,b),c) = ((xa,yb),zc) - which is precisely the
multiplication in [tex](C_2\times C_2)\times C_2[/tex]. Thus,
this wreath product is just [tex]C_2^3[/tex].</div>
</div>
</div>
<div class="line-block">
<div class="line">Finally, for a more challenging group identification, we consider
[tex]G=X=C_3[/tex] with gx = x<sup>2</sup>. Elements in the wreath
product [tex]H \wr_X G[/tex] have the form [tex]((x,y,z),w)\in
H^3\times G[/tex], and we get the multiplication as</div>
<div class="line-block">
<div class="line">((x,y,z),w)*((a,b,c),d) = ((x,y,z)w(a,b,c),wd).</div>
</div>
</div>
<div class="line-block">
<div class="line">The group will be of order 24, so there's a few more to choose from.
For an elimination of all the abelian groups, consider</div>
<div class="line-block">
<div class="line">((h,1,1),g) * ((1,h,1),g) = ((h,1,h),g<sup>2</sup>)</div>
<div class="line">((1,h,1),g) * ((h,1,1),g) = ((1,1,1),g<sup>2</sup>)</div>
</div>
</div>
<div class="line-block">
<div class="line">Now, all elements ((x,y,z),1) have order 2, so to maximize order, we
need something like ((x,y,z),g). Now consider the powers of this
element.</div>
<div class="line-block">
<div class="line">((x,y,z),g)</div>
<div class="line">((xz,yx,zy),g<sup>2</sup>)</div>
<div class="line">((xzy,yxz,zyx),1) - each coordinate equals xyz, so this vanishes
precisely when an even number of x,y,z are non-identity, and otherwise is nontrivial.</div>
<div class="line">((xzyx,yxzy,zyxz),g)</div>
<div class="line">((xzyxz,yxzyx,zyxzy),g<sup>2</sup>)</div>
<div class="line">((xzyxzy,yxzyxz,zyxzyx),1) = ((1,1,1),1)</div>
</div>
</div>
<p>So, no element has order more than 6. This, again, rules out many of the
candidate groups.</p>
<div class="line-block">
<div class="line">To be specific - we can now partition the elements after order.</div>
<div class="line-block">
<div class="line">((1,1,1),1) - 1 element, order 1</div>
<div class="line">((x,y,z),1) - 7 elements, order 2</div>
<div class="line">((1,1,1),g), ((h,h,1),g), ((h,1,h),g), ((1,h,h),g),</div>
<div class="line">((1,1,1),g<sup>2</sup>), ((h,h,1),g<sup>2</sup>), ((h,1,h),g<sup>2</sup>),
((1,h,h),g<sup>2</sup>) - 8 elements, order 3</div>
<div class="line">((h,1,1),g), ((1,h,1),g), ((1,1,h),g), ((h,h,h),g),</div>
<div class="line">((h,1,1),g<sup>2</sup>), ((1,h,1),g<sup>2</sup>), ((1,1,h),g<sup>2</sup>),
((h,h,h),g<sup>2</sup>) - 8 elements, order 6</div>
</div>
</div>
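This partition, too, can be verified by brute force. The sketch below is mine (not part of the original argument): C<sub>2</sub> is written additively, g acts by rotating the tuple one step, matching the displayed products, and we take a census of the orders of all 24 elements.

```ruby
# Brute-force order census of C2 wr_X C3 with g rotating coordinates,
# so that ((x,y,z),g)^2 = ((xz,yx,zy),g^2) as in the displayed powers.
rotate = ->(t) { [t[2], t[0], t[1]] }                       # g.(x,y,z) = (z,x,y)
act    = ->(w, t) { w.times.inject(t) { |s, _| rotate.call(s) } }

mul = lambda do |(h, w), (h2, w2)|
  moved = act.call(w, h2)
  [[0, 1, 2].map { |i| (h[i] + moved[i]) % 2 }, (w + w2) % 3]
end

identity = [[0, 0, 0], 0]
order = lambda do |e|
  acc, n = e, 1
  until acc == identity
    acc = mul.call(acc, e)
    n += 1
  end
  n
end

elements = [0, 1].product([0, 1], [0, 1], [0, 1, 2])
                 .map { |x, y, z, w| [[x, y, z], w] }
census = Hash.new(0)
elements.each { |e| census[order.call(e)] += 1 }
p census.sort.to_h  # => {1=>1, 2=>7, 3=>8, 6=>8}
```

The census reproduces the partition above: 1 + 7 + 8 + 8 elements of orders 1, 2, 3 and 6, which is what gets matched against the small groups data next.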
<div class="line-block">
<div class="line">At this point, I grab Google as a research assistant. It turns up <a class="reference external" href="http://web.usna.navy.mil/~wdj/gap/small_groups.html">15
groups of order
24</a>. Using the
small groups database enumeration, we find:</div>
<div class="line-block">
<div class="line">24.1: has an element of order 12.</div>
<div class="line">24.2: has an element of order 24. Also is abelian.</div>
<div class="line">24.3: has an element of order 4.</div>
<div class="line">24.4: has an element of order 4.</div>
<div class="line">24.5: has an element of order 4.</div>
<div class="line">24.6: has an element of order 4.</div>
<div class="line">24.7: has an element of order 4.</div>
<div class="line">24.8: has an element of order 4.</div>
<div class="line">24.9: is abelian.</div>
<div class="line">24.10: has an element of order 4.</div>
<div class="line">24.11: has an element of order 4.</div>
<div class="line">24.12: has an element of order 4.</div>
<div class="line">24.13: A<sub>4</sub>xC<sub>2</sub></div>
<div class="line">24.14: D<sub>6</sub>xC<sub>2</sub></div>
<div class="line">24.15: is abelian.</div>
</div>
</div>
<p>Now, with the candidate groups narrowed down this far, we consider the
two remaining groups and (in my case, using Magma) count the number of
elements of each order.</p>
<div class="line-block">
<div class="line">A<sub>4</sub>xC<sub>2</sub>:</div>
<div class="line-block">
<div class="line">1 element of order 1</div>
<div class="line">7 elements of order 2</div>
<div class="line">8 elements of order 3</div>
<div class="line">8 elements of order 6</div>
</div>
</div>
<div class="line-block">
<div class="line">D<sub>6</sub>xC<sub>2</sub>:</div>
<div class="line-block">
<div class="line">1 element of order 1</div>
<div class="line">15 elements of order 2</div>
<div class="line">2 elements of order 3</div>
<div class="line">6 elements of order 6</div>
</div>
</div>
<p>which seals the deal. The group we've constructed is
A<sub>4</sub>xC<sub>2</sub>.</p>
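<p>As a sanity check (my own sketch, not part of the original post), the
order statistics of A<sub>4</sub>xC<sub>2</sub> can be tabulated directly,
modelling A<sub>4</sub> as the even permutations of four points:</p>

```python
from itertools import permutations
from collections import Counter

# Sketch: tabulate element orders in A4 x C2 and compare with the
# partition computed above.

def compose(p, q):
    """(p * q)(i) = p(q(i)) for permutations given as tuples."""
    return tuple(p[q[i]] for i in range(len(q)))

def sign(p):
    """Sign of a permutation via its inversion count."""
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

identity = tuple(range(4))
a4 = [p for p in permutations(range(4)) if sign(p) == 1]   # 12 even permutations
group = [(p, c) for p in a4 for c in (0, 1)]               # A4 x C2, 24 elements
e = (identity, 0)

def mult(a, b):
    return (compose(a[0], b[0]), (a[1] + b[1]) % 2)

def order(x):
    n, cur = 1, x
    while cur != e:
        cur = mult(cur, x)
        n += 1
    return n

counts = Counter(order(g) for g in group)
print(sorted(counts.items()))   # [(1, 1), (2, 7), (3, 8), (6, 8)]
```

<p>The output matches the 1/7/8/8 partition above, agreeing with the Magma
computation in the post.</p>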
<p>Exhibiting an explicit isomorphism goes beyond what I feel like doing
right this evening, so I leave it as a nice exercise for the interested
reader.</p>
Progress2007-09-26T04:25:00+02:00Michitag:blog.mikael.johanssons.org,2007-09-26:progress.html<pre class="literal-block">
dynkin:~/magma> magma
Magma V2.14-D250907 Wed Sep 26 2007 13:19:51 on dynkin [Seed = 1]
Type ? for help. Type &lt;Ctrl&gt;-D to quit.
Loading startup file "/home/mik/.magmarc"
> Attach("homotopy.m");
> Attach("assoc.m");
> Aoo := ConstructAooRecord(DihedralGroup(4),10);
> S := CohomologyRingQuotient(Aoo`R);
> CalculateHighProduct(Aoo,[x,y,x,y]);
z
> exit;
Total time: 203.039 seconds, Total memory usage: 146.18MB
</pre>
<p>And this is one major reason for the lack of updates recently.</p>
Settling in in Sydney2007-09-18T02:26:00+02:00Michitag:blog.mikael.johanssons.org,2007-09-18:settling-in-in-sydney.html<p>No mathematical content today. However, I do note that the mathematics
department in Sydney is located in a building as drab and boring as the
Stockholm University main building. Its main architectural feature is
the pale, washed out blue panels on the upper parts of the hollow
concrete slab.</p>
<p>Just a short way away, though, we find the Quadrangle - a cathedral in
the religion of learning, and the main building of Sydney campus.
Complete with stained glass windows and stucco heraldic designs, all
dedicated to the branches of scholarship.</p>
<p>Alas, the splendour suffers slightly from the extensive road
construction work, which has just about managed to fence in and tear up
almost all the tarmac in this corner of campus.</p>
Still away2007-09-09T12:19:00+02:00Michitag:blog.mikael.johanssons.org,2007-09-09:still-away.html<p>However, I am enjoying the Scottish countryside and just - today -
turned [tex]3^3[/tex] years of age.</p>
Coq and simple group theory2007-08-05T21:15:00+02:00Michitag:blog.mikael.johanssons.org,2007-08-05:coq-and-simple-group-theory.html<p>Trying to make the time until my flight leaves tomorrow go by, I played
around a bit with the proof assistant Coq. And after wrestling a LOT
with the assistant, I ended up being able to prove some pretty basic
group theory results.</p>
<p>And this is how it goes:</p>
<pre class="literal-block">
Section Groups.
Variable U : Set.
Variable m : U -> U -> U.
Variable i : U -> U.
Variable e : U.
Hypothesis ass : forall x y z : U, m x (m y z) = m (m x y) z.
Hypothesis runit : forall x : U, m x e = x.
Hypothesis rinv : forall x : U, m x (i x) = e.
</pre>
<p>This sets the stage. It defines a group as a <a class="reference external" href="http://unapologetic.wordpress.com/2007/08/04/group-objects/">group object in
Set</a>,
but without the diagonal map. It produces a minimal definition - the
left identity and inverse follow from the right ones, which we shall
prove immediately.</p>
<p>First off, the right unit is the only idempotent in the group.</p>
<pre class="literal-block">
Lemma unit_uniq : forall x : U, m x x = x -> x = e.
intros.
rewrite <- runit with x.
rewrite <- (rinv x).
rewrite ass.
rewrite H.
reflexivity.
Qed.
</pre>
<p>Now, <tt class="docutils literal">intros.</tt> is a way to automatically name all hypotheses in the
statement. <tt class="docutils literal">reflexivity</tt> tells us that if we could derive <tt class="docutils literal">A=A</tt>,
then we're done. And <tt class="docutils literal">rewrite</tt> applies a known result everywhere that
it could possibly be applied. The arrows tell us in which direction
the equalities should be employed.</p>
<p>Now, our right inverse is a left inverse:</p>
<pre class="literal-block">
Lemma linv : forall x : U, m (i x) x = e.
intros.
apply unit_uniq.
rewrite <- ass with (i x) x (m (i x) x).
rewrite ass with x (i x) x.
rewrite rinv.
rewrite ass.
rewrite runit.
reflexivity.
Qed.
</pre>
<p>and our right unit is a left unit:</p>
<pre class="literal-block">
Lemma lunit : forall x : U, m e x = x.
intros.
rewrite <- (rinv x).
rewrite <- ass.
rewrite linv.
rewrite runit.
reflexivity.
Qed.
</pre>
<p>In a group, we have cancellation laws. First, we can introduce a common
left or right factor:</p>
<pre class="literal-block">
Lemma lcancel: forall x y z : U, y = z -> m x y = m x z.
intros.
replace y with z.
reflexivity.
Qed.
Lemma rcancel: forall x y z : U, x = y -> m x z = m y z.
intros.
replace x with y.
reflexivity.
Qed.
</pre>
<p>and we can cancel them again:</p>
<pre class="literal-block">
Lemma lcocancel: forall x y z : U, m x y = m x z -> y = z.
intros.
rewrite <- lunit.
rewrite <- (lunit y).
rewrite <- (linv x).
rewrite <- ass.
rewrite <- ass.
rewrite H.
reflexivity.
Qed.
Lemma rcocancel: forall x y z : U, m x z = m y z -> x = y.
intros.
rewrite <- runit.
rewrite <- (runit x).
rewrite <- (rinv z).
rewrite ass.
rewrite ass.
rewrite H.
reflexivity.
Qed.
</pre>
<p>There's most probably something obscure going on with the proof model in
Coq: I often get the feeling that I run backwards when I write the
proofs. Thus, the result we need in order to perform a cancellation
when we're rewriting our proof goal is <tt class="docutils literal">y = z <span class="pre">-></span> m x y = m x z</tt> and
not the other way around, which surprises me a bit.</p>
<p>The inverses are unique:</p>
<pre class="literal-block">
Lemma inv_uniq: forall x y : U, m x y = e -> y = i x.
intros.
apply (lcocancel x).
rewrite rinv.
apply H.
Qed.
</pre>
<p>and the well known product rule
(xy)<sup>-1</sup>=y<sup>-1</sup>x<sup>-1</sup> holds:</p>
<pre class="literal-block">
Lemma prod_inv: forall x y : U, i (m x y) = m (i y) (i x).
intros.
apply (lcocancel (m x y)).
rewrite rinv.
rewrite <- ass.
rewrite ass with y (i y) (i x).
rewrite rinv.
rewrite lunit.
symmetry in |- *; apply rinv.
Qed.
</pre>
<p>Finally, a slightly larger theorem is the statement that if all elements
of a group have order 2, then the group is commutative. This is, in Coq,
the following argument:</p>
<pre class="literal-block">
Theorem ord2_comm: (forall x : U, m x x = e) -> (forall y z : U, m y z = m z y).
intros.
rewrite <- (runit z).
symmetry in |- *.
rewrite <- lunit.
replace (m e (m y (m z e))) with (m (m z z) (m y (m z (m y y)))).
rewrite ass.
rewrite ass.
rewrite ass.
apply rcancel.
rewrite <- ass.
rewrite <- ass with z z y.
rewrite <- ass with z (m z y) (m z y).
rewrite H.
reflexivity.
rewrite H.
rewrite H.
reflexivity.
Qed.
</pre>
<p>It doesn't really end up being more legible than if I had written it out
myself, but it is machine checkable, and not THAT far from being legible
as well. If you step through it, you certainly do see the mapping
between Coq and pen-and-paper proofs pretty much immediately.</p>
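<p>The ord2_comm theorem can also be sanity-checked on a concrete group.
Here is a quick brute-force sketch of mine (not part of the original post),
using (Z/2)<sup>3</sup> under bitwise XOR, where every element satisfies
x·x = e:</p>

```python
from itertools import product

# Sketch: in (Z/2)^3 under XOR every element has order at most 2, so by
# ord2_comm the group must be commutative -- verified here by brute force.
elements = list(product((0, 1), repeat=3))
e = (0, 0, 0)

def m(x, y):
    """The group operation: componentwise XOR."""
    return tuple(a ^ b for a, b in zip(x, y))

assert all(m(x, x) == e for x in elements)   # hypothesis of ord2_comm
assert all(m(x, y) == m(y, x) for x in elements for y in elements)
print("ord2_comm confirmed for (Z/2)^3")
```

<p>Of course the Coq proof covers every such group at once; the script just
illustrates one instance.</p>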
<p>Sometime in the future, I'll sit down and do some category theory and
topology in Coq. I think the 5-lemma and other diagram chases will end
up being amusing.</p>
Politics and absences2007-08-04T20:21:00+02:00Michitag:blog.mikael.johanssons.org,2007-08-04:politics-and-absences.html<p>First off, <a class="reference external" href="http://www.maths.manchester.ac.uk/~avb/micromathematics/">Alexander
Borovik</a>
has
<a class="reference external" href="http://www.maths.manchester.ac.uk/~avb/micromathematics/2007/07/gold-sand-in-stream.html">been</a>
<a class="reference external" href="http://www.maths.manchester.ac.uk/~avb/micromathematics/2007/07/photos-from-mathematical-village_26.html">writing</a>
a couple of times about a REALLY nice-sounding mathematical village in
Turkey.</p>
<p>And it turns out, <a class="reference external" href="http://www.maths.manchester.ac.uk/~avb/micromathematics/2007/08/blackboard-under-arrest.html">the
village</a>
<a class="reference external" href="http://www.maths.manchester.ac.uk/~avb/micromathematics/2007/08/blackboard-under-arrest-ii.html">got
closed</a>
this summer, with the government officials citing "education without
permission" as their reason to close it.</p>
<p><a class="reference external" href="http://www.maths.manchester.ac.uk/~avb/micromathematics/2007/08/blackboard-under-arrest-petition.html">Alexander
is</a>
<a class="reference external" href="http://savesummerschool.blogspot.com/">sending a petition</a> to the
prime minister. You should sign it.</p>
<p>In other news, I'm currently just waiting for Monday to come along. Why
Monday? Because that's the day I'm going back to Stockholm again. Once
there, I'll spend a couple of weeks spending time with friends and
family, and then I will go and vow fidelity and those other things. The
25th of August, in case you're about to ask.</p>
<p>All in all, this means that posting will be sparse if existent until
mid-September - when I arrive, fresh out of my honeyfortnight, in
Sydney, where the Magma research group hosts me for some 5-odd weeks.
There I expect to have office space, an internet connection and a
computer.</p>
<p>See y'all then.</p>
Advertising policy2007-08-01T16:38:00+02:00Michitag:blog.mikael.johanssons.org,2007-08-01:advertising-policy.html<p>Today I received an email that kind of convinced me that my blog gets seen.
It offered me $35 to put up an ad for a phone service on one of my old
blog posts.</p>
<p>What differentiated this offer from all other spam I get was that it was
actually written well enough, and tailored enough, that I believe this
guy would even go through with it. Only ...</p>
<p>I am not interested.</p>
<p>I run this blog because I like running it. I do system admin myself too.
The domain name is mine since my family wants it, and my parents chip
in. The net connection also is something that the family chips in on,
and is handled without significant cost.</p>
<p>All in all, I do not NEED ads to keep this place up and running.</p>
<p>Furthermore, I do not WANT ads here. I want this place to be my venue of
expression. Not some advertiser or other. I want complete control of my
content, and no bullshit with banners or whatever.</p>
<p>If you want me to recommend something of yours, make sure I get to try
it. I write about things I end up liking occasionally, and the best way
to get me to write about your things is to produce things I like and
make sure I'm exposed to them.</p>
<p>But I will not publish your ads.</p>
New additions to the blogosphere2007-07-29T22:20:00+02:00Michitag:blog.mikael.johanssons.org,2007-07-29:new-additions-to-the-blogosphere.html<p>... or another bout of more-or-less shameless self-promotion.</p>
<p>I took the initiative, and invited some of the relevant Powers That Be
to start an [tex]*_\infty[/tex]-themed group blog: <a class="reference external" href="http://infinity.blogseminar.net">The Infinite
Seminar</a>.</p>
<p>I also perceived a lack of blog aggregators, so I started <a class="reference external" href="http://planet.blogseminar.net">Planet Math
Blogseminars</a> to aggregate group blogs
in mathematics.</p>
<p>While I was at it, I bought the blogseminar.net domain. I'd be happy to
allocate subdomains of this to decent enough blogs that want in on it.</p>
13 on a friday2007-07-27T10:35:00+02:00Michitag:blog.mikael.johanssons.org,2007-07-27:13-on-a-friday.html<div class="line-block">
<div class="line">The new <a class="reference external" href="http://polymathematics.typepad.com/polymath/2007/07/triskaidekaphil.html">carnival of
mathematics</a>
is up over at
<a class="reference external" href="http://polymathematics.typepad.com">PolyMathematics</a>.</div>
<div class="line-block">
<div class="line">Yours truly is featured, but other than that, there seems to be heavy
overweight on the educator side.</div>
</div>
</div>
<p>Do we have the volume for a Carnival of Research Mathematics?</p>
AucTeX hackery2007-07-17T15:16:00+02:00Michitag:blog.mikael.johanssons.org,2007-07-17:auctex-hackery.html<p>One thing that has been bugging me for quite some time with AucTeX
(which I love, in general) has been that I wasn't able to reset the
bloody hot key for math mode input.</p>
<p>The original setting maps it to Shift plus the key left of backspace, since
it's an accent key which I occasionally use for .. y'know .. accenting
letters, and thus don't want immediate output from. And
<tt class="docutils literal"><span class="pre">M-x</span> <span class="pre">set-variable</span> <span class="pre">LaTeX-math-abbrev-prefix</span></tt> didn't do anything close
to what I expected.</p>
<p>Today, I, on a whim, go and search the auctex mailing list archives for
this. And lo and behold! One of the first messages tells me that I need
to do <tt class="docutils literal"><span class="pre">M-x</span> <span class="pre">customize-variable</span> <span class="pre">LaTeX-math-abbrev-prefix</span></tt>. So I do, and
it has my changes already, but not committed, so after committing the
changes I try it out and it just works!</p>
<p>This should speed up usage of Emacs for me a bit.</p>
In-between times2007-07-17T12:25:00+02:00Michitag:blog.mikael.johanssons.org,2007-07-17:in-between-times.html<p>These are the times that eat my productivity. The times that ensure that
entire days go by and I afterwards feel nothing has happened at all.
These times that are too short for productive work - where I know from
the beginning that I cannot sit down and <em>do something</em> - too little
time for reading, for coding, for writing, for .. well .. anything. And
yet, while trying to get through them, they are obviously too long. An
hour here, an hour there, interspersed with lunch, then coffee, then a
seminar, and all of a sudden out of an 8 hour workday, the only vaguely
productive thing that got done was hearing the seminar.</p>
<p>Fragmentation kills my productivity. With a fragmented workday, I have
the time available neatly chopped up in pieces of free time that fall
in-between. That are too short, but yet cover almost the entire workday.</p>
<p>And I end up reading programming.reddit until I can find nothing
noteworthy anymore. And I prowl my blogroll, annoyed at the lack of
updates since an hour ago. Or worse: I sit down and try to convince my
Emacs that ` is a bad key for the LaTeX-math-abbrev-prefix, and try to
tell it that it should use</p>
Today seems to be a day for posting...2007-07-12T20:20:00+02:00Michitag:blog.mikael.johanssons.org,2007-07-12:today-seems-to-be-a-day-for-posting.html<p><a class="reference external" href="http://complexzeta.wordpress.com">ComplexZeta</a> <a class="reference external" href="http://blog.mikael.johanssons.org/archive/2007/07/the-why-and-the-what-of-homological-algebra/">asked me about the
origins of my
intuitions</a>
for homological algebra in my recent post. The answer got a bit lengthy,
so I'll put it in a post of its own.</p>
<p>I find Weibel very readable - once the interest is there. It's a good
reference, and not as opaque as, for instance, the MacLane +
Hilton-Stammbach couplet can be at points.</p>
<p>The interest, however, is something I blame my alma mater for. Once upon
a time, Jan-Erik Roos went to Paris and studied with Grothendieck. When
he got back, he got a professorship at Stockholm University without
having finished his PhD. He promptly made sure that nowadays (when he's
an Emeritus stalking the halls) there is not a single algebraist at
Stockholm University without some sort of intuition for homological
algebra.</p>
<p>So, my MSc advisor, J</p>
Slumps and crunches2007-07-12T19:59:00+02:00Michitag:blog.mikael.johanssons.org,2007-07-12:slumps-and-crunches.html<p>This term of teaching ends next week.</p>
<p>When I got back from T'bilisi, just over a month ago, I had research
leads that I expect will end in three different publications.</p>
<p>I was slated with writing one LARP report for a swedish gaming magazine,
and a series of various popular mathematics articles for the local
student-run mathematics magazine here.</p>
<p>All in all, very many things converged this June/July for me.</p>
<p>It has started paying off though - the gaming article is published, and
yesterday I submitted the first of the T'bilisi articles to the <a class="reference external" href="http://rmi.acnet.ge/jhrs/">Journal
of Homotopy and Related Structures</a> as
well as to the arXiv.</p>
<p>I now am listed on the arXiv with three papers, out of which <a class="reference external" href="http://arxiv.org/abs/math/0502348">one is
already published</a>, <a class="reference external" href="http://arxiv.org/abs/math/0610374">one is
rejected</a> (not unjustly so), and
one is <a class="reference external" href="http://arxiv.org/abs/0707.1637">just submitted for review</a>.</p>
<p>Note that the name changed for the last paper. I am cheating a very
little bit - August 25th, I will marry the most marvelous woman I have
ever met, and will - among other reasons for the sake of academic
unicity - take her last name in addition to my own.</p>
<p>Aaaaanyway. This is my excuse for having missed .. what is it? three
carnival issues?</p>
The why and the what of homological algebra2007-07-12T19:35:00+02:00Michitag:blog.mikael.johanssons.org,2007-07-12:the-why-and-the-what-of-homological-algebra.html<p>I seem to have become the Goto-guy in this corner of the blogosphere for
homological algebra.</p>
<p>Our beloved <a class="reference external" href="http://unapologetic.wordpress.com">Dr. Mathochist</a> just
<a class="reference external" href="http://unapologetic.wordpress.com/2007/07/11/what-is-knot-homology/">gave
me</a>
the task of taking care of any readers prematurely interested in it
while telling us all just a tad too little for satisfaction about
Khovanov homology.</p>
<p>And I received a letter from the Haskellite crowd - more specifically
from <a class="reference external" href="http://www.alpheccar.org">alpheccar</a>, who keeps on reading me
writing about homological algebra, but doesn't know where to begin with
it, or why.</p>
<p>I <a class="reference external" href="http://blog.mikael.johanssons.org/archive/2006/01/introduction-to-algebraic-topology-and-related-topics-i/">have
already</a>
a <a class="reference external" href="http://blog.mikael.johanssons.org/archive/2006/02/monads-algebraic-topology-in-computation-and-john-baez/">few
times</a>
written about homological algebra, algebraic topology and <a class="reference external" href="http://blog.mikael.johanssons.org/archive/2006/05/my-first-group-cohomology-did-i-screw-up/">what
it</a>
<a class="reference external" href="http://blog.mikael.johanssons.org/archive/2006/11/a-for-the-layman/">is I
do</a>,
<a class="reference external" href="http://blog.mikael.johanssons.org/archive/2006/07/carry-bits-and-group-cohomology/">on
various</a>
<a class="reference external" href="http://blog.mikael.johanssons.org/archive/2006/07/triangulated-categories/">levels of
difficulty</a>,
but I guess - especially with <a class="reference external" href="http://carnivalofmathematics.wordpress.com">the
carnival</a> dry-out I've
been having - that it never hurts writing more about it, and even trying
to get it so that the non-converts understand what's so great about it.</p>
<p>So here goes.</p>
<p>Alpheccar writes that to his understanding, the idea is to build
topological spaces out of algebraic gadgets, and then do topology on
them. This is a part of the story, and certainly historically very
important, but it is far from all of it.</p>
<div class="section" id="motivations">
<h2>Motivations</h2>
<p>The revolution for homological algebra pretty much started with
Eilenberg-MacLane - who wrote articles that did the constructions
necessary for the very topological versions of homological algebra - but
without ever involving the actual topological spaces.</p>
<p>The point is that the way you do algebraic topology is that you tend to
set up a functor Top → R-ChMod that assigns a chain complex of R-modules
to each (nice enough) topological space, and then you add functors
R-ChMod → R-Mod that extract informations from these. Typical examples
are cellular chain complexes with coefficients somewhere nice for the
first functor, and then homology or cohomology for the second functor -
depending on what viewpoint is the most obvious.</p>
<p>The revolution was that we simply throw out that first functor.</p>
<p>In order to study (co)homology, we don't really need to care that there
was a topological space somewhere to begin with. We only, really, need a
nice enough category of chain complexes (if it's abelian, then that's
fine - we get the long exact sequences in homology and other niftiness
easily then, but if it's not, triangulated will do...) and we study
certain types of functors from these to module categories.</p>
<div class="section" id="homological-algebra-as-a-tool-for-algebraic-topology">
<h3>Homological algebra as a tool for algebraic topology</h3>
<p>Since, in the viewpoint introduced above, homological algebra is a part
of the process used in algebraic topology, it turns out to be really
neat to sit down and just prove a lot of neat results in homological
algebra - with the background that at some later point, these might be
useful once you sit down with the topology. I got hold of this
particular point early - I had started my MSc thesis work in homological
algebra before I took my first real topology course, and during that
course, the less pointset topology and the more algebraic topology we
did, the easier everything got. The fundamental results we needed to
grasp to do algebraic topology in any amount of seriousness were basically
just applications of all the cornerstone results in homological algebra,
and thus perfectly obvious to my clique of arrogant undergrads.</p>
<p>This particular piece goes far. Why don't we need to worry about whether
we're doing homology or cohomology? Answer: since Ext and Tor are dual
in certain specific ways, which ends up meaning that although internal
algebraic structures might be finicky, the module structure is very
neat, and in k-Mod = Vect<sub>k</sub>, we end up with no worries anywhere.</p>
</div>
<div class="section" id="testing-grounds">
<h3>Testing grounds</h3>
<p>The viewpoint of homological algebra as a tool for algebraic topology
goes pretty deep. When I ask my advisor what to put in texts where I
motivate why our field is important, in the standard answer he gives me
the following pops up:</p>
<blockquote>
Group cohomology is important, since it is a field where topological
methods can be tested reasonably safely, since we have the group
theoretic attack vectors in addition to the purely topological.</blockquote>
<p>On the other hand, group cohomology also turns out to be important,
since we get important information for the study of groups out of the
homological algebra side of things.</p>
</div>
<div class="section" id="low-order-ext">
<h3>Low order Ext</h3>
<p>The area where this is most notable is in representation theory. This
field comes in several flavours: group representations, where we study
kG-Mod for some (sometimes finite) group G; Lie algebra representations,
where we study g-Mod for some Lie algebra g; quiver representations,
where we study kQ-Mod for some (finite) quiver algebra kQ - and so on.
One question that tends to crop up here, and with a high degree of
importance for the non-homological algebraists around me - is what
happens if we know only parts of our group? Can we say something about
the entire group based on that?</p>
<p>It turns out that we can. There are very neat correspondences between
the lower order Ext groups over kG and the behaviour of G itself. I'm
going to stick to group representations here, since that's the area I
know best - however, this is something that pops up analogously all over
the place.</p>
<div class="section" id="extensions">
<h4>Extensions</h4>
<p>Suppose you have some R-module K that you know embeds, in some specific
way, into some larger R-module M. And suppose you find the quotient
L=M/K in some manner. What could, then, M be? One obvious answer is
[tex]M=K\oplus L[/tex], but is this enough? This ends up depending on
Ext<sup>1</sup><sub>R</sub>(L,K), with each element of this particular Ext
group indexing precisely one such extension.</p>
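<p>A toy instance of my own (abelian groups, i.e. Z-modules):
Ext<sup>1</sup><sub>Z</sub>(Z/2, Z/2) = Z/2, and its two elements index the
two extensions of L = Z/2 by K = Z/2 - the split one, Z/2 x Z/2, and the
non-split one, Z/4:</p>

```python
from itertools import product

# Sketch: Z/4 and Z/2 x Z/2 both contain a copy of K = Z/2 with quotient
# L = Z/2, yet they are non-isomorphic -- the two classes in Ext^1.
z4 = list(range(4))                      # contains K = {0, 2}, quotient Z/2
klein = list(product((0, 1), repeat=2))  # the split extension K (+) L

def order_z4(x):
    """Order of x in the additive group Z/4."""
    n, cur = 1, x
    while cur % 4 != 0:
        cur, n = cur + x, n + 1
    return n

# Z/4 has an element of order 4, while the Klein group is killed by 2,
# so the two extensions are genuinely different.
assert max(order_z4(x) for x in z4) == 4
assert all(((2 * a) % 2, (2 * b) % 2) == (0, 0) for a, b in klein)
print("two inequivalent extensions of Z/2 by Z/2")
```

<p>For kG-modules the bookkeeping is heavier, but the principle is the same:
the Ext group keeps track of all the ways the gluing can go.</p>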
<p>This is at the core of Maschke's theorem, by the way, which says that if
the characteristic of the field k doesn't divide the group order |G|,
then by a specific construction, the <strong>only</strong> extension possible for
<strong>any</strong> kG-modules is the split extension - the one where we just take
the direct sum.</p>
<p>This all leads to a wealth of useful information and ideas in
representation theory. For instance, there is a way to describe modules
proposed by Dave Benson and some co-authors, where you draw diagrams
with each vertex being a simple module, occupying that spot in a
composition series, and the edges being taken from the relevant
Ext<sup>1</sup>.</p>
</div>
<div class="section" id="invariants-and-coinvariants">
<h4>Invariants and coinvariants</h4>
<p>Suppose you have a group acting on a vector space. This can be taken
extremely physical - quantum mechanics is all about this kind of
situation, or so I've heard. Then it might be interesting to figure out
the invariant subspace: {a|ga=a for all g in G}. This is Ext<sup>0</sup>.
Or we might want a basis for the complement: representatives for every
way that things can move. This is the coinvariant vector space, defined
as A/(ga-a), and this is just Tor<sub>0</sub>.</p>
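<p>A minimal sketch of my own: C<sub>2</sub> acting on k<sup>2</sup> by
swapping the coordinates. The invariants are the kernel of g - 1 (spanned
by (1,1)), and the coinvariants are the quotient by the image of g - 1;
both can be read off from the rank of one matrix:</p>

```python
from fractions import Fraction

# Sketch: the swap action of C2 on k^2.  Invariants = ker(g - 1),
# coinvariants = k^2 / im(g - 1).
g = [[0, 1],
     [1, 0]]
gm1 = [[Fraction(g[i][j] - (1 if i == j else 0)) for j in range(2)]
       for i in range(2)]

# rank of the 2x2 matrix gm1 by hand-rolled elimination
rank = 0
rows = [row[:] for row in gm1]
for c in range(2):
    piv = next((i for i in range(rank, 2) if rows[i][c] != 0), None)
    if piv is None:
        continue
    rows[rank], rows[piv] = rows[piv], rows[rank]
    for i in range(2):
        if i != rank and rows[i][c] != 0:
            f = rows[i][c] / rows[rank][c]
            rows[i] = [a - f * b for a, b in zip(rows[i], rows[rank])]
    rank += 1

dim_inv = 2 - rank      # dimension of the invariant subspace
dim_coinv = 2 - rank    # dimension of the coinvariant space
print(dim_inv, dim_coinv)   # 1 1
```

<p>Here the two dimensions happen to agree; in general invariants and
coinvariants are genuinely different functors.</p>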
</div>
<div class="section" id="simples-projectives-and-the-stable-module-category">
<h4>Simples, projectives and the stable module category</h4>
<p>Simple modules are nice. They don't have invariant subspaces. In the
best of all worlds - which is when Maschke holds - simple modules are
precisely the irreducible modules. However, when Maschke doesn't hold,
we can have non-trivial Ext<sup>1</sup>, and thus we can build larger
modules out of simples by a kind of gluing: they aren't just a nice
direct sum of simples, but something ickier.</p>
<p>Thus, unless Maschke holds, there will be weird things happening in the
module category.</p>
<p>These weird things, though, are controllable. To be specific, we can
consider the smallest possible irreducible modules. These will end up
being building blocks, and for nice enough worlds, these will also end
up corresponding closely to the simples - in the way that we can
allocate a simple to an indecomposable projective in a bijective manner.</p>
<p>So ... what <em>is</em> this projective I keep throwing around? Take a free
module. This is a direct sum of a finite number of copies of the ring R.
This will have direct summands. By picking apart all summands into
further direct summands, at some point we hit bottom: we cannot pick
anything apart any longer. This is, by the theorem of Krull-Schmidt, a
well-defined state of being. We can permute things, but in essence, a
module is just its decomposition into indecomposables.</p>
<p>So, anything that is a direct summand of a free module is a projective. We
can lose projectivity by taking quotients - so if we add relations, we
may well get lost. But as long as we just look for direct summands,
we're pretty much home free. Now, the indecomposable projectives have to be
summands of the ring R itself, so they end up actually being (left)
ideals in the ring. And each of them corresponds intimately to a simple
module.</p>
<p>One trick that's very beloved among the people who worry about these
things is to get rid of anything projective, and look at the stable
module category. In this, we just quotient away anything projective -
morphism sets are taken modulo morphisms that factor through a
projective... This way, we only have the "essential", or as it is known
to the experts of the field "difficult" information left. Then
Ext<sup>n</sup>(M,N)=Hom(Ω<sup>n</sup>M,N), where Ω<sup>n</sup> is the nth
syzygy - see below for more on this.</p>
<p>So, homological algebra lets us understand the stable module category,
which in turn lets us understand the parts that are essential to the
module category structure.</p>
</div>
<div class="section" id="resolutions">
<h4>Resolutions</h4>
<p>I just promised you I'd tell you about syzygies. First off, some
personal information - because readers always love that!</p>
<p>If you find me on IRC, on EFNet or on Freenode, you'll find me under the
nick Syzygy-. The - is there because there is someone who's been using
Syzygy for years and years on EFNet and because I'm not deliberately
trying to be a bastard if I can help it. The rest of the nick is there
to a certain extent because I like the way I write it in longhand.</p>
<p>And to a certain extent because it is an epitome of why homological
algebra is interesting in my eyes.</p>
<p>Suppose we are interested in a finitely presented module, which we might
be for many reasons, including being interested in algebraic geometry
and in solving systems of polynomial equations. We might then just
figure out what relations hold within a set of generators, which gives
us a generating set, and some relation set.</p>
<p>These relations, though, are far from guaranteed to be the whole story.
It's probable that there are non-trivial relations between the
relations. What do we do? We figure out what these are. They span the
first <strong>syzygy</strong> module of the module we started with, denoted by ΩM.
But this is unlikely to be free, so we can keep on going.</p>
<p>This way, we get a sequence of modules, all of which are free - since we
just choose a generating set in each step - and with maps between them
adding all the extra relations. But this is nothing other than a free
resolution of the starting module. And here comes the candy that hooked
me for this discipline: studying modules over their resolutions is the
same thing as studying what chain complexes are, deep down, which in
turn is the same thing as studying homological algebra.</p>
<p>Want to figure out what a module map means for the family of syzygies?
What you really want is a chain map in the chain complex category. But
some of these maps - or even portions of maps - will not carry relevant
information. So we factor those away, and we get a slightly weirder
category. But here, equality doesn't quite mean what it should, so we
add in more equality relations. And suddenly, we live in a derived
category - and in here, the Hom sets are Ext groups, and the tensor
products are Tor groups.</p>
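<p>A toy illustration of my own (not from the post): over
R = k[x]/(x<sup>2</sup>) the free resolution of k is periodic - multiplication
by x, over and over. Writing R as a 2-dimensional vector space with basis
{1, x}, the chain condition d<sup>2</sup> = 0 becomes a pair of zero matrix
products:</p>

```python
# Sketch: the free resolution  ... -> R -x-> R -x-> R -aug-> k -> 0
# of k over R = k[x]/(x^2).  Each syzygy module is again isomorphic
# to k, so the resolution never terminates.

def matmul(a, b):
    """Plain matrix product of integer matrices given as row lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

X = [[0, 0],
     [1, 0]]      # multiplication by x on the basis {1, x}: 1 -> x, x -> 0
aug = [[1, 0]]    # augmentation R -> k: 1 maps to 1, x maps to 0

# d^2 = 0: any two consecutive maps in the resolution compose to zero.
assert matmul(X, X) == [[0, 0], [0, 0]]
assert matmul(aug, X) == [[0, 0]]
print("d^2 = 0 for the periodic resolution of k over k[x]/(x^2)")
```

<p>This is the same R = C[ε] that shows up in the Everything Seminar post
mentioned below in the archive, which is no coincidence.</p>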
</div>
</div>
<div class="section" id="number-theory-geometry-and-computation">
<h3>Number theory, geometry, and computation!</h3>
<p>To continue this tour de force, consider the theorems in vector calculus
relating various triple and double integrals. (note - I never dealt with
this. I rode on technicalities to root out calculus from my curriculum
so it would fit more algebra....) These theorems, in the end, only state
that [tex]\partial^2=0[/tex], which is at the core of what homological
algebra is all about.</p>
<p>If we formalize this particular recognition a bit, and tug at the
corners, we end up with de Rham cohomology, which deals with what you
can do with differential forms on manifolds (layman speak: things you
can integrate. The f(x)dx after the integral sign is a typical
differential form) - and this is one of the many, many places where
cohomology rather than homology ends up being the "right" way to view
things, just because you started out as a geometer instead of, well, a
topologist or an algebraist.</p>
<p>The same kind of thing happens in algebraic geometry as well. You start
out happily with your varieties, you conclude that as soon as things get
interesting, the nice and pretty concepts of coordinate rings don't hold
up, and you're forced to go to coordinate sheaves. And then you try to
figure out what you can do with sheaves of functions on a variety - and
before you know it, you have reconstructed sheaf cohomology. This, by
the way - a quick look at Wikipedia told me - lets you define Euler
characteristics for varieties in a way consistent with the classical
uses of the concept.</p>
<p>I am no geometer, and I'm not the person to tell you about the
intricacies of these things. If you understand them, though, I'd love to
figure them out at some point.</p>
<p>The discussion of Khovanov homology is a slightly (though not very)
similar thing to this. Again, I have no real idea, and am treading on
thin ice here.</p>
<p>So, alpheccar. Is this what you asked for? Please tell me what more you
want covered, and I'll write up some more! This was fun writing!</p>
</div>
</div>
Blogging seminars - a new fad!2007-07-06T23:46:00+02:00Michitag:blog.mikael.johanssons.org,2007-07-06:blogging-seminars-a-new-fad.html<p>They simply do not end. Now, Cornell grads and pre-grads have started
the <a class="reference external" href="http://cornellmath.wordpress.com/">Everything Seminar</a> - which
has absolutely brilliant discussions about the forbidden minor theorem
in graph theory as well as a fascinating overview over constructing
homological algebra as embedded in the theory of modules over
[tex]\mathbb C[\epsilon]=\mathbb C[x]/(x^2)[/tex].</p>
<p>Connected to this comes the observation that by constructing calculus
using the tricks used in <a class="reference external" href="http://sigfpe.blogspot.com/2006/09/practical-synthetic-differential.html">synthetic differential
geometry</a>,
we end up with - again - modules over [tex]\mathbb C[\epsilon][/tex],
and some very fascinating discussions are sparked as to subtle and
interesting connections between these two viewpoints!</p>
<p>How on earth I am going to keep up with the interesting sprouting
discussion group blogs I shall never know. Maybe it's getting to the
point where we'll start an [tex]A_\infty[/tex]-blog?</p>
Still no new blogging2007-06-28T15:38:00+02:00Michitag:blog.mikael.johanssons.org,2007-06-28:still-no-new-blogging.html<p>Too harried to blog.</p>
<p>Will miss this carnival.</p>
<p>Bugger.</p>
Carnival of Mathematics 102007-06-15T14:49:00+02:00Michitag:blog.mikael.johanssons.org,2007-06-15:carnival-of-mathematics-10.html<p>Is now up at <a class="reference external" href="http://mathnotations.blogspot.com/2007/06/carnival-of-math-tenth-edition.html">Math
Notations</a>.
The current host further suggests a split in undergrad+ and undergrad-
categories - with the simpler and didactics focused posts in one
carnival and the research and/or advanced mathematical posts in another.
Personally, I think the momentum the carnival has is a good thing, and
that a split should wait until we habitually turn away more posts than
we're comfortable with. We are not actually at that volume yet -
wherefore I'd be against a split.</p>
<p>Enjoy.</p>
And the published version is...2007-06-12T22:19:00+02:00Michitag:blog.mikael.johanssons.org,2007-06-12:and-the-published-version-is.html<p><a class="reference external" href="http://www.brd.nrw.de/BezRegDdorf/hierarchie/lerntreffs/mathe/pages/magazin/leute/johansson.php">right
here</a>
- at least if you're wondering what happened with my recent
<a class="reference external" href="http://blog.mikael.johanssons.org/archive/2007/04/interview-with-a-blogger/">interview</a>.</p>
Linking in hushed voices2007-06-12T18:49:00+02:00Michitag:blog.mikael.johanssons.org,2007-06-12:linking-in-hushed-voices.html<p><a class="reference external" href="http://unapologetic.wordpress.com/2007/06/12/shh-dont-tell-anybody/">All</a>
the
<a class="reference external" href="http://golem.ph.utexas.edu/category/2007/06/more_mathematical_blogging.html">cool</a>
kids are doing it, so I'll tag in too.</p>
<p>There's a <a class="reference external" href="http://sbseminar.wordpress.com/">secret blogging seminar</a>
going on. And the people seem to be writing interesting things - what
little they managed to write before the hushed rumour mill started.</p>
Enumerating the Saneblidze-Umble diagonal in Haskell2007-06-11T15:47:00+02:00Michitag:blog.mikael.johanssons.org,2007-06-11:enumerating-the-saneblidze-umble-diagonal-in-haskell.html<p><em>IMPORTANT: Note that the implementation herein is severely flawed. Do
not use this.</em></p>
<p>One subject I spent a lot of time thinking about this spring was taking
tensor products of A<sub>∞</sub>-algebras. This turns out to have
already been solved - with a very combinatorial and pretty neat
solution.</p>
<p>Recall that we can describe ways of associating operations, and
homotopies between associators, by a sequence of polyhedra K<sub>n</sub>,
n=2,3,..., called the associahedra. An A<sub>∞</sub>-algebra can be
defined as a map from the <em>cellular chains on the associahedra</em> to
<em>n</em>-ary endomorphisms of a graded vector space.</p>
<p>If this was incomprehensible to you, it doesn't matter for this post.
The essence is that by figuring out how to deal with these polyhedra, we
can figure out how to deal with A<sub>∞</sub>-algebras.</p>
<p>A tensor product of algebras is basically the direct product of the
polyhedra, and we can get precisely the thing we want if only we find a
map from the cellular chains on K<sub>n</sub> to the cellular chains on
K<sub>n</sub>xK<sub>n</sub> fulfilling a few neat properties.</p>
<p>This map is what is known in the literature as the Saneblidze-Umble
diagonal, named after the people who presented it in a paper in the late
90s. They describe in that paper, and more clearly in a later one, how to
actually get hold of one map with the right properties. The road goes
over basic matrices, derived matrices and connecting matrices to trees
describing associations. I'd like to take this opportunity to work
through the implementation I just wrote of an enumerator for these,
displaying the code and discussing it a little bit.</p>
<p>This post is written as literate Haskell, so just save the text as
<tt class="docutils literal">SaneblidzeUmble.lhs</tt> and you're ready
to go.</p>
<p>So, we will build a module named SaneblidzeUmble that shall contain the
code:</p>
<pre class="literal-block">
> module SaneblidzeUmble where
</pre>
<p>We're going to juggle lists a lot, and deal a little with potentially
failing computations, so we import a couple of standard utility modules
too.</p>
<pre class="literal-block">
> import Data.List hiding (permutations)
> import Data.Maybe
</pre>
<div class="section" id="permutations-and-staircase-matrices">
<h2>Permutations and staircase matrices</h2>
<p>The issue we want to solve in the end is to enumerate all the terms
entering into a diagonal on K<sub>n</sub>xK<sub>n</sub> for one fixed <em>n</em>.
These terms are - in the terminology introduced below - in
correspondence with derived matrices of staircase matrices with entries
from [1,..,n-1]. A <em>staircase matrix</em> is one with one entry on each
diagonal (parallel to the main diagonal), and such that along each row
and column, all entries are adjacent and strictly increasing down and to
the right.</p>
<p>As an example, we have the matrix</p>
<pre class="literal-block">
.14
25.
3..
</pre>
<p>Given a permutation of [1,..,n-1], we can write the images in the order
they occur, and then just take this as a description of the entries from
lower left to upper right of a staircase matrix.</p>
<p>It turns out that the basic matrices involved in this process are in
bijection to permutations of [1,..,n-1] by placing the digits into the
matrix, going up whenever the next entry is lower, and going to the
right whenever it is larger.</p>
<p>Thus, the example above is given by the permutation (32514).</p>
<p>So, to handle this, we need to be able to list all permutations, as
well as pick out all rising and all falling runs of a permutation.</p>
<pre class="literal-block">
> type Permutation = [Int]
> permutations :: Int -> [Permutation]
> permutations n = permuteList [1..n]
> permuteList :: Eq a => [a] -> [[a]]
> permuteList [] = []
> permuteList [a] = [[a]]
> permuteList list = concatMap (\x -> map (x:) (permuteList (list \\ [x]))) list
</pre>
<div class="line-block">
<div class="line">This is almost readable as it stands. The permutations of [1..n] are
just the permutations of the list [1..n].</div>
<div class="line-block">
<div class="line">There are no permutations of an empty list, the list with one element
has one permutation of itself, and for any larger list we pick out one
element at the time, permute the rest and prepend this element. The
collected results are compiled into a single long list and returned.</div>
</div>
</div>
<p>This solves our problem by recursion - neatly, and in close
correspondence with the mathematical idea behind it.</p>
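<p>As a sanity check, here is the same recursion as a standalone
(non-literate) snippet - the name matches the listing above, but this
block is self-contained:</p>

```haskell
import Data.List ((\\))

-- Enumerate permutations: pick out one element at a time,
-- permute the rest, and prepend the chosen element.
permuteList :: Eq a => [a] -> [[a]]
permuteList []  = []
permuteList [a] = [[a]]
permuteList xs  = concatMap (\x -> map (x :) (permuteList (xs \\ [x]))) xs
```

<p>For instance, <tt class="docutils literal">permuteList [1,2,3]</tt>
yields the six permutations [[1,2,3],[1,3,2],[2,1,3],[2,3,1],[3,1,2],[3,2,1]].</p>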
<p>Now, for chopping permutations into monotonic runs, I'll do it in a
manner that slightly decreases the amount of code repetition. We have a
"factory function"</p>
<pre class="literal-block">
> monotonic :: (a -> a -> Bool) -> [a] -> [[a]]
> monotonic _ [] = []
> monotonic _ [x] = [[x]]
> monotonic lt (x:y:etc) = if lt x y
>                            then (x:s):ss
>                            else [x]:(s:ss)
>   where (s:ss) = monotonic lt (y:etc)
</pre>
<p>and two use cases of this</p>
<pre class="literal-block">
> risers, fallers :: Ord a => [a] -> [[a]]
> risers = monotonic (<=)
> fallers = monotonic (>=)
</pre>
<p>Using this, we can build the matrix out of a given permutation. The only
really interesting thing for the diagonal about such a matrix is the
contents of the rows and of the columns, so we shall just use that. This
means we can represent a matrix as the list of rows and the list of
columns. Since row and columns "look basically the same", they'll be
represented by the same type. So we have</p>
<pre class="literal-block">
> type Row = [Int]
> buildMatrix :: Permutation -> ([Row],[Row])
> buildMatrix p = (map sort (fallers p), reverse (risers p))
</pre>
<p>The reason we use <tt class="docutils literal">map sort</tt> and
<tt class="docutils literal">reverse</tt> is to make the result fit the
conventions chosen by Saneblidze and Umble. Note: when actually reading
the results, users should take care to read the columns backwards.</p>
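<p>To tie this back to the example matrix above (the permutation
(32514)), here is a self-contained sketch of the run-chopping and
matrix-building functions:</p>

```haskell
import Data.List (sort)

-- Chop a list into maximal runs under a comparison.
monotonic :: (a -> a -> Bool) -> [a] -> [[a]]
monotonic _ []  = []
monotonic _ [x] = [[x]]
monotonic lt (x:y:etc)
  | lt x y    = (x:s) : ss
  | otherwise = [x] : (s:ss)
  where (s:ss) = monotonic lt (y:etc)

risers, fallers :: Ord a => [a] -> [[a]]
risers  = monotonic (<=)
fallers = monotonic (>=)

-- Columns (sorted falling runs) and rows (reversed rising runs)
-- of the staircase matrix of a permutation.
buildMatrix :: [Int] -> ([[Int]], [[Int]])
buildMatrix p = (map sort (fallers p), reverse (risers p))
```

<p>Here <tt class="docutils literal">buildMatrix [3,2,5,1,4]</tt> gives
([[2,3],[1,5],[4]], [[1,4],[2,5],[3]]) - exactly the columns and rows of
the matrix displayed earlier.</p>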
</div>
<div class="section" id="deriving-staircase-matrices">
<h2>Deriving staircase matrices</h2>
<p>Deriving the matrices is done with so-called moves. We have a
downward move and a rightward move - both consisting of picking a subset
of elements in one row or one column, and pushing them over to the
next - under the condition that all the moved elements have to be larger
than the largest element of the row or column they move to. They also
have to move to empty positions, but the ordering conditions for the
staircase matrices guarantee that if they really are larger, then the
corresponding position is already empty.</p>
<p>So, we can write a function that gives us all the subsets of the
first column/row that might possibly be moved to the second, in a given
column/row-list of a matrix:</p>
<pre class="literal-block">
> listMovable :: [Row] -> [[Int]]
> listMovable [] = []
> listMovable [a] = []
> listMovable (a:b:as) = filter (not.null) $ subsets (filter (> m b) a) where
>   m [] = 0
>   m row = maximum row
</pre>
<p>This function picks out all the subsets of elements that fulfill the
requirement of being larger than the largest element of the next row.
But hang on - this <tt class="docutils literal">subsets</tt> function
actually isn't defined yet. No worries - it's easily done:</p>
<pre class="literal-block">
> subsets :: [a] -> [[a]]
> subsets [] = [[]]
> subsets (a:as) = map (a:) (subsets as) ++ subsets as
</pre>
<p>Again, we define everything recursively, relying on a base case of
"blinding" simplicity, and end up with a very mathematical-looking
description. Here, the only subset of the empty set is the empty set
itself, and the subsets of any nonempty set split into those containing
its first element and those that do not. We simply list first those with
and then those without.</p>
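<p>Tiny as it is, the enumerator is worth trying in isolation - a
self-contained copy:</p>

```haskell
-- All subsets of a list: for each element, branch on
-- including it or leaving it out.
subsets :: [a] -> [[a]]
subsets []     = [[]]
subsets (a:as) = map (a:) (subsets as) ++ subsets as
```

<p>For instance, <tt class="docutils literal">subsets [1,2]</tt> yields
[[1,2],[1],[2],[]] - first the subsets containing 1, then those
without.</p>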
<p>Now, for the actual deriving move, we might run into a host of trouble,
so we'll just let it be able to return an error condition. It first
checks a few possible error cases, and then writes down the actual move
by picking out the rows it needs, modifying those, and putting it all
together again in the end.</p>
<p>Supposing we have some set <em>m</em> we want to move, and we have a list of
the rows where it could be moved, then we'll want to first check whether
all of <em>m</em> is in a single row together. Thus, for each row, we pick out
all elements that are in our set <em>m</em>. But we don't really care for what
the elements are - we only need the number in each. So we compile a list
of the lengths of these filtered out pieces.</p>
<p>We shall require the number of rows that have all the pieces we want to
move to be 1, and also for that single row to have a good position
within the matrix.</p>
<pre class="literal-block">
> moveDerivate :: [Row] -> [Int] -> Maybe [Row]
> moveDerivate rows [] = Nothing
> moveDerivate rows m = returnVal where
>   movableInRow = map (filter (flip elem m)) rows
>   numberMovableInRow = map length movableInRow
>   countRowsWithMovable = length (filter (==length m) numberMovableInRow)
>   indexRow = findIndex (==length m) numberMovableInRow
>   returnVal
>     | countRowsWithMovable /= 1 = Nothing
>     | isNothing indexRow = Nothing
>     | index > length rows - 2 = Nothing
>     | minimum m < maximum (rows !! index) = Nothing
>     | otherwise = moveThem index
>     where
>       index = fromJust indexRow
>       moveThem idx = Just $
>         (take idx rows) ++
>         [(rows !! idx) \\ m] ++
>         [sort ((rows !! (idx + 1)) ++ m)] ++
>         (drop (idx+2) rows)
</pre>
<p>That monstrous expression is known as a guard expression. It works
through the cases until one is actually true, and then returns whatever
is after the equals sign as the value. The conditions I listed weed out
all the pathologies, and let the program pick out something sensible
whenever possible.</p>
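<p>A worked example may help. The following is a self-contained sketch
of the move; note that it assumes the set <em>m</em> was proposed by the
<tt class="docutils literal">listMovable</tt> machinery, so it does not
re-check validity against the target row:</p>

```haskell
import Data.List (findIndex, sort, (\\))
import Data.Maybe (fromJust, isNothing)

type Row = [Int]

-- One deriving move: locate the single row holding all of m,
-- remove m from it, and merge m (sorted) into the next row.
moveDerivate :: [Row] -> [Int] -> Maybe [Row]
moveDerivate _    [] = Nothing
moveDerivate rows m  = returnVal where
  movableInRow         = map (filter (`elem` m)) rows
  numberMovableInRow   = map length movableInRow
  countRowsWithMovable = length (filter (== length m) numberMovableInRow)
  indexRow             = findIndex (== length m) numberMovableInRow
  returnVal
    | countRowsWithMovable /= 1           = Nothing
    | isNothing indexRow                  = Nothing
    | index > length rows - 2             = Nothing
    | minimum m < maximum (rows !! index) = Nothing
    | otherwise                           = moveThem index
    where
      index = fromJust indexRow
      moveThem idx = Just $
        take idx rows ++
        [(rows !! idx) \\ m] ++
        [sort ((rows !! (idx + 1)) ++ m)] ++
        drop (idx + 2) rows
```

<p>For instance, <tt class="docutils literal">moveDerivate [[3],[1,2]] [3]</tt>
pushes the 3 into the second row, giving Just [[],[1,2,3]], while
<tt class="docutils literal">moveDerivate [[3],[1,2]] [2]</tt> is
Nothing, since the 2 sits in the last row and has nowhere to go.</p>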
<p>So, we can list all possibilities of one row, and we can move them. This
should be enough to simply generate all derived matrices, right? Let's
start simple. Let's start with all we can do on one side of the pair -
either all possible down moves or all possible right moves (they are
essentially the same anyway...)</p>
<pre class="literal-block">
> derivedRows :: [Row] -> [[Row]]
> derivedRows m = derivedAcc [] [m] where
>   derivedAcc out [] = out
>   derivedAcc out (q:queue) = derivedAcc (out++[q]) (queue++rest) where
>     movables = (concatMap listMovable . tails) q
>     rest = mapMaybe (moveDerivate q) movables
</pre>
<p>Guiding you through this piece of code: when given a list of rows, we
look at each starting point, list all the possibly movable sets at that
point, and perform the corresponding moves, placing all the results at
the back of a queue that we work through. Once the queue is empty, we
know we have worked through the results of all the moves we could find,
and that no others can be found.</p>
<p>So, if we have our pair of row-lists? We first move to the right, and
then down. At each point, we keep the other half fixed, and simply add
it on to all the results of moving in the first part.</p>
<pre class="literal-block">
> derivedMatrices :: ([Row],[Row]) -> [([Row],[Row])]
> derivedMatrices matrix = nub (derivedFirst [] [matrix]) where
>   derivedFirst matrices [] = derivedSecond [] matrices
>   derivedFirst out ((q1,q2):queue) = derivedFirst (out++new) queue where
>     new = map (flip (,) q2) (derivedRows q1)
>   derivedSecond matrices [] = matrices
>   derivedSecond out ((q1,q2):queue) = derivedSecond (out++new) queue where
>     new = map ((,) q1) (derivedRows q2)
</pre>
<p>This first works through one direction, taking all derived rows, and
tacking on the constant second half, then when that's done, changes to
the same but for the other half of the pair.</p>
<p>Now, we're almost able to state what a specific diagonal looks like.
There is only one more thing to handle: we might not want the results of
a particular computation. More specifically, we recognize something we
do not want by it having a row whose entries aren't consecutive, not
even when the previously seen rows are included. So, we need to check
for that.</p>
<pre class="literal-block">
> checkTree :: [Row] -> Bool
> checkTree rows = checkAcc [] rows where
>   checkAcc seen [] = True
>   checkAcc seen ([]:morerows) = checkAcc seen morerows
>   checkAcc seen (row:morerows)
>     | isOK = checkAcc newseen morerows
>     | otherwise = False
>     where
>       newseen = seen ++ row
>       minimal = minimum newseen
>       maximal = maximum newseen
>       spanLen = maximal - minimal + 1
>       isOK = (spanLen == length newseen)
</pre>
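<p>To make the check concrete, here is a self-contained sketch, with the
accumulated span computed under a fresh name so as not to shadow
Prelude's <tt class="docutils literal">length</tt>:</p>

```haskell
type Row = [Int]

-- A row-list passes if, after each row, the entries seen so far
-- form a consecutive block of integers.
checkTree :: [Row] -> Bool
checkTree = checkAcc [] where
  checkAcc _    []              = True
  checkAcc seen ([] : morerows) = checkAcc seen morerows
  checkAcc seen (row : morerows)
    | isOK      = checkAcc newseen morerows
    | otherwise = False
    where
      newseen = seen ++ row
      spanLen = maximum newseen - minimum newseen + 1
      isOK    = spanLen == length newseen
```

<p>So <tt class="docutils literal">checkTree [[1,2],[3]]</tt> is True,
but <tt class="docutils literal">checkTree [[2,3],[1,5],[4]]</tt> is
False - after the second row we have seen {1,2,3,5}, which skips 4.</p>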
<p>Now it's just a matter of putting the pieces together</p>
<pre class="literal-block">
> diagonalTerms :: Int -> [([Row],[Row])]
> diagonalTerms n = filter checkTrees allMatrices where
>   checkTrees (t1,t2) = checkTree t1 && checkTree (reverse t2)
>   allMatrices = concatMap derivedMatrices matrices
>   matrices = map buildMatrix (permutations n)
</pre>
<p>Previously, a student of Ron Umble implemented the same task in C++.
That implementation, according to Umble, cannot handle anything larger
than n=5. This Haskell implementation, while surprisingly short and
readable, has on my (64-bit, modern, high-memory) workstation achieved
the following:</p>
<table border="1" class="docutils">
<colgroup>
<col width="9%" />
<col width="33%" />
<col width="32%" />
<col width="26%" />
</colgroup>
<thead valign="bottom">
<tr><th class="head">n</th>
<th class="head">number of terms</th>
<th class="head">execution time</th>
<th class="head">memory used</th>
</tr>
</thead>
<tbody valign="top">
<tr><td>5</td>
<td>46</td>
<td>0.02 s</td>
<td>4M</td>
</tr>
<tr><td>6</td>
<td>124</td>
<td>0.33 s</td>
<td>62M</td>
</tr>
<tr><td>7</td>
<td>335</td>
<td>8.77 s</td>
<td>1.9G</td>
</tr>
<tr><td>8</td>
<td>911</td>
<td>1020 s</td>
<td>441G</td>
</tr>
</tbody>
</table>
</div>
Own a funny tshirt? Please leave our country.2007-06-08T12:12:00+02:00Michitag:blog.mikael.johanssons.org,2007-06-08:own-a-funny-tshirt-please-leave-our-country.html<p>So, Heiligendamm just outside Rostock in northern Germany these days
hosts both the G8 meeting and the numerous protest activities. This
setup would have me ranting on and on about the violent left and failure
to admonish extremists on your own side.</p>
<p>But that is not the issue that makes me reach for my keyboard.</p>
<p>Swedish news outlets report today about <a class="reference external" href="http://www.dn.se/DNet/jsp/polopoly.jsp?d=148&a=658763">Tomas Eriksson, a Swedish
lawyer</a> who
came on the ferry from Trelleborg with his girlfriend yesterday morning.</p>
<p>In the entry checks, the German border officials found a t-shirt in
the girlfriend's luggage, with the symbol of the Swedish political
pro-media-piracy lobbying organisation "Piratbyrån".</p>
T'bilisi and Carnival # 92007-06-02T22:35:00+02:00Michitag:blog.mikael.johanssons.org,2007-06-02:tbilisi-and-carnival-9.html<p>First off, I have a travellog of sorts over at my
<a class="reference external" href="http://michiexile.livejournal.com/">Livejournal</a>.</p>
<p>Further on, <a class="reference external" href="http://jd2718.wordpress.com/">jd2718</a> just posted the
newest <a class="reference external" href="http://jd2718.wordpress.com/2007/06/02/carnival-of-mathematics-ix/">Carnival of
Mathematics</a> -
and, as so often before, I do feature.</p>
Going to T'bilisi2007-05-25T13:37:00+02:00Michitag:blog.mikael.johanssons.org,2007-05-25:going-to-tbilisi.html<p>In about 23 hours, I'll step on to the train in Jena, heading for
T'bilisi, Georgia.</p>
<p>On Monday, I'll give a talk on my research into
[tex]A_\infty[/tex]-structures in group cohomology. If you're curious,
I already put the
<a class="reference external" href="http://www.minet.uni-jena.de/~mik/tbilisi.pdf">slides</a> up on the
web.</p>
<p>I'll try to blog from T'bilisi, but I don't know what connectivity I'll
have at all.</p>
8th Carnival of Mathematics2007-05-19T09:40:00+02:00Michitag:blog.mikael.johanssons.org,2007-05-19:8th-carnival-of-mathematics.html<p>Is up at the
<a class="reference external" href="http://geomblog.blogspot.com/2007/05/8th-carnival-of-mathematics.html">GeomBlog</a>.</p>
<p>This fortnight has a lot of goodies, among those a call for reading
Grothendieck and a blogpost by Ian Stewart.</p>
<p>Go read.</p>
7th Carnival of Mathematics2007-05-04T19:15:00+02:00Michitag:blog.mikael.johanssons.org,2007-05-04:7th-carnival-of-mathematics.html<p>I have been somewhat remiss in announcing these lately - but over at
<a class="reference external" href="http://www.nonoscience.info">nOnoscience</a>, the <a class="reference external" href="http://www.nonoscience.info/2007/05/04/carnival-of-mathematics-edition-7/">7th Carnival of
Mathematics</a>
just got posted.</p>
<p>I'm featured again - as are many other very readable bloggers. Go. Read.</p>
Young Topology: The fundamental groupoid2007-05-04T15:26:00+02:00Michitag:blog.mikael.johanssons.org,2007-05-04:young-topology-the-fundamental-groupoid.html<p>Today, with my <a class="reference external" href="http://blog.mikael.johanssons.org/archive/2007/03/bright-students-and-topology/">bright topology
9th-graders</a>,
we discussed homotopy equivalence of spaces and the fundamental
groupoid. In order to get the arguments sorted out, and also in order to
give my esteemed readership a chance to see what I'm doing with them,
I'll write out some of the arguments here.</p>
<p>I will straight off assume that continuity is something everyone's
comfortable with, and build on top of that.</p>
<div class="section" id="homotopies-and-homotopy-equivalences">
<h2>Homotopies and homotopy equivalences</h2>
<p>We say that two continuous maps f,g:X→Y between topological spaces
are homotopic, and write [tex]f\simeq g[/tex], if there is a continuous
map [tex]H\colon X\times[0,1]\to Y[/tex] such that H(x,0)=f(x) and
H(x,1)=g(x). This captures, in formal terms, the intuitive idea of
nudging one map into the other step by step.</p>
<p>Two spaces X,Y are homeomorphic if there are maps [tex]f\colon X\to
Y[/tex], [tex]f^{-1}\colon Y\to X[/tex] such that
[tex]ff^{-1}=\operatorname{Id}_Y[/tex] and
[tex]f^{-1}f=\operatorname{Id}_X[/tex].</p>
<p>Two spaces X,Y are homotopy equivalent if there are maps [tex]f\colon
X\to Y[/tex], [tex]f^{-1}\colon Y\to X[/tex] such that
[tex]ff^{-1}\simeq\operatorname{Id}_Y[/tex] and
[tex]f^{-1}f\simeq\operatorname{Id}_X[/tex].</p>
<p>Now, if f,g are maps X→Y and f=g, then [tex]f\simeq g[/tex], since we
can just set H(x,t)=f(x)=g(x) for all t, and get a continuous map out of
it. Thus homeomorphic spaces are homotopy equivalent, since the relevant
maps are equal, and thus homotopic.</p>
<p>There are a couple more properties of homotopic maps we'll want.
Homotopy respects composition - so if [tex]f\simeq g\colon X\to Y[/tex]
and h:Y→Z and e:W→X, then [tex]hf\simeq hg[/tex] and [tex]fe\simeq
ge[/tex]. This can be seen by considering h(H(x,t)) and H(e(x),t)
respectively.</p>
<p>Denote by D<sup>2</sup> the unit disc in [tex]\mathbb R^2[/tex], and by
{*} the subset {(0,0)} in [tex]\mathbb R^2[/tex]. Then
[tex]D^2\simeq\{*\}[/tex]. In one direction, the relevant map is
just the embedding, and in the other direction, it collapses all of
D<sup>2</sup> onto {*}. One of the two relevant compositions is trivially
equal the identity map, and in the other direction, the linear homotopy
H(x,t)=tx will do well. Thus the disc and the one point space are
homotopy equivalent.</p>
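<p>Spelling that last computation out: with [tex]f\colon\{*\}\to
D^2[/tex] the embedding and [tex]g\colon D^2\to\{*\}[/tex] the collapse,
[tex]gf=\operatorname{Id}_{\{*\}}[/tex] on the nose, while the linear
homotopy satisfies</p>
<p>[tex]H(x,0)=0=fg(x)\qquad H(x,1)=x=\operatorname{Id}_{D^2}(x)[/tex]</p>
<p>so [tex]fg\simeq\operatorname{Id}_{D^2}[/tex].</p>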
</div>
<div class="section" id="the-fundamental-groupoid">
<h2>The fundamental groupoid</h2>
<div class="line-block">
<div class="line">Let X be a topological space (most probably with a number of neat
properties - I will not list just what properties are needed though),
and consider for each pair x,y of points in X, the set [x,y] of
homotopy classes of paths from the point x to the point y. A path,
here, is a continuous map [0,1]→X. We can compose classes - if
[tex]\gamma\in[x,y][/tex] and [tex]\gamma'\in[y,z][/tex], then we
can consider the map</div>
<div class="line-block">
<div class="line">[tex]\gamma\gamma'(t)=\begin{cases}\gamma(2t)&0\leq
t<1/2\\\gamma'(2t-1)&1/2\leq t\leq1\end{cases}[/tex]. This is a
path from x to z, and so belongs to a class in [x,z]. This class is
well defined from the choices of γ, γ' since homotopies and
composition of maps work well together.</div>
</div>
</div>
<p>This gives us a composition. It is associative - on homotopy classes.
What happens if we look at maps instead of homotopy classes is part of
the subject of my own research. It has an identity at each point x - the
constant path γ(t)=x, and for each class in [x,y] there is a class in
[y,x] such that their composition is homotopic to the constant path in
[x,x].</p>
<p>Thus, we get a groupoid. This is called the <em>fundamental groupoid</em>, and
denoted by [tex]\pi_1(X)[/tex]. If we fix a point, and consider [x,x],
then this is a group, called the <em>fundamental group with basepoint x</em>,
and denoted by [tex]\pi_1(X,x)[/tex].</p>
<p>For [tex]\mathbb R^n[/tex], a linear homotopy will make any two paths
in [x,y] homotopic - and so |[x,y]|=1 in [tex]\pi_1(\mathbb
R^n)[/tex] for any x,y.</p>
<p>For S<sup>1</sup> - the circle - we can choose to view it as
[0,1]/(0=1). Then we can consider the paths
f<sub>n</sub>(t)=a(1-t)+bt+nt. Each such f<sub>n</sub> is a path from a
to b, and it winds n times around the circle. Each path in [a,b] is
homotopic to some f<sub>n</sub>, by a linear homotopy, which just
rescales the speeds through various bits and pieces, and possibly
straightens out where you double back. Thus, [tex][a,b]=\mathbb Z[/tex].
Furthermore, if you compose f<sub>m</sub>f<sub>n</sub>, you'll get
f<sub>m+n</sub>.</p>
<p>If we pick out the fundamental group out of this groupoid, we'll get the
well known fundamental group [tex]\pi_1(S^1,p)=\mathbb Z[/tex].</p>
<div class="line-block">
<div class="line">Now, suppose we have two homotopy equivalent spaces X and Y, with the
homotopy equivalence given by f:X→Y and g:Y→X. Then consider the map
f<sub>*</sub>:[x,y]<sub>X</sub>→[f(x),f(y)]<sub>Y</sub> given by
f<sub>*</sub>γ(t)=f(γ(t)). I claim</div>
<div class="line-block">
<div class="line">1) f<sub>*</sub> is bijective.</div>
<div class="line">2) f<sub>*</sub> works well with composition of classes.</div>
</div>
</div>
<div class="line-block">
<div class="line">For bijectivity we start with injectivity in one direction. Consider
two paths [tex]\gamma\not\simeq\gamma'[/tex] in [x,y]. We need to
show [tex]f_*\gamma\not\simeq f_*\gamma'[/tex]. If
[tex]f_*\gamma\simeq f_*\gamma'[/tex], then
[tex]g_*f_*\gamma\simeq g_*f_*\gamma'[/tex]. However, then</div>
<div class="line-block">
<div class="line">[tex]\gamma\simeq g_*f_*\gamma\simeq
g_*f_*\gamma'\simeq\gamma'[/tex]</div>
<div class="line">which contradicts [tex]\gamma\not\simeq\gamma'[/tex]. Thus
[tex]g_*f_*\gamma\not\simeq g_*f_*\gamma'[/tex], and so
also [tex]f_*\gamma\not\simeq f_*\gamma'[/tex].</div>
</div>
</div>
<p>The proof is symmetric in the choice of direction, so the same
argument shows that g<sub>*</sub> is also an injection. Moreover, since
every class satisfies [tex]\gamma\simeq g_*f_*\gamma[/tex], the map
g<sub>*</sub> is surjective - and by symmetry, so is f<sub>*</sub>. Thus
we can conclude that f<sub>*</sub> is in fact a bijection.</p>
<p>Now, for the second part, we consider [tex]\gamma\in[x,y][/tex] and
[tex]\delta\in[y,z][/tex]. We need to show that
[tex]f_*(\gamma\delta)=f_*\gamma f_*\delta[/tex]. But
[tex]\gamma\delta[/tex] is the path that first runs through
[tex]\gamma[/tex] in half the time, then runs through
[tex]\delta[/tex] in the rest of the time, and
[tex]f_*(\gamma\delta)[/tex] just transports this path point by
point to Y. And [tex]f_*\gamma[/tex] transports [tex]\gamma[/tex]
point by point to Y and [tex]f_*\delta[/tex] transports
[tex]\delta[/tex] point by point to Y, and [tex]f_*\gamma
f_*\delta[/tex] just runs through the first of these in half the
time, then the rest in the rest of the time.</p>
<p>Thus, homotopy equivalent spaces have the same fundamental groupoid.</p>
</div>
Interview with a blogger2007-04-30T16:40:00+02:00Michitag:blog.mikael.johanssons.org,2007-04-30:interview-with-a-blogger.html<p>The website/forumsite
<a class="reference external" href="http://www.brd.nrw.de/BezRegDdorf/hierarchie/lerntreffs/mathe/structure/home/homepage.php">Mathetreff</a>,
run by the Bezirksregierung (regional government) of Düsseldorf.</p>
Modular representation theory - when Maschke breaks down2007-04-21T13:43:00+02:00Michitag:blog.mikael.johanssons.org,2007-04-21:modular-representation-theory-when-maschke-breaks-down.html<p>This post is dedicated to Janine K</p>
looksay - today's Haskell snippet2007-04-18T09:34:00+02:00Michitag:blog.mikael.johanssons.org,2007-04-18:looksay-todays-haskell-snippet.html<pre class="literal-block">
nextLookSay = foldr (\xs -> ([length xs, head xs]++)) [] . group
lookSay = iterate nextLookSay [1]
</pre>
<p><a class="reference external" href="http://en.wikipedia.org/wiki/Look_and_say_sequence">Conway's Look-and-say
sequence</a></p>
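<p>For the curious, a typed standalone version of the snippet, together
with its first few terms:</p>

```haskell
import Data.List (group)

-- Look-and-say step: each run of equal digits becomes
-- (run length, digit).
nextLookSay :: [Int] -> [Int]
nextLookSay = foldr (\xs -> ([length xs, head xs] ++)) [] . group

lookSay :: [[Int]]
lookSay = iterate nextLookSay [1]
```

<p><tt class="docutils literal">take 5 lookSay</tt> gives
[[1],[1,1],[2,1],[1,2,1,1],[1,1,1,2,2,1]].</p>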
iTeX2MML not activated2007-04-06T14:26:00+02:00Michitag:blog.mikael.johanssons.org,2007-04-06:itex2mml-activated.html<p>I just tried installing the <a class="reference external" href="http://golem.ph.utexas.edu/~distler/blog/archives/000367.html">iTeX2MML
plugin</a>
from <a class="reference external" href="http://golem.ph.utexas.edu/~distler">Jacques Distler</a>. This is
what the <a class="reference external" href="http://golem.ph.utexas.edu/category/">n-Category Cafe</a> uses
for their mathematics, and it gives a neater display than the
LaTeXrender plugin I've been using so far.</p>
<div class="line-block">
<div class="line">It turns out, though, that</div>
<div class="line-block">
<div class="line">1. The plugin jumps on quoted perl code, interpreting it as
mathematics. Bad things ensue.</div>
<div class="line">2. It needs valid XHTML, which has not been a priority so far - and
trying to validate it, the validator chokes on the &'s in my LaTeX
array expressions for LaTeXrender.</div>
</div>
</div>
<p>Oh bugger. No iTeX and MathML for me.</p>
Modular representation theory: Simple and semisimple objects2007-04-02T15:56:00+02:00Michitag:blog.mikael.johanssons.org,2007-04-02:modular-representation-theory-simple-and-semisimple-objects.html<div class="section" id="representations-of-categories">
<h2>Representations of categories</h2>
<p>The basic tenet of representation theory is that we have some entity -
the group representation theory takes a
<a class="reference external" href="http://en.wikipedia.org/wiki/Group_%28mathematics%29">group</a>, the
algebra representation theory most often a
<a class="reference external" href="http://en.wikipedia.org/wiki/Quiver_%28mathematics%29">quiver</a>, and
we look at ways to view the elements of the structure as endomorphisms
of some vectorspace. The attentive reader remembers <a class="reference external" href="http://blog.mikael.johanssons.org/archive/2007/03/representation-theory-basics/">my last
post</a>
on the subject, where [tex]\mathbb R^2[/tex] was given a group action
by the rotations and reflections of a polygon surrounding the origin.</p>
<p>There is a way, suggested to me by Miles Gould in my last post, to
unify all these various ways of looking at things into one: groups - and
quivers - and most other things we care to look at - are in fact very
small <a class="reference external" href="http://en.wikipedia.org/wiki/Category_%28mathematics%29">categories</a>.
For groups, it's a category with one object, and one morphism/arrow for
each group element, such that the arrow you get by composing two arrows
is the one corresponding to the product in the group. A quiver is just a
category corresponding to the way the quiver looks - with objects for
all vertices, and arrows for all edges.</p>
<p>Given a category C, we can represent it by just forming a functor
F:C->Vect<sub>k</sub> - which is to say we choose a vector space over the
field k for each object in the category, and a linear map for each
arrow.</p>
<p>For the case of group representations, the group as a category has just
the one object, and so we choose one single vector space. Then the group
has a whole bunch of arrows - and these all need to correspond to linear
maps within this vector space.</p>
<p>So, in the end, all representation theories are in some way the
same - and indeed, there's a bunch of features that carry across several
theories.</p>
</div>
<div class="section" id="the-structure-of-modules">
<h2>The structure of modules</h2>
<p>As we saw above, representations are more or less the same thing. Which
thing this same thing is, becomes apparent if we introduce some more
algebraic terminology.</p>
<p>Suppose <em>A</em> is some algebraic structure over a field <em>k</em>, which is to
say that <em>A</em> is a vectorspace over <em>k</em>, with some set of operations.
Typical instances are associative algebras, Lie algebras, A<sub>∞</sub>-
or L<sub>∞</sub>-structures, etc. Then, we say that a module <em>M</em> over the
algebra <em>A</em> is a vectorspace over <em>k</em>, with the operations of the
algebra <em>A</em> adjusted to work as operations on the module as well.</p>
<p>This, however, is formulated now to a generality that kills off the
statement, so I'm going to specialise in order to have something decent
to talk about.</p>
<p>So, a <em>module</em> over the associative algebra <em>A</em> is a vectorspace <em>M</em>,
together with a multiplication [tex]A\otimes M\to M[/tex] that ends up
being associative, and distributive, in comfortable ways.</p>
<p>But this is nothing other than saying that we can apply the elements of
the associative algebra to the elements of the module in order to get
new elements of the module. Thus, being a module over an algebra means
that the algebra transforms the module, and so a module is a
representation of the algebra.</p>
<p>This means that in order to understand group representations, we want to
understand modules over the group algebra - instead of considering
embeddings of the group as linear maps within a vector space, we
consider linear combinations of group elements as objects acting on a
vector space, forming a module.</p>
<div class="section" id="building-blocks-of-modules">
<h3>Building blocks of modules</h3>
<p>So, our grand question thus is: if we know that a certain vector space
is a module over something or other, what can we do? What can we say
about the way this works?</p>
<p>First of all, we could try to figure out what kind of building blocks
they might have. One obvious first step is given through the
Krull-Schmidt theorem. This states that if A is an associative algebra,
and M is an A-module, then the decomposition of M into a direct sum of
submodules that cannot be further decomposed that way is unique.</p>
<p>So, any module is a direct sum of indecomposable modules. And the module
action of the ring, thus, is given by block matrices with (possibly)
nonzero blocks on the diagonal, corresponding to the indecomposable
submodules, and zeroes everywhere else. The proof of this theorem
follows by induction, reasoning with dimensions of vector spaces and
juggling a few linear maps back and forth until everything
fits.</p>
<p>As a corollary, we also can cancel modules in direct sum expressions.
That is, if [tex]M\oplus U\cong M\oplus V[/tex], then [tex]U\cong
V[/tex].</p>
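<p>To make the block picture concrete, here is a minimal sketch of my own
(plain Haskell, not from any library mentioned in this post) of how the
direct sum of two module actions, written as matrices, becomes one
block-diagonal matrix acting on each summand separately:</p>

```haskell
-- A minimal sketch, assuming matrices as lists of rows over Int.
type Matrix = [[Int]]

-- Block-diagonal direct sum: pad each row of a with zeroes for b's
-- columns, and each row of b with zeroes for a's columns.
directSum :: Matrix -> Matrix -> Matrix
directSum a b =
  [ row ++ replicate nb 0 | row <- a ] ++
  [ replicate na 0 ++ row | row <- b ]
  where
    na = length (head a)
    nb = length (head b)

-- Apply a matrix to a vector.
apply :: Matrix -> [Int] -> [Int]
apply m v = [ sum (zipWith (*) row v) | row <- m ]
```

<p>For instance, the direct sum of a 1×1 scaling and a 2×2 coordinate swap
acts on the first coordinate completely independently of the last two -
the summands stay distinct, just as Krull-Schmidt promises.</p>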
<p>Under good circumstances, any such building block - any indecomposable
module - would also be simple, which is to say that it has no proper
nonzero subspace invariant under the action. If that is the case, then
the fun ends right about here, and all we need to care about is the
structure of simple modules, and then everything is done.</p>
<p>For the modular case though - such as group representations over a field
of characteristic <em>p</em>, where <em>p</em> divides the group order - we're not that
lucky. We end up with a situation where simples and indecomposables are
different. We can still use Krull-Schmidt, and reduce down to
indecomposable modules, but the indecomposable modules may very well be
quite complex in terms of simple modules. So we'd need to study both.
We'll start, however, with the part that's reasonably easy to study.</p>
</div>
<div class="section" id="simple-modules">
<h3>Simple modules</h3>
<p>A simple module is a module without proper nonzero submodules - that is,
a vectorspace with some action of the algebra that doesn't have proper
nonzero invariant subspaces.</p>
<p>If we figure out what these look like, we're a good step forward on our
path to understanding what modules can look like. In the ordinary case,
we're in some ways done - since simples are indecomposables, and any
other module a direct sum of such. For the modular case, though, there's
still work to be done.</p>
<p>But let us start with the very basics. Suppose A is a finite-dimensional
algebra with unit over a good enough field of characteristic p. A module
M is said to be simple if the only submodules of M are M and 0. It is
said to be <em>semisimple</em> if it is a direct sum of simple modules. An
algebra is simple or semisimple if it is this as a module over itself.</p>
<p>Now, submodules of semisimple modules are just selections of the simple
modules that make up the supermodule - since otherwise, we could
intersect the submodule with a summand, and get a proper nonzero
submodule of that summand. This is impossible, since the summands are simple.</p>
<p>In semisimple modules, every submodule is a direct summand. This means
that simple modules cannot glue together within semisimple modules - all
building blocks are distinct and have well defined boundaries.</p>
<div class="line-block">
<div class="line">There are two important tools we use to analyze algebras. One is the
radical, and the other is the socle. Both of these are given by a
bunch of equivalent descriptions:</div>
<div class="line-block">
<div class="line">The radical rad(A) of the algebra A is</div>
</div>
</div>
<ul class="simple">
<li>The smallest submodule I in A such that A/I is semisimple</li>
<li>The intersection of all the maximal submodules of A</li>
<li>The largest nilpotent ideal of A</li>
</ul>
<p>The radical of a module M is rad(A)M - the submodule of M obtained by
acting on M with the radical of the algebra.</p>
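<p>A concrete toy instance of a nonzero radical in the modular case (my own
example, not from the original post): over [tex]\mathbb F_2[/tex], the group
algebra of the cyclic group of order 2 has radical spanned by [tex]1+g[/tex].
That element squares to zero - so the ideal it spans is nilpotent - and the
quotient by it is the field [tex]\mathbb F_2[/tex] itself, which is
semisimple. A quick Haskell check, writing an element a·1 + b·g as a pair of
coefficients mod 2:</p>

```haskell
-- F_2[C_2]: the element a*1 + b*g stored as (a, b) with a, b in GF(2),
-- and the group relation g^2 = 1.
type F2C2 = (Int, Int)

-- Convolution product, reducing coefficients mod 2.
mul :: F2C2 -> F2C2 -> F2C2
mul (a, b) (c, d) = ((a*c + b*d) `mod` 2, (a*d + b*c) `mod` 2)

-- The generator of the radical: 1 + g.
onePlusG :: F2C2
onePlusG = (1, 1)
```

<p>Multiplying <code>onePlusG</code> by itself gives (0,0): the radical really is
nilpotent here, and that is precisely what fails to happen in the ordinary case.</p>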
</div>
</div>
Sudden responsibilities2007-03-28T17:41:00+02:00Michitag:blog.mikael.johanssons.org,2007-03-28:sudden-responsibilities.html<p>I just met up with the workgroup in the Deutsche Mathematikervereinigung
(German Association of Mathematicians) with interest spanning
"Information and Communication" - which turns out to mean that they care
about libraries, about communicative tools for mathematicians, and spend
their time thinking about these things, and meeting at conferences.</p>
<p>"Met up with" is to say: I participated in their workshop.</p>
<p>Stunned them all with the revelation that I blog. "Wow, a real, live
blogger! Here!"</p>
<p>And promptly got elected to their executive committee.</p>
<p>Kinda unexpected result of going to a conference.</p>
4th Carnival of Mathematics posted2007-03-23T13:05:00+01:00Michitag:blog.mikael.johanssons.org,2007-03-23:4th-carnival-of-mathematics-posted.html<p>The <a class="reference external" href="http://scienceblogs.com/evolutionblog/2007/03/the_carnival_of_mathematics_1.php">fourth Carnival of
Mathematics</a>
is up at <a class="reference external" href="http://scienceblogs.com/evolutionblog/">EvolutionBlog</a>.</p>
<p>Featured this time around are homological algebra, representation
theory, Rubik's cube solutions, Bernoulli processes, topology, number
theory, and much much more.</p>
Representation theory - basics2007-03-21T11:42:00+01:00Michitag:blog.mikael.johanssons.org,2007-03-21:representation-theory-basics.html<p>Many interesting
<a class="reference external" href="http://en.wikipedia.org/Group_%28mathematics%29">groups</a> have a very
geometrical definition: transformations that preserve certain symmetries
are one of the historical origins of group theory.</p>
<div class="line-block">
<div class="line">Thus, one of the most interesting classes of finite groups are the
rotation and reflection symmetries of a regular polygon. These are
called [tex]D_n[/tex], for an <em>n</em>-sided polygon. Thus, for a triangle,
we can label the corners <em>a,b,c</em>, reading clockwise, and enumerate the
possible transformations by the positions the corners end up in. Thus
we get the elements:</div>
<div class="line-block">
<div class="line">identity: abc -> abc</div>
<div class="line">τ: abc -> bca (rotation by 120 degrees)</div>
<div class="line">τ<sup>2</sup>: abc -> cab</div>
<div class="line">σ: abc -> acb (reflection fixing a)</div>
<div class="line">στ: abc -> bac</div>
<div class="line">στ<sup>2</sup>: abc -> cba</div>
</div>
</div>
<div class="line-block">
<div class="line">Now, if we fix one equilateral triangle - say the one spanned by the
points [tex]a=(1,0)[/tex], [tex]b=(-\frac12,\frac{\sqrt3}2)[/tex]
and [tex]c=(-\frac12,-\frac{\sqrt3}2)[/tex], then these
transformations of the triangle can be extended to rotations and
reflections of the entire 2-dimensional plane [tex]\mathbb R^2[/tex].
As such, we can write down matrices for the group elements, starting
with</div>
<div class="line-block">
<div class="line">[tex]\sigma=\begin{pmatrix}1&0\\0&-1\end{pmatrix}[/tex] and</div>
<div class="line">[tex]\tau=\begin{pmatrix}-\frac12 & -\frac{\sqrt3}2 \\
\frac{\sqrt3}2 & -\frac12\end{pmatrix}[/tex]</div>
<div class="line">The rest of the group elements we can realize as matrices by just
multiplying these with each other.</div>
</div>
</div>
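<p>One can sanity-check such matrices numerically. The following sketch is my
own (using Double arithmetic and a small tolerance, with my own choice of
representatives: the reflection fixing <em>a</em> and the counterclockwise
rotation by 120 degrees); it verifies the dihedral relations
[tex]\tau^3=\sigma^2=1[/tex] and [tex]\sigma\tau\sigma=\tau^2[/tex]:</p>

```haskell
-- A 2x2 real matrix as a pair of rows.
type M2 = ((Double, Double), (Double, Double))

-- Ordinary 2x2 matrix multiplication.
mmul :: M2 -> M2 -> M2
mmul ((a,b),(c,d)) ((e,f),(g,h)) =
  ((a*e + b*g, a*f + b*h), (c*e + d*g, c*f + d*h))

identity, sigma, tau :: M2
identity = ((1, 0), (0, 1))
sigma    = ((1, 0), (0, -1))          -- reflection in the x-axis: fixes a = (1,0)
tau      = ((-1/2, -s), (s, -1/2))    -- rotation by 120 degrees
  where s = sqrt 3 / 2

-- Entrywise comparison up to a small tolerance, since sqrt 3 is inexact.
close :: M2 -> M2 -> Bool
close ((a,b),(c,d)) ((e,f),(g,h)) =
  all (< 1e-9) (map abs [a-e, b-f, c-g, d-h])
```

<p>With these, <code>mmul tau (mmul tau tau)</code> and <code>mmul sigma sigma</code>
both come out as the identity, and conjugating the rotation by the reflection
inverts it - the defining relations of [tex]D_3[/tex].</p>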
<p>So now, all of a sudden, instead of just the abstract notion of a group,
we have a bunch of transformations of a specific <a class="reference external" href="http://en.wikipedia.org/wiki/Vector_space">vector
space</a>. This is neat,
interesting and well worth study. It gets even better since it turns out
that we can tell a lot about the group itself by studying vector spaces
that it acts on.</p>
<div class="section" id="representations-of-groups-and-rings">
<h2>Representations of groups and rings</h2>
<p>The example we just saw is one of the most obvious examples of a group
representation. There are two kinds of representations very commonly
used - one of them realizes the group as a geometric entity: a set of
transformations on a vector space, and the other - the permutation
representations - realizes the group as a bunch of
<a class="reference external" href="http://en.wikipedia.org/wiki/Permutation">permutations</a>.</p>
<div class="line-block">
<div class="line">For this post, I'm going to ignore the permutation representations to
a large extent, and instead focus on the vector space representations.
These are a special case of a more general entity: the
<a class="reference external" href="http://en.wikipedia.org/wiki/Module_%28mathematics%29">module</a>
over a ring. We can turn any group into a ring by considering linear
combinations of the group elements as formal signs, and defining
multiplication of two terms to be the product of the coefficients
coupled with the product, in the group, of the group elements. Thus,
for our [tex]D_3[/tex], above, we can form [tex]\mathbb RD_3[/tex]
as the set of all expressions on the form</div>
<div class="line-block">
<div class="line">[tex]a_1\cdot
1+a_\tau\cdot\tau+a_{\tau^2}\cdot\tau^2+a_\sigma\cdot\sigma+</div>
<div class="line">a_{\sigma\tau}\cdot\sigma\tau+a_{\sigma\tau^2}\cdot\sigma\tau^2[/tex]</div>
<div class="line">and in this, we get
[tex](2\cdot\tau+3\cdot\sigma)(5\cdot\tau^2+7\cdot\sigma\tau)=</div>
<div class="line">10\cdot1+14\cdot\sigma+15\cdot\sigma\tau^2+21\cdot\tau[/tex]</div>
</div>
</div>
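<p>This convolution multiplication is easy to mechanize. Here is a small sketch
of my own (not from the post), encoding the element [tex]\sigma^i\tau^j[/tex]
as the pair (i, j) with i mod 2 and j mod 3, and using the dihedral relation
[tex]\tau\sigma=\sigma\tau^2[/tex]; it reproduces the product computed above:</p>

```haskell
import Data.List (nub)

-- An element sigma^i tau^j of D_3, as (i mod 2, j mod 3).
type G = (Int, Int)

-- Group multiplication: moving sigma past tau^j inverts the rotation,
-- since tau . sigma = sigma . tau^2 (i.e. sigma . tau^(-1)).
mulG :: G -> G -> G
mulG (i, j) (k, l)
  | k == 0    = (i, (j + l) `mod` 3)
  | otherwise = ((i + k) `mod` 2, (l - j) `mod` 3)

-- A group-ring element: a list of (group element, coefficient) terms.
type GroupRing = [(G, Int)]

-- Multiply by convolving all pairs of terms, then collecting coefficients.
mulRing :: GroupRing -> GroupRing -> GroupRing
mulRing xs ys =
  collect [ (mulG g h, a * b) | (g, a) <- xs, (h, b) <- ys ]
  where
    collect ts = [ (g, sum [ c | (h, c) <- ts, h == g ])
                 | g <- nub (map fst ts) ]
```

<p>Feeding it 2·τ + 3·σ and 5·τ² + 7·στ gives back
10·1 + 14·σ + 15·στ² + 21·τ, exactly as in the displayed calculation.</p>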
<p>Now, we can view ordinary vector spaces over a field as a bunch of
vectors, and a set of geometric transformations of them; namely "only"
scaling. Multiplication by a scalar stretches the space. In this way,
looking at modules over an algebra over a field is an easy extension:
instead of only allowing stretching, we add other transformations, and
require that these transformations cooperate in a way that is described
by the ring we represent. Thus, we have some set of transformations of
the space that correspond to the ring elements, and require that
multiplying ring elements corresponds to composing transformations.</p>
<p>With the group ring described above, we thus get what we expect for the
group ring modules: they have transformations corresponding to the
elements of the group rings, and composing transformations corresponds
to multiplying elements of the group ring.</p>
<p>The benefit of this level of abstraction is that all of a sudden, we can
start building these geometric objects from other things than just
groups: finitely presented algebras, <a class="reference external" href="http://en.wikipedia.org/wiki/Quiver_%28mathematics%29">quiver
algebras</a>,
<a class="reference external" href="http://en.wikipedia.org/wiki/Category_%28mathematics%29">categories</a>,
<a class="reference external" href="http://en.wikipedia.org/wiki/Partially_ordered_set">partial orders</a>
- to just mention a few of the things we can study representations of.
For each of these, the representation theory is the study of the
structure of modules over the structures - i.e. ways to manipulate the
space with elements from the structure.</p>
<p>But, for now, back to the group case. Given the example above, it's not
entirely clear whether all group elements must be distinct in the
representation - so let me immediately state an answer to this: they
need not be. We only require that each group element is assigned some
linear transformation in a way that is coherent with the group
structure. Thus any group has the trivial representation, where every
group element is mapped to the identity map of the vector space.</p>
</div>
<div class="section" id="ordinary-contra-modular-representations">
<h2>Ordinary contra modular representations</h2>
<div class="line-block">
<div class="line">We distinguish between two types of representation theory: ordinary
and modular. In the ordinary case, we have a lucky break, namely</div>
<div class="line-block">
<div class="line"><strong>Maschke's theorem</strong>: <em>If θ is a representation of a group G in a
vector space V over a field k - i.e. a map that takes each group
element to a linear map V->V, and the characteristic of k does not
divide |G|, then any invariant subspace U in V has a complement W
such that [tex]V=U\oplus W[/tex] and the transformations from G keep
inside each component.</em></div>
</div>
</div>
<p>This means, furthermore, that every matrix representing a group element
can be written in a form with blocks chained along the diagonal and
zeroes everywhere else.</p>
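<p>A hedged toy illustration of the theorem (my own example, not from the
original post): let [tex]C_2=\{1,g\}[/tex] act on [tex]k^2[/tex] by letting
[tex]g[/tex] swap the two coordinates. The diagonal
[tex]U=\langle(1,1)\rangle[/tex] is an invariant subspace, and as long as the
characteristic of [tex]k[/tex] is not 2, the antidiagonal
[tex]W=\langle(1,-1)\rangle[/tex] is an invariant complement, so that
[tex]k^2=U\oplus W[/tex]. In characteristic 2, however,
[tex](1,-1)=(1,1)[/tex]: the two lines coincide, and the invariant subspace
[tex]U[/tex] has no invariant complement at all - which is exactly the failure
mode the modular theory has to deal with.</p>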
<p>This has a couple of alternative formulations. One of the formulations
that I like the most is that the higher Ext-groups always vanish if the
characteristic of the field doesn't divide the group order.</p>
<p>It can happen that we cannot find any proper invariant subspaces
under the group action - in this case we have what is called a <em>simple
module</em>. By Maschke's theorem, this is the same thing as being
<em>irreducible</em> - which means that we cannot decompose it into a direct
sum, or equivalently that the matrix representations have only one
block, filling out the entire matrix. Thus the simple modules end up
being the building blocks of modules - and Maschke's theorem tells us
that when we build modules, the building blocks remain distinct.</p>
<p>Because we have Maschke's theorem for the ordinary case, things turn out
to be reasonably easy, and we end up studying mainly the trace of the
matrices that the group elements map to. This turns out to be
independent of the basis choice for the space we're looking at, and a
very important invariant of nice representation situations - called the
<em>character</em>.</p>
<p>Suppose the condition in Maschke's theorem doesn't hold. That is, we
have a field k of characteristic p - so that in that field 0=1+1+...+1
(p times). Over this field, we throw up a vector space, and we make sure
that a finite group with p dividing the group order has some sort of
action on the vector space - i.e. we have transformations corresponding
to the group elements.</p>
<p>We cannot trust that the simple modules glue together neatly - it's
quite probable that they end up sticking together tighter than
expected. All is not lost, however. First off, the ways that the simple
modules can stick together are parametrized by the Ext-groups - or in
topological terms, the cohomology of the group.</p>
<p>Over the coming time I will explore the modular representation theory in
this blog - as I learn the details myself. If you follow these posts,
I'll tell you about decompositions of algebras, about Brauer characters
- which is a way to adapt the character theory that ordinary
representation theory is all about to the modular case, about blocks and
most probably also about Donovan's conjecture.</p>
</div>
Carnival of Mathematics #32007-03-09T10:52:00+01:00Michitag:blog.mikael.johanssons.org,2007-03-09:carnival-of-mathematics-3.html<p>And it is with pride that I welcome you all to my first issue, and the
third issue all in all, of the <a class="reference external" href="http://carnivalofmathematics.wordpress.com">Carnival of
Mathematics</a>. I probably
should apologize as well - my announcement stated March 8th, but that
was before I really looked at the dates involved, so we did, alas, miss
the international women's day. We haven't had quite the rush that <a class="reference external" href="http://scienceblogs.com/goodmath/">Mark
CC</a> enjoyed, but we'll make a good
one even so.</p>
<p>First out, from the first half of our submissions, we have a grand tour
of didactic topics, starting out with Michael Tang, who shows us <a class="reference external" href="http://michaeltang-math.blogspot.com/2007/02/why-negative-times-negative-makes.html">why
negative times negative is
positive</a>,
with a touch of ring theory into the mix. Following that, Rebecca
Newburn <a class="reference external" href="http://msnewburn.wordpress.com/2007/02/20/solving-equations-strategies/">discusses equation solving
strategies</a>
and Laurie Bluedorn <a class="reference external" href="http://www.triviumpursuit.com/blog/2007/03/05/research-on-the-teaching-of-math-formal-arithmetic-at-age-ten-hurried-or-delayed/">takes a historical view on the age of introduction
of formal
arithmetic</a>.
To finish it up, jd2718 tells us about <a class="reference external" href="http://jd2718.wordpress.com/2007/03/08/teaching-math-oops/">teaching complex
numbers</a>
and your humble host has a <a class="reference external" href="http://blog.mikael.johanssons.org/archive/2006/04/why-i-keep-organizing-congresses/">manifesto of sorts about stimulating strong
students</a>.</p>
<p>From the financial mathematics half of our submission, Matthew Paulson
<a class="reference external" href="http://getting-green.blogspot.com/2007/02/are-payday-loans-ever-good-idea.html">analyzes payday
loans</a>.</p>
<p>For the third half of the submissions, we look at humor. Scott Cram
gives us <a class="reference external" href="http://headinside.blogspot.com/2007/03/mathematical-humor.html">mathematical humor
cavalcade</a>,
<a class="reference external" href="http://headinside.blogspot.com/2007/01/fun-with-pi.html">funky tricks with
π</a> and <a class="reference external" href="http://headinside.blogspot.com/2007/01/more-fun-with-pi.html">more
funky tricks with
π</a>,
including how to calculate it with frozen hotdogs. Denise brings us a
<a class="reference external" href="http://letsplaymath.wordpress.com/2007/02/02/all-odd-numbers-are-prime-a-corollary/">corollary to the well-known theorem that all odd numbers are
prime</a>.</p>
<p>The fourth half of our submissions climb the dimensions ladder. Eric
Kidd brings us a <a class="reference external" href="http://www.randomhacks.net/articles/2007/03/07/hefferon-linear-algebra-review">review of a newly released e-book in linear
algebra</a>,
and Lynet discusses <a class="reference external" href="http://elliptica.blogspot.com/2007/02/but-if-you-cant-picture-it.html">visualization and abstraction of higher dimensional
entities</a>.</p>
<p>The fifth half concerns itself with number theory, geometry, topology
and algebra. We start out with Mark Dominus who discusses <a class="reference external" href="http://blog.plover.com/math/partitions-8-13.html">integer
partitions</a> and odd
correspondences. Charles Daney chimes in with <a class="reference external" href="http://scienceandreason.blogspot.com/2007/03/diophantine-equations.html">diophantine
equations</a>
- and we use it immediately, as Foxy takes us along, finding <a class="reference external" href="http://foxmath.blogspot.com/2007/02/magic-squares-and-unsolved-problems.html">square
sums of squares in magic
squares</a>.
For the geometric side, polymath has a <a class="reference external" href="http://polymathematics.typepad.com/polymath/2007/01/proof_of_morley.html">proof of Morley's
theorem</a>,
and trust me: you do want to read the proof before you look up the
theorem. Alon Levy raises the level another notch in a discussion of
<a class="reference external" href="http://abstractnonsense.wordpress.com/2007/03/07/galois-theory-infinite-galois-theory/">infinite galois
theory</a>
- a part of his ongoing series on galois theory. Mark Chu-Carroll steers
us over to the topology side with a discussion of
<a class="reference external" href="http://scienceblogs.com/goodmath/2007/03/homotopy_1.php">homotopy</a> -
one of my own favourite subjects - and follows up with an introduction
to <a class="reference external" href="http://scienceblogs.com/goodmath/2007/03/simplices_and_simplicial_compl.php">simplicial
complexes</a>:
an invaluable tool for algebraic topology. Which brings me to the last
post - also from your humble host: <a class="reference external" href="//blog.mikael.johanssons.org/archive/2006/01/introduction-to-algebraic-topology-and-related-topics-i/">an introduction to algebraic
topology</a>.</p>
<p>The next Carnival of Mathematics will be hosted by Jason Rosenhouse and
the <a class="reference external" href="http://scienceblogs.com/evolutionblog/">Evolution Blog</a>.
Submissions go through Alon Levy at alon_levy1 (at) yahoo.com, through
the <a class="reference external" href="http://blogcarnival.com/bc/cprof_1049.html">submission tool</a> or
could even be forwarded by me - reachable at mikael (at) johanssons.org</p>
Bright students and topology2007-03-02T15:39:00+01:00Michitag:blog.mikael.johanssons.org,2007-03-02:bright-students-and-topology.html<p>Today, I started an experiment together with the local specialised
secondary school. I'll be taking care of two of their brightest
students, meeting them roughly once a week, and taking them on a charge
through algebraic topology. At the far end shimmers knot theory and
other funky applications; and on the way there, I hope for many
interesting spinoffs and avenues.</p>
<p>They got, today, Armstrong's Basic Topology, and an extract from the
German topology book by J</p>
Carnival of mathematics, issue 22007-02-23T22:14:00+01:00Michitag:blog.mikael.johanssons.org,2007-02-23:carnival-of-mathematics-issue-2.html<p>The <a class="reference external" href="http://scienceblogs.com/goodmath/2007/02/the_second_carnival_of_mathema_1.php">second carnival of
mathematics</a>
is up over at <a class="reference external" href="http://scienceblogs.com/goodmath">Good Math, Bad
Math</a>. It's again a nice, good
read. Go.</p>
<p>The third carnival of mathematics is to be hosted by yours truly on the
<a class="reference external" href="http://en.wikipedia.org/wiki/International_Women%27s_Day">International Women's
Day</a> (maybe
we can get a theme going? Women in mathematics, anyone?). Submissions to
<a class="reference external" href="mailto:mikael@johanssons.org">me directly</a> or over the <a class="reference external" href="http://blogcarnival.com/bc/cprof_1049.html">submission
form</a>.</p>
The Order of the Science Scouts of Exemplary Repute and Above Average Physique2007-02-20T18:18:00+01:00Michitag:blog.mikael.johanssons.org,2007-02-20:osseraap.html<p>These badges are also displayed on my About page, but the explanation
why merits a blog post of its own.</p>
<div class="line-block">
<div class="line"><a class="reference external" href="http://scq.ubc.ca/sciencescouts/index.html#1"><img alt="image0" src="http://scq.ubc.ca/sciencescouts/01talk.jpg" /></a></div>
<div class="line-block">
<div class="line">For one thing, I blog about the stuff. I also was a radio speaker at
the time that <a class="reference external" href="http://radioufs.com">Radio Unga Forskare</a> was
active.</div>
</div>
</div>
<div class="line-block">
<div class="line"><a class="reference external" href="http://scq.ubc.ca/sciencescouts/index.html#6"><img alt="image1" src="http://scq.ubc.ca/sciencescouts/06blog.jpg" /></a></div>
<div class="line-block">
<div class="line">This blog. Do I <em>really</em> need to say more?</div>
</div>
</div>
<div class="line-block">
<div class="line"><a class="reference external" href="http://scq.ubc.ca/sciencescouts/index.html#8"><img alt="image2" src="http://scq.ubc.ca/sciencescouts/08openflame.jpg" /></a></div>
<div class="line-block">
<div class="line">I'm an old boy scout. I came 3<sup>rd</sup> in the national quals for
the International Chemistry Olympiad back when. I do know how to
handle a flame in a lab.</div>
</div>
</div>
<div class="line-block">
<div class="line"><a class="reference external" href="http://scq.ubc.ca/sciencescouts/index.html#20"><img alt="image3" src="http://scq.ubc.ca/sciencescouts/19ice1.jpg" /></a></div>
<div class="line-block">
<div class="line">Freezer? Sure. Who hasn't?</div>
</div>
</div>
<div class="line-block">
<div class="line"><a class="reference external" href="http://scq.ubc.ca/sciencescouts/index.html#21"><img alt="image4" src="http://scq.ubc.ca/sciencescouts/20ice2.jpg" /></a></div>
<div class="line-block">
<div class="line">Dry ice? Of course. Unga Forskare (the association of Young
Scientists) was responsible for this.</div>
</div>
</div>
<div class="line-block">
<div class="line"><a class="reference external" href="http://scq.ubc.ca/sciencescouts/index.html#22"><img alt="image5" src="http://scq.ubc.ca/sciencescouts/21ice3.jpg" /></a></div>
<div class="line-block">
<div class="line">Liquid nitrogen? Oh yes! Favourite pastime among Unga Forskare.</div>
</div>
</div>
<div class="line-block">
<div class="line"><a class="reference external" href="http://scq.ubc.ca/sciencescouts/index.html#24"><img alt="image6" src="http://scq.ubc.ca/sciencescouts/23computer.jpg" /></a></div>
<div class="line-block">
<div class="line">I really do know more computer languages than quite a few people.
Then again, there are some who know more than I do, and sure, most of
those end up reading this blog, so .. well .. call it a draw, shall
we?</div>
</div>
</div>
<div class="line-block">
<div class="line"><a class="reference external" href="http://scq.ubc.ca/sciencescouts/index.html#29"><img alt="image7" src="http://scq.ubc.ca/sciencescouts/26math.jpg" /></a></div>
<div class="line-block">
<div class="line">I'm getting paid to do mathematics. Of course I can crush you with my
math prowess. Fear me!</div>
</div>
</div>
<div class="line-block">
<div class="line"><a class="reference external" href="http://scq.ubc.ca/sciencescouts/index.html#32"><img alt="image8" src="http://scq.ubc.ca/sciencescouts/31useless.jpg" /></a></div>
<div class="line-block">
<div class="line">Again. I'm a pure mathematician. Algebraist. It's the very paragon of
useless science.</div>
</div>
</div>
Response to Heath Raftery2007-02-09T23:57:00+01:00Michitag:blog.mikael.johanssons.org,2007-02-09:response-to-heath-raftery.html<p>While I'm still on the subject of writing code with the
<a class="reference external" href="http://web.engr.oregonstate.edu/~erwig/pfp/">PFP</a> library, I may as
well join in on a discussion that got pulled into the <a class="reference external" href="http://abstractnonsense.wordpress.com/2007/02/09/carnival-of-mathematics-inaugural-edition/">Carnival of
Mathematics</a>
exposition.</p>
<p>Heath Raftery writes about <a class="reference external" href="http://heath.hrsoftworks.net/archives/000036.html">weird probabilities in dice
discussions</a>, a
problem very much reminiscent of the <a class="reference external" href="http://en.wikipedia.org/wiki/Monty_Hall_problem">Monty Hall
problem</a>, both in
the amount of controversy it generates when people discuss it and - it
seems for at least half the people in the discussion so far - in how
much the terms used end up confusing people.</p>
<p>So, in a comment to the Carnival post, the suggestion came from
<strong>Alex</strong>, that a simulation be used to settle the whole question. And
since I just wrote some things with PFP, I might as well write another!</p>
<p>The question, as posed by Heath, is</p>
<blockquote>
Two dice are rolled simultaneously. Given that one die shows a "4",
what is the probability that the total on the uppermost faces of the
two dice is "7"?</blockquote>
<p>so in order to settle it, I'm going to let my computer roll a whole lot
of dice. Whenever at least one die shows a 4, I'm going to tally the
roll, and whenever the sum is also 7, I'm going to tally it as a successful
roll. The quotient of these tallies will approximate the probability.</p>
<p>I end up with the following code</p>
<pre class="literal-block">
import Probability
import Control.Monad

-- A die roll is a uniform random number in [1..6]
die = uniform [1..6::Int]

-- Two dice is just a pair of independently rolled dice
twoDice = prod die die

tallyOne :: IO (Int, Int)
tallyOne = do
  (d1, d2) <- pick twoDice
  let (a,b) | d1 == 4 || d2 == 4 = if d1+d2 == 7 then (1,1) else (1,0)
            | otherwise          = (0,0)
  return (a,b)

sumPair (a,b) (c,d) = (a+c, b+d)

simulate n = (fmap (foldr sumPair (0,0)) . sequence . take n . repeat) tallyOne
simulation n = fmap (\(x,y) -> fromIntegral y / fromIntegral x) $ simulate n
</pre>
<p>which pretty much lets me enter a number of tests to perform, and
returns the resulting quotient of tallied successes by tallied valid
rolls.</p>
<p>While playing around with it, I discover that at about 100 000 rolls,
the execution outgrows the stack, and so I partition the simulation into
pieces of 50 000. This yields me the following</p>
<pre class="literal-block">
*DiceSim> ( fmap (unlines . map show) $ sequence $ take 10
$ repeat $ simulation 50000 ) >>= putStrLn
</pre>
<p>which then -- after several minutes execution time, gives a simulated
dataset, namely</p>
<pre class="literal-block">
0.176424464192368
0.1825110045332107
0.18224936651289714
0.18589282208990382
0.18272166287755764
0.18133159268929505
0.18697201689545934
0.17798213004630536
0.1818062827225131
0.18456002616944717
</pre>
<p>Plugging it into my OpenOffice calculating tool, I retrieve a mean of
0.1822451368729 and a standard deviation of 0.0032384171164.</p>
<div class="line-block">
<div class="line">Comparing these, now, to the numerical values of</div>
<div class="line-block">
<div class="line">1/6 = 0.166666666666...</div>
<div class="line">2/11 = 0.18181818181818181818...</div>
<div class="line">makes me conclude that, sorry, the value within 0.13σ of
our experimental mean simply feels much more reliable than the one
almost 5σ away.</div>
</div>
</div>
<p>Thus, in the end, my bet goes with the teacher of Heath: 2/11, and for
about the same reasons as in the Monty Hall problem. As an invitation to
Heath: if you disagree with my model, please propose how to code or
pseudocode the model as you interpret it, so that we can simulate and test it!</p>
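<p>For what it's worth, I suspect the stack growth comes from the lazy <code>foldr sumPair</code> over the full list of results that <code>sequence</code> builds. A sketch of the same tally with a strict accumulator instead, with plain <code>System.Random</code> standing in for PFP's <code>pick</code> so the snippet is self-contained (that substitution is mine, not part of the original code):</p>

```haskell
{-# LANGUAGE BangPatterns #-}
import System.Random (randomRIO)

-- One conditioned roll: (valid, success) tallies, as in the PFP version
tallyOne' :: IO (Int, Int)
tallyOne' = do
  d1 <- randomRIO (1, 6 :: Int)
  d2 <- randomRIO (1, 6)
  return $ if d1 == 4 || d2 == 4
             then (1, if d1 + d2 == 7 then 1 else 0)
             else (0, 0)

-- Strict accumulation over n rolls: no intermediate list, no stack blowup
simulate' :: Int -> IO (Int, Int)
simulate' n = go n 0 0
  where
    go 0 !v !s = return (v, s)
    go k !v !s = do
      (a, b) <- tallyOne'
      go (k - 1) (v + a) (s + b)

simulation' :: Int -> IO Double
simulation' n = do
  (v, s) <- simulate' n
  return (fromIntegral s / fromIntegral v)
```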
More silly random text2007-02-09T17:34:00+01:00Michitag:blog.mikael.johanssons.org,2007-02-09:more-silly-random-text.html<p>Syntaxfree writes over at his blog about a silly little toy he wrote,
using the <a class="reference external" href="http://web.engr.oregonstate.edu/~erwig/pfp/">PFP library</a>,
to <a class="reference external" href="http://syntaxfree.wordpress.com/2007/02/09/some-probabilistic-text-generation/">generate random
text</a>.</p>
<p>Now, his text is unreadable. I mean, it's even unpronounceable. Why?
Because he's looking at bigram distributions of <em>letters</em>.</p>
<p>Great, I thought, I'll do him one better. Random text using bigram
distributions on words must surely be a LOT better than random text
using bigram distributions on letters. At least the words come out
readable, and they may even come out in a decent order.</p>
<p>So I sat down with his code, and hacked, tweaked, and monadized it to
this</p>
<pre class="literal-block">
module Test where

import Probability
import Data.Char
import Control.Monad

filename = "kjv.genesis"

bigram t = zip ws (tail ws)
  where ws = (words . map toLower . filter (\x -> isAlpha x || isSpace x)) t

distro = uniform . bigram
goal = readFile filename
goalD = fmap distro goal

one = do
  gD <- goalD
  (a,b) <- pick gD
  return [a,b]

many n = fmap unwords $ fmap concat $ sequence $ take n $ repeat one
</pre>
<p>So, I have some corpus -- I just pulled the King James Genesis to have
some sort of body of text to work with, and saved it into the
<em>kjv.genesis</em> file. Then, I can pop over to my beloved GHCi and execute</p>
<pre class="literal-block">
Prelude> :l Test
[1 of 4] Compiling ListUtils ( ListUtils.hs, interpreted )
[2 of 4] Compiling Show ( Show.hs, interpreted )
[3 of 4] Compiling Probability ( Probability.hs, interpreted )
[4 of 4] Compiling Test ( Test.hs, interpreted )
Ok, modules loaded: Show, Test, Probability, ListUtils.
*Test> many 30
Loading package haskell98 ... linking ... done.
"house and it to circumcised all cool of these things daughters from ye go dreamed for stead and in unto god of for wives done to i give god shall bowed himself and shem tower of small and said lord he was called his the days these are thither therefore cainan and and he rachel and hear my sons born"
</pre>
<p>The first execution will take a while, since it has to, y'know, digest
the actual text, calculate distributions, and set everything up.</p>
<p>Subsequent executions also take quite some time, and I'm not at all
certain why. An explanation would be nice, if someone has it.</p>
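<p>My own guess at the slowdown, for the record: <code>one</code> runs <code>goalD</code> afresh every time, so each sample re-reads the corpus and rebuilds the whole distribution. Reading and digesting once, then sampling repeatedly, should fix it. A self-contained sketch of that idea, using <code>System.Random</code> in place of PFP's <code>pick</code> (an assumption of mine, to keep the snippet runnable on its own):</p>

```haskell
import Control.Monad (replicateM)
import Data.Char (isAlpha, isSpace, toLower)
import System.Random (randomRIO)

-- Same bigram extraction as in the post, as a pure function
bigrams :: String -> [(String, String)]
bigrams t = zip ws (tail ws)
  where ws = words (map toLower (filter (\x -> isAlpha x || isSpace x) t))

-- Digest the corpus once; each sample is then just an index lookup
manyShared :: Int -> String -> IO String
manyShared n corpus = do
  let bs = bigrams corpus
  is <- replicateM n (randomRIO (0, length bs - 1))
  return (unwords [w | i <- is, let (a, b) = bs !! i, w <- [a, b]])
```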
<p>And for some sample poetry, I give you</p>
<blockquote>
<p>house and it to circumcised all cool of these things daughters from
ye go dreamed for stead and in unto god of for wives done to i give
god shall bowed himself and shem tower of small and said lord he was
called his the days these are thither therefore cainan and and he
rachel and hear my sons born</p>
<p>when isaac dry land unto him isaac and have i children struggled
lord hath had eaten goods which morning was ye shall by the all the
me this naphtali and years old was her to see of the his brother out
of names after they feed thy mothers of ephron god said he put him
into after his the tent</p>
<p>unto us i will to him i will the presence hands to his right name
was possession of days journey down in that thou perizzites and and
he they for god made that is of the twentys sake jacob so itself
after thou and of anah years and in isaac pillar of and the builded
a of canaan noah and</p>
</blockquote>
First carnival of mathematics is up!2007-02-09T13:47:00+01:00Michitag:blog.mikael.johanssons.org,2007-02-09:first-carnival-of-mathematics-is-up.html<p>Over at Alon's place, <a class="reference external" href="http://abstractnonsense.wordpress.com">Abstract
Nonsense</a>, the <a class="reference external" href="http://abstractnonsense.wordpress.com/2007/02/09/carnival-of-mathematics-inaugural-edition/">first
issue</a>
of the fortnightly <a class="reference external" href="http://carnivalofmathematics.wordpress.com/">Carnival of
Mathematics</a> is up.</p>
<p>Go there. Read. There's a LOT of good blog posts there.</p>
D8 revisited2007-02-07T18:36:00+01:00Michitag:blog.mikael.johanssons.org,2007-02-07:d8-revisited.html<p>I have previously <a class="reference external" href="http://blog.mikael.johanssons.org/archive/2006/11/an-a-structure-on-the-cohomology-of-d8/">calculated the A<sub>∞</sub>-structure for the cohomology ring of D<sub>8</sub></a>.
Now, while trying to figure out how to make my work continue from here,
I tried working out what algebra this would have come from, assuming
that I can adapt Keller's higher multiplication theorem to group
algebras.</p>
<p>A success here would be very good news indeed, since for one it would
indicate that such an adaptation should be possible, and for another it
would possibly give me a way to lend strength both to the previous
calculation and to a conjecture I have in the calculation of group
cohomology with A<sub>∞</sub> means.</p>
<p>So, we start. We recover, from the previous post, the structure of the
cohomology ring as <em>k[x,y,z]/(xy)</em>, with <em>x,y</em> in degree 1, and <em>z</em> in
degree 2. Furthermore, we have a higher operation, <em>m<sub>4</sub></em>, with
<em>m<sub>4</sub>(x,y,x,y)=m<sub>4</sub>(y,x,y,x)=z</em>.</p>
<div class="line-block">
<div class="line">Thus, we invoke the theorem stating that the maps</div>
<div class="line-block">
<div class="line">[tex]m_n\colon(\operatorname{Ext}^1(S,S))^{\otimes
n}\to\operatorname{Ext}^2(S,S)[/tex]</div>
<div class="line">are the duals of the maps</div>
<div class="line">[tex]i_n\colon R_n\to A_1^{\otimes n}[/tex]</div>
<div class="line">embedding the relators of the original algebra into the tensor
algebra over the generators.</div>
</div>
</div>
<div class="line-block">
<div class="line">So, for our case, we have the maps</div>
<div class="line-block">
<div class="line">[tex]m_2(x,x)=x^2[/tex]</div>
<div class="line">[tex]m_2(y,y)=y^2[/tex]</div>
<div class="line">[tex]m_4(x,y,x,y)=z[/tex]</div>
<div class="line">[tex]m_4(y,x,y,x)=z[/tex]</div>
<div class="line">and we look to dualizing them. Considering for a while what this all
means, and fixing notation, with <em>a,b</em> the generators, dual to <em>x,y</em>,
we end up with the maps</div>
<div class="line">[tex]i_2((x^2)^*)=a^2[/tex]</div>
<div class="line">[tex]i_2((y^2)^*)=b^2[/tex]</div>
<div class="line">[tex]i_4(z^*)=abab+baba[/tex]</div>
<div class="line">from which we may recover a presentation of the group algebra as</div>
<div class="line">[tex]k\langle a,b\rangle/\langle a^2,b^2,abab+baba\rangle[/tex]</div>
</div>
</div>
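<p>To spell the dualization step out (the pairing conventions here are my own way of phrasing it): with <em>a,b</em> a basis of the degree-1 part and <em>x,y</em> the dual basis of Ext<sup>1</sup>, the pairing extends factorwise to tensor powers, and m<sub>4</sub>(x,y,x,y)=m<sub>4</sub>(y,x,y,x)=z says precisely that z<sup>*</sup> pairs to 1 with abab and baba, and to 0 with every other degree-4 monomial:</p>

```latex
% factorwise pairing between (Ext^1)^{\otimes 4} and A_1^{\otimes 4}
\langle x\otimes y\otimes x\otimes y,\ abab\rangle
  = \langle y\otimes x\otimes y\otimes x,\ baba\rangle = 1,
% all other degree-4 monomials in a,b pair to zero with these tensors,
% so over \mathbb{F}_2 the dual map must be
i_4(z^*) = abab + baba.
```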
<p>As such, this is an unqualified success. We recover our original group
algebra presentation from the A<sub>∞</sub>-structure, and thus should be
able to do the same as a completeness test for future calculations as
well. This, of course, needs to be proven before relied upon, but it
lends credence to my hopes.</p>
Announcing the Carnival of Mathematics2007-02-02T11:09:00+01:00Michitag:blog.mikael.johanssons.org,2007-02-02:announcing-the-carnival-of-mathematics.html<p>Alon Levy, over at <a class="reference external" href="http://abstractnonsense.wordpress.com/">Abstract
Nonsense</a> has just announced
the first issue of a brand new Blog Carnival: the <a class="reference external" href="http://abstractnonsense.wordpress.com/2007/01/31/carnival-of-mathematics/">Carnival of
Mathematics</a>.</p>
<p>Go take a look. Submit your own blog posts. And then check it out in a
week - the carnival is scheduled for the 9th.</p>
On religion2007-01-27T22:06:00+01:00Michitag:blog.mikael.johanssons.org,2007-01-27:on-religion.html<p>I decided on a whim to look in at the
<a class="reference external" href="http://dilbertblog.typepad.com">Dilbertblog</a>, where the <a class="reference external" href="http://dilbertblog.typepad.com/the_dilbert_blog/2007/01/irrational_athe.html">top
post</a>
at the moment has Scott Adams calling all atheists that discuss on the
net irrational, using a rather neat strawman carbon copy of most
discussions of faith between believers (i.e. mostly Christians) and
atheists he has seen on the web.</p>
<p>At the end of it comes a question: supposing it could be established
that there's a truly infinitesimal (in the physical sense, not the
mathematical) chance of a God existing - he quotes "one in a trillion"
as a model probability; would it then be fair of an atheist to claim the
non-existence to be proven?</p>
<p>Somehow, this illustrates clearly the thesis I occasionally pronounce
(when I'm feeling belligerent) that mathematics and theology have a lot
more in common than mathematics and physics. For a physicist, the quoted
probability of failure is a dream: most physical 'truths' are much
looser than that. However, for very many people, the choice between
self-identifying as an atheist and as an agnostic falls in the agnostic
camp, since Absolute Certainty can never be reached in such a question.
For many more, the Lack of Absolute Certainty takes on a handy
argumentative baseball bat shape for Believers, who use it to cast doubt
on their supposedly wholly rational argumentative opponents.</p>
<p>I'm a mathematician. I acknowledge that things may not be what I
perceive them to be. I realize that this may make me into an agnostic -
HOWEVER, labeling myself as agnostic doesn't change anything. I
still view the existence of a God, or several, of descriptions known to
man or unknown, as mainly irrelevant.</p>
<p>If you get something out of it - fine, that's great. I, however, don't,
and therefore view the hypothesis as irrelevant. I don't believe in a
God, and very much not in the (sorry, a) Christian God, since I find the
hypothesis preposterous, and of very little impact on my own life.</p>
<div class="line-block">
<div class="line">I was once held up on my way from somewhere to somewhere else by a
Mormon missionary, who in the course of our discussion put forth the
thesis that humans are incapable of acting morally without the
presence of a Watchful and Vengeful God.</div>
<div class="line-block">
<div class="line">I, on the other hand, think that most people will want to live in a
society where most if not preferably all act according to some set of
common morals, and therefore will help impose morals on their
community. This regardless of whether a God is used as impositional
tool or not. It's a useful social construct for getting your way,
but in no way a necessary thesis.</div>
</div>
</div>
<p>That would be my answer to Scott Adams - too long to fit in the already
too long answer thread.</p>
Retrospection 20062006-12-30T19:31:00+01:00Michitag:blog.mikael.johanssons.org,2006-12-30:retrospection-2006.html<p>Inspired by other bloggers on <a class="reference external" href="http://planet.haskell.org">Planet
Haskell</a>, I thought I'd just sit down and
write a retrospection post, reviewing the past year - primarily from
angles such as mathematics, computers and my generic life situation.</p>
<p>It divides neatly into two different sections: the months as a
commercial programmer and the months as PhD student and academic
careerist.</p>
<p>The year began still working for Teleca Systems, and with security
consulting for Stockholm-based firms and frequent trips back home.</p>
<p>Then, as the year went on and my PhD applications grew more numerous, I
started getting results. I got invited to Bonn for an interview with the
Homology and Homotopy graduate school program - which in the end
turned me down because I was more of a homological algebraist than a
topologist. And the week after that, I was invited to Jena for an
interview for a position doing PhD work on computational homological
algebra. The interview went well, the potential advisor was nice (and a
once-roleplaying gamer to sweeten the deal more) and I got the position
just a few days later.</p>
<p>While in Switzerland.</p>
<p>Skiing.</p>
<p>On a weeklong luxurious conference in operad theory at a skiing resort
in the western part of the swiss alps. Lots of skiing, fantastic food,
lots of REALLY COOL people (I'm looking at you, pozorvlak, among others)
and lots of fascinating mathematics.</p>
<p>As I returned there, it was time to wrap up my position with Teleca,
disengage from all my duties, document everything I had done, train
everyone else in my stuff, and get out of there. Preferably before the
summer term started. Oh, and get through the bureaucracy involved in
getting hired at a German university. It's a fascinating experience, I
can tell you. Especially when you get asked whether you were involved
with the Stasi back in communist East Germany.</p>
<p>Freshly hired, I had a week to get my affairs in order, and then got
thrown on my first bunch of students. Teaching and reading up on
cooooool mathematics for several months, leaving a little bit for visits
to Sweden, and then hitting the vacation hard with a long relaxing visit
to Sweden, and a marriage proposal.</p>
<p>The summer also saw the Pirate Bay raid, my heightened interest in the
politics of privacy, and membership in the Pirate Party, the election,
and following the election results explicitly with attention to the
Pirate Party results.</p>
<p>After the vacation came conference summer, stretching August through
September, with between 2 and 4 conferences depending on how you count
them, and a little bit of research. At the end of this, a new term
started and with that a new direction for my PhD research. This was, at
the same time, the period when I got enamoured with Haskell, read up on
it, grew more and more interested, fascinated, captivated.</p>
<p>During the time in Jena, I got involved in a roleplaying troupe, in a
mathematical magazine (with two articles published by now!), in a group
organizing math camps (participating in one camp), and getting hold,
slowly, of a decently sized social circle.</p>
<p>And then the end of the year hit before I knew what happened. It ends
tomorrow - and I have things hanging in idle loops swarming my head like
angry bees and waiting to be channeled through my consciousness out on
paper and code, and get me running with my research.</p>
<p>That's my year in review. How was yours?</p>
An A∞-structure on the cohomology of D82006-11-30T14:28:00+01:00Michitag:blog.mikael.johanssons.org,2006-11-30:an-a-structure-on-the-cohomology-of-d8.html<p>As a first unknown (kinda, sorta, it still falls under the category of
path algebra quotients treated by Keller) A<sub>∞</sub>-calculation, I
shall find the A<sub>∞</sub>-structure of [tex]H^*(D_8,\mathbb
F_2)[/tex].</p>
<div class="line-block">
<div class="line">To do this, I fix the group algebra</div>
<div class="line-block">
<div class="line">[tex]\Lambda=\mathbb F_2[a,b]/(a^2,b^2,abab+baba)[/tex]</div>
<div class="line">and the cohomology ring</div>
<div class="line">[tex]\Gamma=\mathbb F_2[x,y,z]/(xy)[/tex]</div>
<div class="line">with [tex]x,y\in\Gamma_1[/tex], [tex]z\in\Gamma_2[/tex]</div>
</div>
</div>
<div class="line-block">
<div class="line">Furthermore, we pick a canonical nice resolution P, continuing the one
I <a class="reference external" href="http://blog.mikael.johanssons.org/archive/2006/06/more-group-cohomology/">calculated
previously</a>.
This has Λ<sup>i+1</sup> as its i-th component, and the differentials
looking like</div>
<div class="line-block">
<div class="line">[tex]\begin{pmatrix}</div>
<div class="line">a&0&bab&0&0&0\\</div>
<div class="line">0&b&0&aba&0&0\\</div>
<div class="line">0&0&a&0&bab&0\\</div>
<div class="line">0&0&0&b&0&aba\\</div>
<div class="line">0&0&0&0&a&b</div>
<div class="line">\end{pmatrix}[/tex]</div>
<div class="line">for differentials starting in odd degree, and</div>
<div class="line">[tex]\begin{pmatrix}</div>
<div class="line">a&0&bab&0&0\\</div>
<div class="line">0&b&0&aba&0\\</div>
<div class="line">0&0&a&0&bab\\</div>
<div class="line">0&0&0&b&aba</div>
<div class="line">\end{pmatrix}[/tex]</div>
<div class="line">for differentials starting in even degree. The first few you can see
on the previous calculation, or if you don't want to bother, they are</div>
<div class="line">[tex]\partial_0=\begin{pmatrix}a&b\end{pmatrix}[/tex]</div>
<div class="line">[tex]\partial_1=\begin{pmatrix}a&0&bab\\0&b&aba\end{pmatrix}[/tex]</div>
<div class="line">[tex]\partial_2=\begin{pmatrix}a&0&bab&0\\0&b&0&aba\\0&0&a&b\end{pmatrix}[/tex]</div>
</div>
</div>
<p>Now, armed with this, we can get cracking. By lifting, we get canonical
representing chain maps for x,y,z described, loosely, by the following:</p>
<div class="line-block">
<div class="line">x takes an element in [tex]\Lambda^{2k}[/tex], keeps the first,
third, etc. elements and throws out the even-indexed elements; so
[tex]x\cdot(a_1,a_2,a_3,a_4,a_5,a_6)=(a_1,0,a_3,0,a_5)[/tex]</div>
<div class="line-block">
<div class="line">For an element in [tex]\Lambda^{2k+1}[/tex], the last element gets
extra treatment, so</div>
<div class="line">[tex]x\cdot(a_1,a_2,a_3,a_4,a_5,a_6,a_7)=(a_1,0,a_3,0,a_5,ab\cdot
a_7)[/tex]</div>
<div class="line">For the lowest degrees, we also have</div>
<div class="line">[tex]x_0 = \begin{pmatrix}1&0\end{pmatrix}[/tex]</div>
<div class="line">[tex]x_1 = \begin{pmatrix}1&0&0\\0&0&ab\end{pmatrix}[/tex]</div>
<div class="line">[tex]x_2 = \begin{pmatrix}1&0&0&0\\0&0&1&0\end{pmatrix}[/tex]</div>
</div>
</div>
<p>This describes the image [tex]f_1(x)[/tex] of our quasi-isomorphism
[tex]\Gamma\to\Hom_\Lambda(P,P)[/tex].</p>
<div class="line-block">
<div class="line">For [tex]f_1(y)[/tex], we get the interleaved effect: every second
element, but now with even indices, and some extra treatments at the
end. So an element in [tex]\Lambda^{2i}[/tex] will behave like</div>
<div class="line">[tex]y\cdot(a_1,a_2,a_3,a_4,a_5,a_6)=(0,a_2,0,a_4,a_6)[/tex]</div>
<div class="line-block">
<div class="line">and for an element in [tex]\Lambda^{2i+1}[/tex] we get</div>
<div class="line">[tex]y\cdot(a_1,a_2,a_3,a_4,a_5,a_6,a_7)=(0,a_2,0,a_4,ba\cdot
a_7,a_6)[/tex]</div>
</div>
</div>
<p>The last generator, z, is rather boring. The corresponding chain map
only lifts elements to other degrees: shaving off the first two
components of whatever it is applied to.</p>
<p>We define [tex]f_1[/tex] on products of the generators by simply
composing the corresponding chain maps as long as the product is
defined. The interesting stuff, from an A<sub>∞</sub> point of view occurs
when the product vanishes, thus for x,y in the first line. A calculation
shows us that xy is the only interesting element of the ideal
[tex](xy)[/tex], since [tex]f_1(x)f_1(x)f_1(y)f_1(y)=0[/tex], and we
define [tex]f_2(x^2,y)=f_1(x)f_2(x,y)[/tex] and
[tex]f_2(x,y^2)=f_2(x,y)f_1(y)[/tex].</p>
<p>Thus, we'd be curious as to what happens with xy, and yx. Both products
are zero in the cohomology ring; but the composition of the
corresponding chain maps are not zero.</p>
<div class="line-block">
<div class="line">By calculation, we get [tex]f_1(x)f_1(y)[/tex] given in the first
few step by the matrices</div>
<div class="line-block">
<div class="line">[tex]\begin{pmatrix}0&0&ba\end{pmatrix}[/tex]</div>
<div class="line">[tex]\begin{pmatrix}0&0&0&0\\0&0&0&ab\end{pmatrix}[/tex]</div>
<div class="line">and so on, with the lower right entry alternatingly ab and ba.</div>
</div>
</div>
<p>[tex]f_1(y)f_1(x)[/tex] is the same thing, but with ab and ba
interchanged.</p>
<p>We want [tex]f_2(x,y)[/tex] and [tex]f_2(y,x)[/tex] to be homotopies
between the 0 chain maps, and these two respectively. By juggling the
relevant matrices in Magma for a while, I conclude that
[tex]f_2(x,y)[/tex] has lower right entry a for all odd degree map
components, and [tex]f_2(y,x)[/tex] has the entry above that b, same
components. All even components vanish.</p>
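<p>For concreteness, the condition being arranged here (degree bookkeeping suppressed, and everything over F<sub>2</sub>, so signs vanish): f<sub>2</sub>(x,y) is a homotopy from the zero chain map to f<sub>1</sub>(x)f<sub>1</sub>(y), that is</p>

```latex
% homotopy condition in characteristic 2, with \partial the differential of P
f_1(x)\,f_1(y) = \partial \circ f_2(x,y) + f_2(x,y) \circ \partial,
% and correspondingly
f_1(y)\,f_1(x) = \partial \circ f_2(y,x) + f_2(y,x) \circ \partial.
```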
<div class="line-block">
<div class="line">Thus, for the first nonzero component, we have</div>
<div class="line-block">
<div class="line">[tex]\begin{pmatrix}0&0&0\\0&0&a\end{pmatrix}[/tex]</div>
<div class="line">for x,y and</div>
<div class="line">[tex]\begin{pmatrix}0&0&b\\0&0&0\end{pmatrix}[/tex]</div>
<div class="line">for y,x.</div>
</div>
</div>
<div class="line-block">
<div class="line">Now, if we look at Φ<sub>3</sub>, most of the possibilities vanish
because of the way we defined f<sub>2</sub> for the cases other than xy, yx.
Thus, the only interesting entries remaining are xyx and yxy. These
are, respectively,</div>
<div class="line-block">
<div class="line">[tex]\Phi_3(x,y,x)=f_1(x)f_2(y,x)+f_2(x,y)f_1(x)[/tex]</div>
<div class="line">[tex]\Phi_3(y,x,y)=f_1(y)f_2(x,y)+f_2(y,x)f_1(y)[/tex]</div>
<div class="line">and calculation of these expressions is a matter of composing the
chain maps we have a tentative grasp of already. This gives us the
case of xyx alternatingly between b in lower right corner for all even
degrees and a just to the left of it for odd degrees, thus starting
with the matrices</div>
<div class="line">[tex]\begin{pmatrix}0&0&b\end{pmatrix}[/tex]</div>
<div class="line">[tex]\begin{pmatrix}0&0&0\\0&a&0\end{pmatrix}[/tex]</div>
<div class="line">and for the case yxy, we have a similar pattern, but with a in the
lower right for odd degrees and b above it for even degrees, giving
the first two maps as</div>
<div class="line">[tex]\begin{pmatrix}0&0&a\end{pmatrix}[/tex]</div>
<div class="line">[tex]\begin{pmatrix}0&0&b\\0&0&0\end{pmatrix}[/tex]</div>
</div>
</div>
<p>So, we can immediately conclude that our m<sub>3</sub> is going to be the
zero element of Γ, since none of these chain maps are homotopic to any
non-trivial coclass representatives. Thus we'll need to find homotopies
from these to the zero chain maps for our values of f<sub>3</sub>.</p>
<div class="line-block">
<div class="line">A homotopy suitable for f<sub>3</sub>(x,y,x) would be h with</div>
<div class="line-block">
<div class="line">[tex]h_0=\begin{pmatrix}0&0\end{pmatrix}[/tex]</div>
<div class="line">[tex]h_1=\begin{pmatrix}0&0&0\\0&0&1\end{pmatrix}[/tex]</div>
<div class="line">[tex]h_2=\begin{pmatrix}0&0&0&0\\0&0&0&1\\0&0&0&0\end{pmatrix}[/tex]</div>
<div class="line">[tex]h_3=\begin{pmatrix}0&0&0&0&0\\0&0&0&1&0\\0&0&0&0&0\\0&0&0&0&1\end{pmatrix}[/tex]</div>
<div class="line">[tex]h_4=\begin{pmatrix}0&0&0&0&0&0\\0&0&0&1&0&0\\0&0&0&0&0&0\\0&0&0&0&0&1\\0&0&0&0&0&0\end{pmatrix}[/tex]</div>
<div class="line">It goes on, but I have not yet discerned a pattern clear enough to
describe a generic element of this chain map. I probably should, at
some point.</div>
</div>
</div>
<div class="line-block">
<div class="line">For f<sub>3</sub>(y,x,y), we would need a homotopy for the other
sequence, and calculations lead me to put down h with</div>
<div class="line-block">
<div class="line">[tex]h_0=\begin{pmatrix}0&0\end{pmatrix}[/tex]</div>
<div class="line">[tex]h_1=\begin{pmatrix}0&0&1\\0&0&0\end{pmatrix}[/tex]</div>
<div class="line">[tex]h_2=\begin{pmatrix}0&0&1&0\\0&0&0&0\\0&0&0&0\end{pmatrix}[/tex]</div>
<div class="line">[tex]h_3=\begin{pmatrix}0&0&1&0&0\\0&0&0&0&0\\0&0&0&0&1\\0&0&0&0&0\end{pmatrix}[/tex]</div>
<div class="line">[tex]h_4=\begin{pmatrix}0&0&1&0&0&0\\0&0&0&0&0&0\\0&0&0&0&1&0\\0&0&0&0&0&0\\0&0&0&0&0&0\end{pmatrix}[/tex]</div>
<div class="line">and here we can discern a pattern to the matrices. They will have a
sequence of 1 starting out at the third column, and going down
skipping every second place.</div>
</div>
</div>
<p>Armed with these calculations, we may set out to calculate for our
pleasure Φ<sub>4</sub>. Due to the heuristic we use in defining the
f<sub>i</sub> for things "lifted" from the generators we've defined above,
we shall discard anything except for xyxy and yxyx from study; all other
cases will just give the relations we use in calculating f<sub>2</sub> or
f<sub>3</sub> from the cases we've calculated.</p>
<div class="line-block">
<div class="line">Thus, we'll be interested in</div>
<div class="line">[tex]\Phi_4(x,y,x,y)=f_1(x)f_3(y,x,y)+f_2(x,y)f_2(x,y)+f_3(x,y,x)f_1(y)[/tex]</div>
<div class="line-block">
<div class="line">and</div>
<div class="line">[tex]\Phi_4(y,x,y,x)=f_1(y)f_3(x,y,x)+f_2(y,x)f_2(y,x)+f_3(y,x,y)f_1(x)[/tex]</div>
<div class="line">and thus we can, by using the already calculated matrices and a CAS
(I use Magma right now) just calculate the matrices. In both these
cases we get the same thing: the matrices</div>
<div class="line">[tex]\begin{pmatrix}0&0&1\end{pmatrix}[/tex]</div>
<div class="line">[tex]\begin{pmatrix}0&0&1&0\\0&0&0&1\end{pmatrix}[/tex]</div>
<div class="line">[tex]\begin{pmatrix}0&0&1&0&0\\0&0&0&1&0\\0&0&0&0&1\end{pmatrix}[/tex]</div>
<div class="line">and so on; which are precisely the matrices we chose for our
[tex]f_1(z)[/tex].</div>
</div>
</div>
<p>Thus, we'll set m<sub>4</sub>(x,y,x,y)=m<sub>4</sub>(y,x,y,x)=z and
f<sub>4</sub>=0, and stop calculating right here.</p>
A∞-algebras and group cohomology2006-11-23T15:48:00+01:00Michitag:blog.mikael.johanssons.org,2006-11-23:a-algebras-and-group-cohomology.html<p>In which the author, after a long session sweating blood with his
advisor, manages to calculate the A<sub>∞</sub>-structures on the
cohomology algebras [tex]H^*(C_4,\mathbb F_2)[/tex] and
[tex]H^*(C_2\times C_2,\mathbb F_2)[/tex].</p>
<p>We will find the A<sub>∞</sub>-structures on the group cohomology ring by
establishing an A<sub>∞</sub>-quasi-isomorphism to the endomorphism
dg-algebra of a resolution of the base field. We'll write m<sub>i</sub>
for operations on the group cohomology, and μ<sub>i</sub> for operations
on the endomorphism dg-algebra. The endomorphism dg-algebra has
μ<sub>1</sub>=d and μ<sub>2</sub>=composition of maps, and all higher
operations vanishing, in all our cases.</p>
<div class="section" id="elementary-abelian-2-group">
<h2>Elementary abelian 2-group</h2>
<p>Let's start with the easy case. Following, to a certain extent, the
notation used in Dag Madsen's PhD thesis appendix (the Canonical Source of the
A<sub>∞</sub>-structures of cyclic group cohomology algebras), and the
recipe given in <a class="reference external" href="http://citeseer.ist.psu.edu/keller01infinity.html">A-infinity algebras in representation
theory</a>, we may
start by stating what we know at the outset:</p>
<div class="line-block">
<div class="line">[tex]\Lambda = \mathbb F_2(C_2\times C_2) = \mathbb
F_2[a,b]/(a^2,b^2)[/tex] our group algebra. We can resolve
[tex]\mathbb F_2[/tex] using a neat canonical resolution, derived
from the tensor products of the resolutions of the field with modules
over the cyclic 2-group. This gives us the resolution</div>
<div class="line-block">
<div class="line">[tex]</div>
<div class="line">\begin{diagram}</div>
<div class="line">P : & \dots & \rTo & \Lambda^3 & \rTo & \Lambda^2 & \rTo &
\Lambda^1 & \rTo & \mathbb F_2 & \rTo & 0</div>
<div class="line">\end{diagram}</div>
<div class="line">[/tex]</div>
<div class="line">where each differential is given by a matrix D with D<sub>i,i</sub>=a
and D<sub>i+1,i</sub>=b.</div>
</div>
</div>
<p>Recall that [tex]H^*(G,k)=H^*(\Hom_{kG}(P,P))[/tex]. Thus
[tex]\Hom_\Lambda(P,P)[/tex] is a dg-algebra, whose homology is
precisely the group cohomology.</p>
<p>Now, by the minimality theorem (proven by Kadeishvili first, and
reproven by a veritable host of mathematicians), there is a
quasi-isomorphism of A<sub>∞</sub>-algebras [tex]H^*A \to A[/tex] that
lifts the identity in homology. This we can use to figure out the
A<sub>∞</sub>-structure for our cohomology ring: we know that the
[tex]\Hom_\Lambda(P,P)[/tex] is an honest-to-god dg-algebra, and
thus has an A<sub>∞</sub>-structure with all higher multiplications (by
which I mean 3-ary and higher) vanishing. We can also pick
representatives for our cohomology ring elements as representatives of
homotopy classes of chain maps [tex]P_{.}\to P_{.}[/tex]. This gives
us a quasi-isomorphism of dg-algebras [tex]H^*A\to A[/tex], lifting
the identity, and which we can augment to an
A<sub>∞</sub>-quasi-isomorphism.</p>
<p>Which is what we'll want to do now.</p>
<p>We'll (at this stage) use the fact that we know what
[tex]H^*(C_2\times C_2,\mathbb F_2)[/tex] looks like: it's the
algebra [tex]\mathbb F_2[x,y][/tex]. There are two 1-coclasses, both
represented by a morphism [tex]\Lambda^2\to\mathbb F_2[/tex], namely
one composing the projection onto the first factor with the augmentation
map, and one composing the projection onto the second factor. We'll name
the first of these x, and the second y, and note that they lift to chain
maps [tex]P_{.}\to P_{.}[/tex] that shave off the first and last
summand of [tex]\Lambda^i[/tex] respectively in each degree.</p>
<div class="line-block">
<div class="line">So for our quasi-isomorphism [tex]f_{.}[/tex], we now have
[tex]f_1[/tex] sending each [tex]x^iy^j[/tex] to the chain map
shaving off the first i and last j components. We also know
[tex]m_1,m_2,\mu_1,\mu_2[/tex], namely 0, multiplication,
differential and composition respectively. The first axiom we'll
investigate for A<sub>∞</sub>-maps states that</div>
<div class="line-block">
<div class="line">[tex]</div>
<div class="line">f_1(m_1\otimes
m_1)+f_1(m_2)+f_2(m_1)+\mu_1(f_2)+\mu_2(f_1\otimes
f_1)=0[/tex]</div>
<div class="line">But now m<sub>1</sub>=0 and so this reduces to</div>
<div class="line">[tex]f_1(m_2)+\mu_1(f_2)+\mu_2(f_1\otimes f_1)=0[/tex]</div>
<div class="line">so f<sub>2</sub> is a map such that its differential is equal to the
"commutator" of f<sub>1</sub> and multiplication.</div>
</div>
</div>
<p>Now, pick some coclasses u,v. These will map to shaving maps as
described above, and their product will map to a shaving map that does
just the same as the composition of the individual shaving maps; so if u
is x<sup>i</sup>y<sup>j</sup>, and v is x<sup>k</sup>y<sup>l</sup>, then
f<sub>1</sub>(u) shaves off i components in the front and j components
in the back, and f<sub>1</sub>(v) shaves off k in the front and l in the
back. So, the composition of these two maps is the map that drops i+k
components from the front, and j+l components from the back. On the
other hand f<sub>1</sub>(uv)=f:sub:<cite>1</cite>(x:sup:<cite>i+k</cite>y<sup>j+l</sup>) is
the map that drops i+k components from the front, and j+l components
from the back.</p>
<p>So they are the same. And thus we don't need to bother with any
homotopies, or any higher order operations or higher order maps. We set
f<sub>2</sub> to be the zero map, and consider ourselves finished and happy.
The cohomology is a dg-algebra in its own right, and this is all there
is to it in A<sub>∞</sub>-terms. And we're done.</p>
<p>This result implies, by the way, via a proposition from Keller, that the
elementary abelian 2-groups have Koszul group algebras. An argument
using restrictions of non-nilpotent coclasses to cyclic subgroups will
tell you that these are the only finite groups that have Koszul group
algebras.</p>
<p>So much for this example.</p>
</div>
<div class="section" id="cyclic-4-group">
<h2>Cyclic 4-group</h2>
<p>This is the one canonical example known beforehand in group cohomology.
It was calculated by Dag Madsen in his PhD-thesis, and cited ever since.
I will perform the same calculation, but in blinding detail you won't
find in a thesis or a paper on the subject.</p>
<div class="line-block">
<div class="line">So, for starters, we find ourself with the group algebra</div>
<div class="line-block">
<div class="line">[tex]\Lambda=\mathbb F_2C_4=\mathbb F_2[x]/(x^4)[/tex]</div>
<div class="line">and the cohomology ring</div>
<div class="line">[tex]\Gamma=\mathbb F_2[\xi,\eta]/(\xi^2)[/tex]</div>
<div class="line">quasi-isomorphic to the endomorphism dg-algebra of a neat resolution</div>
<div class="line">[tex]</div>
<div class="line">P:&\dots&\Lambda&\rTo^{x}&\Lambda&\rTo^{x^3}&\Lambda&\rTo^{x}&\mathbb
F_2&\rTo&0[/tex]</div>
<div class="line">where c in Γ represent the morphism m→cm.</div>
<div class="line">Furthermore, as usual, the μ<sub>i</sub> are all known, and
m<sub>1</sub>=0, and m<sub>2</sub> is the multiplication in Γ.</div>
</div>
</div>
<p>Once we've chosen representatives for the coclasses in
[tex]\Hom_\Lambda(P,P)[/tex], we can lift this choice to an
A<sub>∞</sub>-quasi-isomorphism. In this process, we'll find and define
the relevant higher multiplications for Γ, thus finding the
A<sub>∞</sub>-structure for that algebra.</p>
<div class="line-block">
<div class="line">Thus f<sub>1</sub> sends</div>
<div class="line-block">
<div class="line">[tex]1\mapsto\begin{diagram}</div>
<div class="line">\cdots&\rTo^x&\Lambda&\rTo^{x^3}&\Lambda&\rTo^x&\Lambda&\rTo^{x^3}&\cdots\\</div>
<div class="line">&&\dEq&&\dEq&&\dEq\\</div>
<div class="line">\cdots&\rTo^x&\Lambda&\rTo^{x^3}&\Lambda&\rTo^x&\Lambda&\rTo^{x^3}&\cdots\\</div>
<div class="line">\end{diagram}[/tex]</div>
<div class="line">and</div>
<div class="line">[tex]\xi\mapsto\begin{diagram}</div>
<div class="line">\cdots&\rTo^x&\Lambda&\rTo^{x^3}&\Lambda&\rTo^x&\Lambda&\rTo^{x^3}&\cdots\\</div>
<div class="line">&\rdTo^{x^2}&&\rdTo^{1}&&\rdTo^{x^2}&&\rdTo^{1}\\</div>
<div class="line">\cdots&\rTo^x&\Lambda&\rTo^{x^3}&\Lambda&\rTo^x&\Lambda&\rTo^{x^3}&\cdots\\</div>
<div class="line">\end{diagram}[/tex]</div>
<div class="line">and</div>
<div class="line">[tex]\eta\mapsto\begin{diagram}</div>
<div class="line">\cdots&\rTo^x&\Lambda&\rTo^{x^3}&\Lambda&\rTo^x&\Lambda&\rTo^{x^3}&\cdots\\</div>
<div class="line">&\rdTo(4,2)^{1}&&\rdTo(4,2)^{1}&&\rdTo(4,2)^{1}&&\\</div>
<div class="line">\cdots&\rTo_x&\Lambda&\rTo_{x^3}&\Lambda&\rTo_x&\Lambda&\rTo_{x^3}&\cdots\\</div>
<div class="line">\end{diagram}[/tex]</div>
</div>
</div>
<div class="line-block">
<div class="line">Working through the coherence axioms, the first we encounter is the
one defining [tex]f_2[/tex]. This is the one investigating the
relationship between</div>
<div class="line-block">
<div class="line">[tex]m_2(f_1\otimes f_1)[/tex] and [tex]f_1(m_2)[/tex]. So, we
pick [tex]u,v\in H^*(C_8,\mathbb F_2)[/tex], and investigate the
two expressions [tex]f_1(u)f_1(v)[/tex] and [tex]f_1(uv)[/tex].
Writing it down in detail tells us that the only point where a
difference occurs is if both u and v are odd, and this case we can
figure out from the case where u=v=ξ since η only translates chainmaps
higher up in degree.</div>
</div>
</div>
<div class="line-block">
<div class="line">Now, [tex]\xi^2=0[/tex], so [tex]f_1(\xi^2)=0[/tex].</div>
<div class="line-block">
<div class="line">And [tex]f_1(\xi)f_1(\xi)[/tex] we can read off the following
diagram</div>
<div class="line">[tex]\begin{diagram}</div>
<div class="line">\cdots&\rTo^x&\Lambda&\rTo^{x^3}&\Lambda&\rTo^x&\Lambda&\rTo^{x^3}&\cdots\\</div>
<div class="line">&\rdTo^{x^2}&&\rdTo^{1}&&\rdTo^{x^2}&&\rdTo^{1}\\</div>
<div class="line">\cdots&\rTo^x&\Lambda&\rTo^{x^3}&\Lambda&\rTo^x&\Lambda&\rTo^{x^3}&\cdots\\</div>
<div class="line">&\rdTo^{x^2}&&\rdTo^{1}&&\rdTo^{x^2}&&\rdTo^{1}\\</div>
<div class="line">\cdots&\rTo^x&\Lambda&\rTo^{x^3}&\Lambda&\rTo^x&\Lambda&\rTo^{x^3}&\cdots\\</div>
<div class="line">\end{diagram}[/tex]</div>
<div class="line">and thus we can conclude that [tex]f_1(\xi)f_1(\xi)=\cdot
x^2[/tex] in degree 2. We shall be juggling maps a lot, so I will use
the notation (x y z w)[i] for the map in
[tex]\Hom_\Lambda(P,P)[/tex] that drops i degrees, and where the
four last positions are multiplication by x, y, z and w respectively.
So, with this notation, we have</div>
<div class="line">f<sub>1</sub>(η)=(1 1 1 1)[2]</div>
<div class="line">f<sub>1</sub>(ξ)=(1 x<sup>2</sup> 1 x<sup>2</sup>)[1]</div>
<div class="line">f<sub>1</sub>(ξ)f:sub:<cite>1</cite>(ξ) = (x:sup:<cite>2</cite> x<sup>2</sup> x<sup>2</sup>
x<sup>2</sup>)[2]</div>
</div>
</div>
<p>Composing the lowest degree component of the map (x<sup>2</sup> x<sup>2</sup>
x<sup>2</sup> x<sup>2</sup>)[2] with the augmentation map, we see that in
cohomology, it corresponds to the 0 element. So it is actually homotopic
to the image of [tex]\xi^2[/tex] under f<sub>1</sub>, and this particular
homotopy is what we'll want [tex]f_2(\xi,\xi)[/tex] to be.</p>
<div class="line-block">
<div class="line">So we'll want a homotopy h, defined by that</div>
<div class="line-block">
<div class="line">dh+hd = (x:sup:<cite>2</cite> x<sup>2</sup> x<sup>2</sup> x<sup>2</sup>)[2]</div>
<div class="line">so we can immediately conclude that h needs to be of degree 1, since
composition with d will add another degree step. So our h will look
something like</div>
<div class="line">[tex]\begin{diagram}</div>
<div class="line">\cdots&\rTo^x&\Lambda&\rTo^{x^3}&\Lambda&\rTo^x&\Lambda&\rTo^{x^3}&\cdots\\</div>
<div class="line">&\rdTo^{h}&&\rdTo^{h}&&\rdTo^{h}&&\rdTo^{h}\\</div>
<div class="line">\cdots&\rTo^x&\Lambda&\rTo^{x^3}&\Lambda&\rTo^x&\Lambda&\rTo^{x^3}&\cdots</div>
<div class="line">\end{diagram}[/tex]</div>
<div class="line">and we can check what happens as we chase through the diagram. If we
start at an odd-indexed position, we'll get the two components</div>
<div class="line">hd(o) = h(x o) = x h(o)</div>
<div class="line">dh(o) = x<sup>3</sup> h(o)</div>
<div class="line">for o of odd degree. If we instead start in an even degree, we get</div>
<div class="line">hd(e) = h(x<sup>3</sup>e) = x<sup>3</sup>h(e)</div>
<div class="line">dh(e) = x h(e)</div>
<div class="line">where e is an element of even degree. By close inspection in the
diagram, we note that the expressions involving x<sup>3</sup> all
involve h applied on elements of odd degree, and so we'll set those to
vanish, and fill in the needed values by letting h(e)=x. Thus we get
the chain map</div>
<div class="line">h = (x 0 x 0) [1]</div>
<div class="line">or in diagrammatic form</div>
<div class="line">[tex]\begin{diagram}</div>
<div class="line">\cdots&\rTo^x&\Lambda&\rTo^{x^3}&\Lambda&\rTo^x&\Lambda&\rTo^{x^3}&\cdots\\</div>
<div class="line">&\rdTo^{0}&&\rdTo^{x}&&\rdTo^{0}&&\rdTo^{x}\\</div>
<div class="line">\cdots&\rTo^x&\Lambda&\rTo^{x^3}&\Lambda&\rTo^x&\Lambda&\rTo^{x^3}&\cdots</div>
<div class="line">\end{diagram}[/tex]</div>
</div>
</div>
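<p>This identity is small enough to verify by brute force. The following sketch (Python, my own encoding, not notation from the post) models Λ = F<sub>2</sub>[x]/(x<sup>4</sup>) as bit-tuples of coefficients, lets the differential multiply by x out of odd positions and by x<sup>3</sup> out of even ones, and checks that dh+hd is multiplication by x<sup>2</sup> in both parities and on every element.</p>

```python
from itertools import product

def mul(a, b):
    # product in F2[x]/(x^4); polynomials as 4-tuples of bits
    c = [0] * 4
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < 4:
                c[i + j] ^= ai & bj
    return tuple(c)

X = (0, 1, 0, 0)
X2, X3 = mul(X, X), mul(X, mul(X, X))
ZERO = (0, 0, 0, 0)

def d(n, e):
    # differential P_n -> P_{n-1}: x out of odd positions, x^3 out of even
    return mul(X if n % 2 else X3, e)

def h(n, e):
    # the candidate homotopy (x 0 x 0)[1]: multiply by x on even positions
    return mul(X, e) if n % 2 == 0 else ZERO

ok = all(
    tuple(a ^ b for a, b in zip(d(n - 1, h(n, e)), h(n - 1, d(n, e))))
    == mul(X2, e)
    for n in (0, 1) for e in product((0, 1), repeat=4)
)
print(ok)  # True: dh + hd is multiplication by x^2 in every degree
```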
<p>Thus [tex]f_2(\xi,\xi)=(x\; 0\; x\; 0)[1][/tex], and f<sub>2</sub>
on pairs of odd elements is given by translates of this, while all other
arguments to f<sub>2</sub> give us a zero map. This brings us to a point
where we can start investigating m<sub>3</sub>.</p>
<p>We can extract f<sub>1</sub>m<sub>3</sub> and m<sub>1</sub>f<sub>3</sub>
from the 3rd A<sub>∞</sub>-morphism axiom, and put the rest into a map of
its own. This will turn out to be homotopic to
the image of m<sub>3</sub>, and so we can define m<sub>3</sub> and the
homotopy once we have computed it.</p>
<div class="line-block">
<div class="line">The rest of the axiom is</div>
<div class="line-block">
<div class="line">[tex]</div>
<div class="line">\Phi_3=m_2(f_1\otimes f_2+f_2\otimes f_1)+f_2(1\otimes
m_2+m_2\otimes 1)[/tex]</div>
<div class="line">(where I am using the fact that we're in characteristic 2
extensively)</div>
<div class="line">Applying this on elements x,y,z gives us a possibility to distinguish
between cases.</div>
</div>
</div>
<p>If only one element is of odd degree, then every f<sub>2</sub> occurring
will have at least one even argument, and so will vanish.</p>
<div class="line-block">
<div class="line">If two elements are of odd degree, we get the expressions</div>
<div class="line-block">
<div class="line">[tex]f_2(x,y)f_1(z)+f_2(x,yz)[/tex] for x,y odd</div>
<div class="line">[tex]f_2(x,yz)+f_2(xy,z)[/tex] for x,z odd</div>
<div class="line">[tex]f_1(x)f_2(y,z)+f_2(xy,z)[/tex] for y,z odd.</div>
<div class="line">Each of these vanish if we take the behaviour of f<sub>2</sub> for
higher odd coclasses into account: these are just translates of the
behaviour defined in f<sub>2</sub>(ξ,ξ), and so should vanish, since
we defined f<sub>2</sub> on pairs of odd classes to just be translates
of the value on ξ and ξ.</div>
</div>
</div>
<p>There remains the case where all three elements have odd degree. Again,
higher odd elements just translate the behaviour of ξ, and so it is
enough to study the behaviour on ξ, ξ, ξ.</p>
<div class="line-block">
<div class="line">In this case, we get</div>
<div class="line">[tex]\Phi_3(\xi,\xi,\xi)=f_1(\xi)f_2(\xi,\xi)+f_2(\xi,\xi)f_1(\xi)[/tex]</div>
<div class="line-block">
<div class="line">which is the sum of the maps given in the following two diagrams</div>
<div class="line">[tex]\begin{diagram}</div>
<div class="line">\cdots&\rTo^x&\Lambda&\rTo^{x^3}&\Lambda&\rTo^x&\Lambda&\rTo^{x^3}&\cdots\\</div>
<div class="line">&\rdTo^{0}&&\rdTo^{x}&&\rdTo^{0}&&\rdTo^{x}\\</div>
<div class="line">\cdots&\rTo^x&\Lambda&\rTo^{x^3}&\Lambda&\rTo^x&\Lambda&\rTo^{x^3}&\cdots\\</div>
<div class="line">&\rdTo^{x^2}&&\rdTo^{1}&&\rdTo^{x^2}&&\rdTo^{1}\\</div>
<div class="line">\cdots&\rTo^x&\Lambda&\rTo^{x^3}&\Lambda&\rTo^x&\Lambda&\rTo^{x^3}&\cdots\\</div>
<div class="line">\end{diagram}[/tex]</div>
<div class="line">and</div>
<div class="line">[tex]\begin{diagram}</div>
<div class="line">\cdots&\rTo^x&\Lambda&\rTo^{x^3}&\Lambda&\rTo^x&\Lambda&\rTo^{x^3}&\cdots\\</div>
<div class="line">&\rdTo^{x^2}&&\rdTo^{1}&&\rdTo^{x^2}&&\rdTo^{1}\\</div>
<div class="line">\cdots&\rTo^x&\Lambda&\rTo^{x^3}&\Lambda&\rTo^x&\Lambda&\rTo^{x^3}&\cdots\\</div>
<div class="line">&\rdTo^{0}&&\rdTo^{x}&&\rdTo^{0}&&\rdTo^{x}\\</div>
<div class="line">\cdots&\rTo^x&\Lambda&\rTo^{x^3}&\Lambda&\rTo^x&\Lambda&\rTo^{x^3}&\cdots\\</div>
<div class="line">\end{diagram}[/tex]</div>
</div>
</div>
<div class="line-block">
<div class="line">By reading off the diagrams, we note that this sum is the chain map</div>
<div class="line-block">
<div class="line">(x:sup:<cite>3</cite> x<sup>3</sup> x<sup>3</sup> x<sup>3</sup>)[2]</div>
<div class="line">and we can further note that this corresponds to the cocycle 0 of
degree 2 (since the augmentation composed with x<sup>3</sup> vanishes),
so we put m<sub>3</sub>=0, to correspond to what this is homotopic to.
And f<sub>3</sub>(ξ,ξ,ξ) needs to be mapped precisely to this
homotopy.</div>
</div>
</div>
<div class="line-block">
<div class="line">So, a homotopy h between 0 and (x:sup:<cite>3</cite> x<sup>3</sup> x<sup>3</sup>
x<sup>3</sup>)[2] is a map h with</div>
<div class="line-block">
<div class="line">dh+hd = (x:sup:<cite>3</cite> x<sup>3</sup> x<sup>3</sup> x<sup>3</sup>)[2]</div>
<div class="line">From the same considerations as above, we can conclude that h will
have the two components</div>
<div class="line">hd(o) = h(x o) = x h(o)</div>
<div class="line">dh(o) = x<sup>3</sup> h(o)</div>
<div class="line">for o of odd degree. If we instead start in an even degree, we get</div>
<div class="line">hd(e) = h(x<sup>3</sup>e) = x<sup>3</sup>h(e)</div>
<div class="line">dh(e) = x h(e)</div>
<div class="line">where e is an element of even degree.</div>
</div>
</div>
<div class="line-block">
<div class="line">Now, this time, since we want the x<sup>3</sup> to occur, we can pick h
to be 0 on even degree components and the identity on odd degree
components, giving us</div>
<div class="line-block">
<div class="line">h = (0 1 0 1)[1]</div>
<div class="line">and this being the image of f<sub>3</sub>(ξ,ξ,ξ).</div>
</div>
</div>
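<p>The same brute-force verification as before works here (Python, my own illustrative encoding): the map that is the identity on odd positions and zero on even ones does satisfy dh+hd = x<sup>3</sup> everywhere.</p>

```python
from itertools import product

def mul(a, b):
    # product in F2[x]/(x^4); polynomials as 4-tuples of bits
    c = [0] * 4
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < 4:
                c[i + j] ^= ai & bj
    return tuple(c)

X = (0, 1, 0, 0)
X3 = mul(X, mul(X, X))
ZERO = (0, 0, 0, 0)

def d(n, e):
    # differential: x out of odd positions, x^3 out of even ones
    return mul(X if n % 2 else X3, e)

def h(n, e):
    # the candidate homotopy (0 1 0 1)[1]: identity on odd positions
    return e if n % 2 else ZERO

ok = all(
    tuple(a ^ b for a, b in zip(d(n - 1, h(n, e)), h(n - 1, d(n, e))))
    == mul(X3, e)
    for n in (0, 1) for e in product((0, 1), repeat=4)
)
print(ok)  # True: dh + hd is multiplication by x^3 in every degree
```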
<div class="line-block">
<div class="line">Going ever onwards, the next axiom we check is the one, that when we
leave m<sub>1</sub>f<sub>4</sub> and f<sub>1</sub>m<sub>4</sub> out of the
mix, we'll get</div>
<div class="line-block">
<div class="line">[tex]\Phi_4=m_2(f_1\otimes f_3+f_2\otimes f_2+f_3\otimes
f_1) + f_2(1\otimes m_3+m_3\otimes 1) \\ +
f_3(1\otimes1\otimes m_2+1\otimes
m_2\otimes1+m_2\otimes1\otimes1)[/tex]</div>
<div class="line">and again, we can start looking at cases based on number of odd
arguments.</div>
</div>
</div>
<p>For only one odd argument, all the f<sub>2</sub> and f<sub>3</sub> will
vanish.</p>
<p>For two odd arguments, the same thing will happen.</p>
<div class="line-block">
<div class="line">For three odd arguments, we get the expressions</div>
<div class="line-block">
<div class="line">[tex]f_1(x)f_3(y,z,w)+f_3(xy,z,w)[/tex] for x even</div>
<div class="line">[tex]f_3(xy,z,w)+f_3(x,yz,w)[/tex] for y even</div>
<div class="line">[tex]f_3(x,yz,w)+f_3(x,y,zw)[/tex] for z even</div>
<div class="line">[tex]f_3(x,y,z)f_1(w)+f_3(x,y,zw)[/tex] for w even</div>
<div class="line">where the expressions vanish since all summands in each are the same
translations of [tex]f_3(\xi,\xi,\xi)[/tex].</div>
</div>
</div>
<div class="line-block">
<div class="line">And for [tex]\Phi_4(\xi,\xi,\xi,\xi)[/tex] we get</div>
<div class="line">[tex]f_1(\xi)f_3(\xi,\xi,\xi)+f_2(\xi,\xi)f_2(\xi,\xi)+f_3(\xi,\xi,\xi)f_1(\xi)[/tex]</div>
</div>
<div class="line-block">
<div class="line">Again, we calculate each summand by using a corresponding diagram, and
get the three diagrams</div>
<div class="line-block">
<div class="line">[tex]\begin{diagram}</div>
<div class="line">\cdots&\rTo^x&\Lambda&\rTo^{x^3}&\Lambda&\rTo^x&\Lambda&\rTo^{x^3}&\cdots\\</div>
<div class="line">&\rdTo^{1}&&\rdTo^{0}&&\rdTo^{1}&&\rdTo^{0}\\</div>
<div class="line">\cdots&\rTo^x&\Lambda&\rTo^{x^3}&\Lambda&\rTo^x&\Lambda&\rTo^{x^3}&\cdots\\</div>
<div class="line">&\rdTo^{x^2}&&\rdTo^{1}&&\rdTo^{x^2}&&\rdTo^{1}\\</div>
<div class="line">\cdots&\rTo^x&\Lambda&\rTo^{x^3}&\Lambda&\rTo^x&\Lambda&\rTo^{x^3}&\cdots\\</div>
<div class="line">\end{diagram}[/tex]</div>
<div class="line">for f<sub>1</sub>f<sub>3</sub> and</div>
<div class="line">[tex]\begin{diagram}</div>
<div class="line">\cdots&\rTo^x&\Lambda&\rTo^{x^3}&\Lambda&\rTo^x&\Lambda&\rTo^{x^3}&\cdots\\</div>
<div class="line">&\rdTo^{x^2}&&\rdTo^{1}&&\rdTo^{x^2}&&\rdTo^{1}\\</div>
<div class="line">\cdots&\rTo^x&\Lambda&\rTo^{x^3}&\Lambda&\rTo^x&\Lambda&\rTo^{x^3}&\cdots\\</div>
<div class="line">&\rdTo^{1}&&\rdTo^{0}&&\rdTo^{1}&&\rdTo^{0}\\</div>
<div class="line">\cdots&\rTo^x&\Lambda&\rTo^{x^3}&\Lambda&\rTo^x&\Lambda&\rTo^{x^3}&\cdots\\</div>
<div class="line">\end{diagram}[/tex]</div>
<div class="line">for f<sub>3</sub>f<sub>1</sub> and</div>
<div class="line">[tex]\begin{diagram}</div>
<div class="line">\cdots&\rTo^x&\Lambda&\rTo^{x^3}&\Lambda&\rTo^x&\Lambda&\rTo^{x^3}&\cdots\\</div>
<div class="line">&\rdTo^{0}&&\rdTo^{x}&&\rdTo^{0}&&\rdTo^{x}\\</div>
<div class="line">\cdots&\rTo^x&\Lambda&\rTo^{x^3}&\Lambda&\rTo^x&\Lambda&\rTo^{x^3}&\cdots\\</div>
<div class="line">&\rdTo^{0}&&\rdTo^{x}&&\rdTo^{0}&&\rdTo^{x}\\</div>
<div class="line">\cdots&\rTo^x&\Lambda&\rTo^{x^3}&\Lambda&\rTo^x&\Lambda&\rTo^{x^3}&\cdots\\</div>
<div class="line">\end{diagram}[/tex]</div>
<div class="line">for f<sub>2</sub>f<sub>2</sub></div>
</div>
</div>
<div class="line-block">
<div class="line">Thus we can read off that f<sub>2</sub>f<sub>2</sub> vanishes, and that
the complete expression for [tex]\Phi_4(\xi,\xi,\xi,\xi)[/tex]
is the one we'd write down as</div>
<div class="line-block">
<div class="line">(1 1 1 1)[2]</div>
<div class="line">which we recognize as [tex]f_1(\eta)[/tex], so we set
[tex]m_4(\xi,\xi,\xi,\xi)=\eta[/tex] and [tex]f_i=0[/tex] for [tex]i\geq4[/tex].</div>
</div>
</div>
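<p>Since every map in sight has period-two components, the whole Φ<sub>4</sub> computation can be replayed mechanically. In this sketch (Python, my own encoding, not notation from the post) a map (a b a b)[k] is stored as its even component, its odd component and its degree drop; composition shifts the outer map's parity by the inner map's drop.</p>

```python
def mul(a, b):
    # product in F2[x]/(x^4); polynomials as 4-tuples of bits
    c = [0] * 4
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < 4:
                c[i + j] ^= ai & bj
    return tuple(c)

ONE, X, ZERO = (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 0, 0)
X2 = mul(X, X)

def compose(f, g):
    # apply g, then f; f is evaluated at positions shifted down by g's drop
    (fe, fo, fd), (ge, go, gd) = f, g
    if gd % 2 == 0:
        return (mul(fe, ge), mul(fo, go), fd + gd)
    return (mul(fo, ge), mul(fe, go), fd + gd)

def add(f, g):
    # pointwise sum; we're in characteristic 2
    assert f[2] == g[2]
    return (tuple(a ^ b for a, b in zip(f[0], g[0])),
            tuple(a ^ b for a, b in zip(f[1], g[1])), f[2])

f1_xi  = (ONE, X2, 1)    # f1(xi)       = (1 x^2 1 x^2)[1]
f1_eta = (ONE, ONE, 2)   # f1(eta)      = (1 1 1 1)[2]
f2_xx  = (X, ZERO, 1)    # f2(xi,xi)    = (x 0 x 0)[1]
f3_xxx = (ZERO, ONE, 1)  # f3(xi,xi,xi) = (0 1 0 1)[1]

phi4 = add(add(compose(f1_xi, f3_xxx), compose(f2_xx, f2_xx)),
           compose(f3_xxx, f1_xi))
print(phi4 == f1_eta)  # True: Phi_4(xi,xi,xi,xi) = f1(eta)
```

<p>The check confirms both that the f<sub>2</sub>f<sub>2</sub> term vanishes and that the sum is (1 1 1 1)[2], the image of η.</p>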
<p>And this concludes the calculation of the A<sub>∞</sub>-structure on
[tex]H^*(C_4,\mathbb F_2)[/tex], and also gives a rather clear hint
as to how to do it for [tex]H^*(C_{2^i},\mathbb F_2)[/tex] in
general.</p>
</div>
Why Blog?2006-11-16T16:38:00+01:00Michitag:blog.mikael.johanssons.org,2006-11-16:why-blog.html<p>The Community College Dean has written about <a class="reference external" href="http://suburbdad.blogspot.com/2006/11/why-blog.html">why he
blogs</a>, and asks
any and all readers to tack on to his effort.</p>
<p>My blog is not very anonymous. It is occasionally personal, occasionally
political, and throughout a venting location for thoughts, and a place
where I formulate myself in greater detail - a scratchpad, so to speak,
but public enough for me to allow others to read it.</p>
<p>I write it to formulate my own thoughts further, find possible errors,
start discussions, or just jot down the viewpoints that illuminated some
point of some argument for me. I do it in public because I thoroughly
enjoy the conversations it sparks.</p>
<p>I can see the charm of anonymity, but I am way too much of an attention
junkie to be able to stay anonymous for very long. This has come up in
my career - thoughts about job applications in process, published on
this blog, and even more on my livejournal, sparked questions at the
hiring interview with the potential (and now current) employer about my
writings and what influence the ideas formulated there would have on my
actual stance toward my current job.</p>
A∞ for the layman2006-11-07T16:18:00+01:00Michitag:blog.mikael.johanssons.org,2006-11-07:a-for-the-layman.html<p>I recently had reason to describe my PhD work in complete layman's terms
while writing letters to my grandparents. This being a good thing to do
in order to digest your ideas properly, I thought I might try and write
it up here as well.</p>
<p>It will, however, push through some 100-odd years of mathematical
development rather swiftly. Try to keep up - I will keep it as light as
I can while not losing what I want to say.</p>
<div class="section" id="algebra">
<h2>Algebra</h2>
<p>In the 19th century, a number of different mathematical efforts ended up
using more or less precisely the same structures, though without really
recognizing that they were the same. This recognition came with Cayley,
who gave the first abstract definition of a group.</p>
<p>A group is a set G of elements, with a binary operation *, such that
the following relations hold:</p>
<ol class="arabic simple">
<li>a*(b*c) = (a*b)*c for any a,b,c in G (associativity)</li>
<li>There is an element e such that for any a in G, e*a=a*e=a (identity
element)</li>
<li>For any a in G, there is an element a' in G such that a'*a=a*a'=e
(inverses)</li>
</ol>
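<p>These axioms are exactly the kind of thing one can check exhaustively for a small candidate group; here is a sketch (Python, my choice of example) doing so for the integers mod 5 under addition.</p>

```python
# check the three group axioms for G = Z/5 under addition mod 5
G = range(5)
op = lambda a, b: (a + b) % 5
e = 0  # candidate identity element

associative = all(op(a, op(b, c)) == op(op(a, b), c)
                  for a in G for b in G for c in G)
identity = all(op(e, a) == a == op(a, e) for a in G)
inverses = all(any(op(a, b) == e == op(b, a) for b in G) for a in G)

print(associative and identity and inverses)  # True: Z/5 is a group
```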
<p>This turns out to be just what you needed to merge the studies of
symmetries in geometric objects with the study of solvability of
polynomial equations, and a number of other various ideas that were
floating around. I will put this all on hold for a while, and then
return to it at the end of this rant.</p>
</div>
<div class="section" id="topology">
<h2>Topology</h2>
<p>At the end of the 19th century and beginning of the 20th century, Henri
Poincaré</p>
</div>
Prototyping thought2006-10-28T21:43:00+02:00Michitag:blog.mikael.johanssons.org,2006-10-28:prototyping-thought.html<p>In a recent <a class="reference external" href="http://pozorvlak.livejournal.com/30994.html">post</a>,
<a class="reference external" href="http://pozorvlak.livejournal.com">pozorvlak</a> reminded me of one of
the reasons it is important to have a good, obvious, and quick-to-write
programming language around.</p>
<p>He, like me, is a mathematician, spending his time thinking, finding
patterns, and trying to formulate (more or less) absolute proofs that his
patterns hold all the time, or alternatively ways to demonstrate that they
may not be universal.</p>
<p>In the post linked above, he starts with a neat little exercise, gets
interested, and goes out to look at more examples. These show a very
clear pattern, and after following this pattern quite some way out, he
finally believes the pattern enough to start searching for a proof:
which he also finds.</p>
<p>Now, this is one central heuristic pattern in mathematics. Calculate
examples. Look at the examples. All of a sudden a pattern appears, and
you think up a reason why such a pattern could be there. While proving
the "correctness", if you will, of the pattern, you'll either end up
formulating in enough detail what's going on so that you may possibly
expand the statement to cover even more -- or you may realize that it
does not hold and find a way to construct a counterexample. Either way,
the mass of knowledge increases.</p>
<p>Now, what does this have to do with programming languages?</p>
<div class="line-block">
<div class="line">I started looking at Haskell due to a sequence of blogposts from
<a class="reference external" href="http://sigfpe.blogspot.com">sigfpe</a>, who was describing Haskell
programming from a highly algebraic point of view. And even after
immersing myself in the subject, one thing that stands out is the
relatively low distance between thought expressed in my ordinary
day-to-day mathematical discourse, and thought expressed in Haskell
code. For the ubiquitous example, let's take the factorial function.
Mathematically, I think about factorials as</div>
<div class="line-block">
<div class="line">"n! is the product of all integers from 1 to n"</div>
<div class="line">or</div>
<div class="line">"n!=Γ(n-1)" (though this not so often, I must admit)</div>
</div>
</div>
<div class="line-block">
<div class="line">Then again, "the product of all", when I introduce it strictly for the
first time tends to be expressed along the lines of</div>
<div class="line-block">
<div class="line">"Π:sub:<cite>n=1</cite><sup>k</sup> f(n) is f(k)*Π<sub>n=1</sub><sup>k-1</sup>" or
something along those lines.</div>
</div>
</div>
<p>So, what happens if I want to start playing around a bit with
factorials, and see whether I can say anything interesting about them?</p>
<p>In C or similar languages, I'll fire up an editor, and a command shell,
and start writing code that looks something like</p>
<pre class="literal-block">
#include
int main(int argc, void* argv) {
int i, j, f=1;
scanf("%d", &i);
for(j=1; j
and I will spend more time writing scaffolding and reworking the problem into something I can write in C than I will actually working with my mathematical intuition about it.
C++, Java, PHP, Perl -- it won't be much different in any of these. None of them have an interpreter I like, none of them differ all too much from the mindset of C. (Yes, I say this knowing full well the difference between imperative and object-oriented programming...)
In functional languages, the way I write the code is very much more reminiscent of the way I define it in my head. I could dig out enough Lisp or Scheme from the back of my mind to give you an example, but it's getting late and I couldn't be bothered. So instead I'll go straight on to the thingie I want to praise.
In Haskell, I'll fire up my interpreter, and write at the prompt something very much like
> let fac 0 = 1; fac n = n*fac (n-1)
</pre>
<p>and then start firing off actual calculations into the box.</p>
<pre class="literal-block">
> (fac 8)/((fac 3)*(fac 5))
56.0
</pre>
<p>During my time so far, I've used Haskell quite a bit in my research to
generate lists of testcases. Not necessarily because Haskell is the best
possible language for it, but because I could very quickly write down my
thoughts, get working code out of it, and just go ahead and do what I
wanted. Most languages I've worked with have their uses - and I'm not
out to bash anything right now. But with Haskell, I've had a much lower
threshold thought-to-code, and much less distance between my coder head
and my mathematician head, than with anything else I've seen so far.</p>
Weekly Report: Parties and lectures2006-10-22T11:05:00+02:00Michitag:blog.mikael.johanssons.org,2006-10-22:weekly-report-parties-and-lectures.html<p>The term has started. In full force. No seating in the lunch cafeteria,
lots of people all over the place, lots and lots of new students, and
lectures and examples classes kicking off all over the place.</p>
<p>I'm leading an examples class this year: linear algebra and geometry part
1 for the maths majors. One of six different examples class sessions for
the same course. And apparently, my good tradition of going out drinking
with my students keeps up: I went to the exchange students' term-start
party last Friday, and while partying with the Swedes and Finns of the
Scandinavian Stammtisch on the dance floor, a girl squeezes through the
crowd past us and asks me in passing if I'm not the examples class
teacher. Turns out she registered for my class.</p>
<p>First contact with the students is on Tuesday morning.</p>
<p>In other news, last weekend marked 200 years since Napoleon fought the
Prussians and Saxons at Jena-Auerstedt, so I went up to the old
battleground and watched a reenactment. The French won. Again. But it
was interesting to see a whole field get so covered in gunpowder smoke
that you couldn't see to the other side.</p>
<p>I have submitted my first article to a journal as a result of my PhD.
And I have received a first article to review for the Mathematical
Reviews. Now I only have to read the thing. And write about it.</p>
<p>For those interested, my next Sweden visit is over Christmas. I'm
landing at Arlanda on the 20th of December, and leaving on the 10th of January.</p>
A-infinity and Hochschild cocycles2006-10-20T15:15:00+02:00Michitag:blog.mikael.johanssons.org,2006-10-20:a-infinity-and-hochschild-cocycles.html<p>This blogpost is a running log of my thoughts while reading a couple of
papers by Bernhard Keller. I recommend anyone reading this and feeling
interest to hit the arXiv and search for his introductions to
A<sub>∞</sub>-algebras. Especially math.RA/9910179 serves as a basis for
this post.</p>
<p>If you do enough of a particular brand of homotopy theory, you'll sooner
or later encounter algebras that occur somewhat naturally, but which
aren't necessarily associative as such, but rather only associative up
to homotopy. The first obvious example is that of a loop space, viewed
as a group: associativity only holds after you impose equivalence
between homotopic loops.</p>
<p>In fact, suppose we start with a based space -- the normal situation in
classical homotopy theory of topological spaces -- and look at the
space of all loops based at the basepoint of the space. A loop is simply
an image of the circle in the topological space, with the circle based
in 0 and running up to 2π. Then we know we can compose loops, by just
running through first one on half the time, and then the other on the
second half of the time. So from 0 to π we run through one loop, and then
from π to 2π we run through the other.</p>
<p>However, we now get genuinely different loops -- seen as functions --
from the different ways to compose three loops. Let the loops be called
<em>f,g,h</em>. Then <em>f(gh)</em> is a different loop from <em>(fg)h</em>, since in the
first loop, <em>f</em> happens from 0 to π and in the second, <em>f</em> happens only
from 0 to π/2. But we were doing homotopy! So we have some continuous
map, named the homotopy, that goes from <em>f(gh)</em> to <em>(fg)h</em>. So then
everything is alright after all. We don't have strict associativity in
the loop space, but we have associativity up to homotopy.</p>
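<p>The failure of strict associativity can be seen in a few lines of code (Python, my own illustrative encoding, with loops parametrised on [0,1] instead of [0,2π]): composition runs the first loop on the first half of the interval and the second on the second half, so <em>f(gh)</em> and <em>(fg)h</em> trace the same image on different schedules.</p>

```python
import math

def compose(f, g):
    # run f on [0, 1/2], then g on [1/2, 1]
    return lambda t: f(2 * t) if t < 0.5 else g(2 * t - 1)

circle = lambda t: (math.cos(2 * math.pi * t), math.sin(2 * math.pi * t))
const  = lambda t: (1.0, 0.0)  # the constant loop at the basepoint (1, 0)

left  = compose(compose(circle, const), const)  # (fg)h
right = compose(circle, compose(const, const))  # f(gh)

print(left(0.25), right(0.25))
print(left(0.25) == right(0.25))  # False: different parametrisations
```

<p>At time 1/4, <em>(fg)h</em> has already finished the circle while <em>f(gh)</em> is only halfway around it: the two compositions agree as subsets of the plane but not as functions.</p>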
<div class="line-block">
<div class="line">But now, if we look at the different ways to compose four loops, we
get odd things happening. We'd have <em>f(g(hi))</em> and <em>((fg)h)i</em> as two
extremal versions, and then two different ways to move between them.
One way is to go</div>
<div class="line-block">
<div class="line"><em>f(g(hi)) → (fg)(hi) → ((fg)h)i</em> and the other one is to go</div>
<div class="line"><em>f(g(hi)) → f((gh)i) → (f(gh))i → ((fg)h)i</em></div>
</div>
</div>
<p>So we're stuck in the same place again, but with a higher level set of
problems. Not really a big problem: we'll simply require there to be a
homotopy between these two paths. So we slot in a continuous function
that takes care of that.</p>
<p>But then, if we look at the ways to compose five loops, we end up
getting homotopies that form a spherical shell, and the same problem
that we can do things in different ways. But we can always make sure
these are homotopic as well. And so on. The different homotopies
needed are called <em>associahedra</em> and were introduced by Stasheff. A
topological space that admits all these is called an A<sub>∞</sub>-space.
Moreover, if a topological space admits an A<sub>∞</sub>-structure, then
it is homotopy equivalent to a loop space of something.</p>
<p>But hang on a second. I'm doing algebra, not topology. What good is
this?</p>
<p>Well, we certainly have a concept of associativity for algebras. And we
do, once we start juggling the right kind of objects, have a concept of
homotopies in an algebraic setting. So, we'll try to mimic all of this
but with a suitable setting to be able to talk about algebras that are
simultaneously chain complexes.</p>
<div class="section" id="a-infinity-algebras-enter-the-stage">
<h2>A-infinity algebras enter the stage</h2>
<p>So, we'll want an algebra over some field <em>k</em>. We'll start off gently,
introducing it first as a graded vector space
[tex]V=\bigoplus_{i\in\mathbb Z} V_i[/tex]; and we'll call
[tex]V_p[/tex] the component of degree p. The category of graded vector
spaces comes equipped with one rather natural endofunctor: the suspension
S. It works by [tex](SV)_p=V_{p+1}[/tex]. An <em>n</em>-ary operator of
degree <em>k</em> on a graded vector space <em>V</em> is a family of maps
[tex](V^{\otimes n})_{i-k}\to V_i[/tex].</p>
<div class="line-block">
<div class="line">We want to tune a family of maps corresponding to the composition and
higher homotopies in the topological situation, as well as handling
the differential; which we want there just because we're doing
homological algebra and see a graded space. So we'll want a
differential</div>
<div class="line-block">
<div class="line">[tex]d\colon V\to V[/tex] of degree 1, and a multiplication
[tex]\mu\colon V\otimes V\to V[/tex] of degree 0. The homotopy of
the associativity of the multiplication will be some 3-ary map h such
that [tex]\mu(1\otimes\mu+\mu\otimes1)=hd+dh[/tex], whereby the
lefthand side has degree 0, and the righthand side has degree 1 more
than the degree of h, since the differential has degree 1. Thus, for
each of the higher homotopies, we'll fall one degree step. All in all,
the <em>n</em>-ary higher homotopy operator will have degree 2-<em>n</em>.</div>
</div>
</div>
<div class="line-block">
<div class="line">We note that [tex](V^{\otimes n})_k = (SV^{\otimes n})_{k-n}[/tex]
(just check the degrees on both sides...), and so that this, maybe
slightly artificial looking, degree condition on the higher homotopies
can be handled rather neatly. Each <em>n</em>-ary homotopy is a map</div>
<div class="line-block">
<div class="line">[tex]V^{\otimes n}\to V[/tex] of degree 2-<em>n</em>. That means, that
we can just as well view it as a map</div>
<div class="line">[tex](SV)^{\otimes n}\to SV[/tex] of some degree; since the <em>S</em>
operator only changes the degrees within <em>V</em>. We can find the degree
of this map by looking at where the degree 0 slice of
[tex](SV)^{\otimes n}[/tex] ends up. This is, in reality the degree
<em>n</em> slice of [tex]V^{\otimes n}[/tex], and thus ends up in degree 2
of <em>V</em>, which is to say that it ends up in degree 1 of <em>SV</em>. So the
degree 2-<em>n</em> map defined on <em>V</em> turns into a degree 1 map of <em>SV</em>;
which is rather neat.</div>
</div>
</div>
<p>So, each interesting map -- let's call them all [tex]m_i[/tex] -- is a
degree 1 map [tex](SV)^{\otimes i}\to SV[/tex]. This family of maps is
a part of the structure definition of an A<sub>∞</sub>-algebra. The rest
of its definition is the set of properties we want these maps to
fulfill. And what would those be?</p>
<div class="line-block">
<div class="line">First of all, we want it to be a chain complex when we're done, so
that we can use it to do homological stuff. So we'll want
[tex]m_1^2=0[/tex]. Next, we'll want it to fulfill the Leibniz rule,
so that it really does behave like a differential. So [tex]m_1\circ
m_2 = m_2(1\otimes m_1+m_1\otimes 1)[/tex]. And we want the
associativity to hold - but only up to homotopy. That <em>f</em> and <em>g</em> are
homotopic means that there is some chain map <em>h</em> such that <em>f-g=hd+dh</em>
for the differentials on the chain complexes that <em>f</em> and <em>g</em> go
between. Now, the differential on <em>V</em> we take to be our
[tex]m_1[/tex], and the differential on [tex]V^{\otimes n}[/tex] is
induced from this as the sum [tex]\sum_{i=1}^n 1^{\otimes
i-1}\otimes m_1\otimes 1^{\otimes n-i}[/tex]. Now, the various
versions of associating three elements under this multiplication we've
defined are [tex]m_2(1\otimes m_2)[/tex] and [tex]m_2(m_2\otimes
1)[/tex]. Both of these are chain maps [tex]SV^{\otimes 3}\to
SV[/tex] (chain maps since the Leibniz rule holds). So we'll want
[tex]m_2(1\otimes m_2-m_2\otimes 1)=dm_3+m_3d[/tex], where the
first <em>d</em> is just [tex]m_1[/tex] and the second is
[tex]m_1\otimes1\otimes1+1\otimes m_1\otimes1+1\otimes1\otimes
m_1[/tex]. If we go on with all the higher associahedra, and choose
our signs in a neat way, we'll end up with the generic condition that</div>
<div class="line-block">
<div class="line">[tex]\sum_{n=r+s+t}(-1)^{r+st}m_{r+t+1}\circ(1^{\otimes
r}\otimes m_s\otimes1^{\otimes t})=0[/tex]</div>
</div>
</div>
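<p>As a sanity check: extracting the cases <em>n</em> = 1, 2, 3 from this sum (one term for each decomposition <em>n=r+s+t</em>) recovers exactly the three conditions listed above:</p>

```latex
n=1:\quad m_1 m_1 = 0
n=2:\quad m_1 m_2 = m_2(m_1\otimes 1 + 1\otimes m_1)
n=3:\quad m_2(1\otimes m_2 - m_2\otimes 1)
        = m_1 m_3 + m_3(m_1\otimes 1\otimes 1 + 1\otimes m_1\otimes 1 + 1\otimes 1\otimes m_1)
```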
<p>So that gives us some sort of abstract feel for what an
A<sub>∞</sub>-algebra is. Do we know any examples? Can we construct any?</p>
</div>
<div class="section" id="examples">
<h2>Examples</h2>
<p>First of all, any associative algebra is a differential graded algebra
concentrated in degree 0, with zero differential. So if [tex]V_i=0[/tex]
for all non-zero <em>i</em>, and [tex]m_2[/tex] is the only non-zero
structure map, then we recover the usual associative algebras.</p>
<p>A differential graded algebra is an A<sub>∞</sub>-algebra which is
associative on the nose, not only up to homotopy. This is the same as all
higher homotopies vanishing, so it is the case where [tex]m_1,m_2[/tex]
are the only non-zero maps.</p>
<div class="section" id="hochschild-cohomology">
<h3>Hochschild cohomology</h3>
<p>Inspired by the success of simplicial complexes in algebraic topology,
it is natural to try to introduce similar structures in other
theories. This leads to the idea of simplicial objects - defined as
contravariant functors from the category of finite ordered sets of
integers, with nondecreasing functions as the arrows. In this
setting, face maps and degeneracy maps get introduced: each face map
misses precisely one element, and each degeneracy map duplicates
precisely one element. This corresponds closely to our intuition of
faces of simplices and of degenerate simplices. In any simplicial
theory, the differential of the associated chain complex ends up being
just the alternating sum of the face maps. The whole theory is far
richer than indicated here, though.</p>
<div class="line-block">
<div class="line">Suppose now we want to construct a cohomology theory which includes
this at its core. One method of arriving there is the theory of
Hochschild cohomology. This is built basically the same way as any
other such theory, with the salient difference lying in the choice of
face and degeneracy maps. So to a
bimodule <em>M</em> over a ring <em>R</em>, we set [tex]C_0=M[/tex] and
[tex]C_i=M\otimes R^{\otimes i}[/tex>.</div>
<div class="line-block">
<div class="line">The face maps on this complex are defined as</div>
<div class="line">[tex]\partial_0(m\otimes r_1\otimes\dots\otimes
r_n)=mr_1\otimes r_2\otimes\dots\otimes r_n[/tex]</div>
<div class="line">[tex]\partial_i(m\otimes r_1\otimes\dots\otimes
r_n)=m\otimes\dots\otimes r_ir_{i+1}\otimes\dots\otimes
r_n[/tex]</div>
<div class="line">[tex]\partial_n(m\otimes r_1\otimes\dots\otimes
r_n)=r_nm\otimes r_1\otimes\dots\otimes r_{n-1}[/tex]</div>
<div class="line">and the degeneracy maps just slot in a [tex]\otimes 1\otimes[/tex]
at the appropriate index.</div>
</div>
</div>
<div class="line-block">
<div class="line">We build a complex from this by just putting
[tex]d=\sum(-1)^i\partial_i[/tex]. The homology of the resulting
chain complex is called the Hochschild homology
[tex]HH_*(M,R)[/tex]. Dualizing everything, we get a cochain complex
of multilinear maps [tex]R^n\to M[/tex], and the face maps</div>
<div class="line-block">
<div class="line">[tex]\partial^0f(r_0,\dots,r_n)=r_0f(r_1,\dots,r_n)[/tex]</div>
<div class="line">[tex]\partial^if(r_0,\dots,r_n)=f(r_0,\dots,r_ir_{i+1},\dots,r_n)[/tex]</div>
<div class="line">[tex]\partial^nf(r_0,\dots,r_n)=f(r_0,\dots,r_{n-1})r_n[/tex]</div>
<div class="line">and the degeneracy maps again just inserting a 1 at the appropriate
position. The cohomology of the cochain complex we get from
[tex]d=\sum(-1)^i\partial^i[/tex] is the Hochschild
cohomology [tex]HH^*(R,M)[/tex].</div>
</div>
</div>
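<p>In the spirit of the Haskell posts on this blog, here is a toy rendition of the cochain differential, specialized to integers (so [tex]R=M=\mathbb{Z}[/tex]) so that it actually runs: an <em>n</em>-cochain is modelled as a function on lists of <em>n</em> ring elements. The names dH and mulAt are mine, purely for illustration.</p>

```haskell
-- Hochschild differential on n-cochains, with R = M = Integer:
-- d f (r_0,...,r_n) = r_0 f(r_1,...,r_n)
--                   + sum_{i=1}^{n} (-1)^i f(r_0,...,r_{i-1} r_i,...,r_n)
--                   + (-1)^(n+1) f(r_0,...,r_{n-1}) r_n
type Cochain = [Integer] -> Integer

dH :: Int -> Cochain -> Cochain
dH n f rs =
    head rs * f (tail rs)
  + sum [ (-1)^i * f (mulAt i rs) | i <- [1..n] ]
  + (-1)^(n+1) * f (init rs) * last rs
  where
    -- replace the adjacent pair (r_{i-1}, r_i) by its product
    mulAt i xs = take (i-1) xs ++ [xs !! (i-1) * xs !! i] ++ drop (i+1) xs
```

<p>Applying dH twice to a 1-cochain gives 0, as it must for a differential, and the multiplication 2-cochain is a cocycle precisely because the ring is associative.</p>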
<p>Now, let's take some associative algebra <em>B</em>, and look at the graded
algebra [tex]A=B[\epsilon]/\epsilon^2[/tex] with
[tex]|\epsilon|=2-N[/tex]. A note to the attentive reader: this
construction is very reminiscent of a graded version of what
<a class="reference external" href="http://sigfpe.blogspot.com">sigfpe</a> is doing with <a class="reference external" href="http://sigfpe.blogspot.com/2006/09/practical-synthetic-differential.html">practical
synthetic differential geometry</a>.
We furthermore pick some multilinear map [tex]c\colon B^N\to B[/tex],
and define [tex]m_2[/tex] to be the ordinary multiplication in <em>A</em>, and
[tex]m_i=0[/tex] for all other <em>i</em>. Now, we can get a new A<sub>∞</sub>
structure by setting [tex]\mu_i=m_i[/tex] for all [tex]i\neq N[/tex]
and [tex]\mu_N=m_N+\epsilon c[/tex].</p>
<div class="line-block">
<div class="line">Suppose first that we picked our [tex]N=2[/tex]. Then we get, from the
A<sub>∞</sub>-conditions that</div>
<div class="line-block">
<div class="line">[tex]\mu_1^2=0[/tex] (sure, [tex]\mu_1=0[/tex] anyway...)</div>
<div class="line">[tex]\mu_1\circ \mu_2=\mu_2(1\otimes \mu_1+\mu_1\otimes
1)[/tex] (no problem. [tex]\mu_1[/tex] is still 0)</div>
<div class="line">[tex]\mu_2(\mu_2\otimes 1-1\otimes\mu_2)=0[/tex], which we
can expand, inserting values, to get</div>
<div class="line">[tex](\mu_2(\mu_2\otimes 1-1\otimes\mu_2))(x,y,z)=[/tex]</div>
<div class="line">[tex](xy)z-x(yz)+\epsilon(c(x,y)z-xc(y,z))+\epsilon(c(xy,z)-c(x,yz))+\epsilon^2(\cdots)[/tex]</div>
<div class="line">where <em>(xy)z-x(yz)=0</em> since <em>B</em> is associative anyway, the
[tex]\epsilon^2[/tex] term vanishes since [tex]\epsilon^2=0[/tex], and the
remaining [tex]\epsilon[/tex]-part is exactly <em>-dc(x,y,z)</em> for the
Hochschild differential <em>d</em>. So this relation holds precisely when
<em>c</em> satisfies <em>dc=0</em>.</div>
</div>
</div>
<div class="line-block">
<div class="line">Being a Hochschild 2-cocycle means precisely that
[tex]dc=0[/tex], which using the definitions means that
<em>dc(x,y,z)=xc(y,z)-c(xy,z)+c(x,yz)-c(x,y)z=0</em>, which fits perfectly
with our observation. Suppose now that we pick some other
[tex]N[/tex]. Then, the only relations that survive, since all but two
operations vanish, are those where [tex]\mu_2[/tex] and
[tex]\mu_N[/tex] are combined. This occurs precisely for the
relations of degrees <em>N+1</em> and <em>2N-1</em>. For the latter case, we
combine [tex]\epsilon c[/tex] with [tex]\epsilon c[/tex], which
gives us something multiplied with [tex]\epsilon^2[/tex], thus
vanishing. For the first relation, we receive instead</div>
<div class="line-block">
<div class="line">[tex]m_2((-1)^N\epsilon c\otimes1-1\otimes\epsilon
c)+\sum_{i=1}^{N}\epsilon c(1^{\otimes i-1}\otimes m_2\otimes
1^{\otimes N-i})[/tex], or if we translate it to something readable,
involving an application to elements, it turns into</div>
<div class="line">[tex]r_0c(r_1,\dots,r_N)+(-1)^Nc(r_0,\dots,r_{N-1})r_N+\sum_{i=1}^N(-1)^ic(r_0,\dots,r_{i-1}r_i,\dots,r_N)[/tex]</div>
<div class="line">which looks very much like the Hochschild differential on cochains.</div>
</div>
</div>
<p>So, this is an A<sub>∞</sub>-algebra <em>precisely</em> when the chosen map <em>c</em>
is a Hochschild cocycle.</p>
<p>If you've read this far, I'm impressed. If you've understood what I've
been saying, I'm even more impressed - and probably will run across you
sooner or later at some conference or other.</p>
</div>
</div>
Computational Group Theory in Haskell (1 in a series)2006-10-18T15:37:00+02:00Michitag:blog.mikael.johanssons.org,2006-10-18:computational-group-theory-in-haskell-1-in-a-series.html<p>This term, I'm listening to a lecture course on Computational Group
Theory. As a good exercise, I plan to implement everything we talk about
as Haskell functions as well.</p>
<p>The first lecture was mainly an introduction to the area, ending with a
very na</p>
Academic firsts2006-10-10T12:22:00+02:00Michitag:blog.mikael.johanssons.org,2006-10-10:academic-firsts.html<p>I just submitted a paper to a journal.</p>
<p>Based on research I have done during my time as a PhD student.</p>
<p>Wish me luck.</p>
<p><em>Update:</em>If you want to read the paper, I suggest you go look at
<a class="reference external" href="http://arxiv.org/abs/math.GR/0610374">arXiv:math.GR/0610374</a>.</p>
Weekly Report: Settling down again2006-10-08T11:08:00+02:00Michitag:blog.mikael.johanssons.org,2006-10-08:weekly-report-settling-down-again.html<p>I get the feeling that my pledge to write the weekly reports regularly
has been less than successful. So I'll try to renew that pledge: I shall
try to keep up the regularity of my weekly reports.</p>
<p>Since last authored, I have been running a mathematics camp for 10th
grade kids in mathematics-oriented schools. There are (apparently) 3 or
4 of those in Th</p>
OpenGL programming in Haskell, a tutorial (Part 2)2006-09-19T17:55:00+02:00Michitag:blog.mikael.johanssons.org,2006-09-19:opengl-programming-in-haskell-a-tutorial-part-2.html<p>As we left off the <a class="reference external" href="http://blog.mikael.johanssons.org/archive/2006/09/opengl-programming-in-haskell-a-tutorial-part-1/">last
installment</a>,
we were just about capable of opening up a window, and drawing some basic
things in it by giving coordinate lists to the command renderPrimitive.
The programs we built suffered from a couple of rather ugly
restrictions when we wrote them - for one, they weren't really very
modularized. The code would have been much clearer had we farmed out
important subtasks to other modules. For another, we never even
considered the fact that some manipulations should not necessarily
apply to the entire picture.</p>
<div class="section" id="some-modules">
<h2>Some modules</h2>
<p>To deal with the first problem, let's break apart our program a little
bit, forming several more or less independent code files linked together
to form a whole.</p>
<p>First off, HelloWorld.hs - containing a very generic program skeleton.
We will use our module Bindings to setup everything else we might need,
and tie them to the callbacks.</p>
<pre class="literal-block">
import Graphics.Rendering.OpenGL
import Graphics.UI.GLUT
import Bindings

main = do
  (progname,_) <- getArgsAndInitialize
  createWindow "Hello World"
  displayCallback $= display
  reshapeCallback $= Just reshape
  keyboardMouseCallback $= Just keyboardMouse
  mainLoop
</pre>
<p>Then Bindings.hs - our switchboard</p>
<pre class="literal-block">
module Bindings (display,reshape,keyboardMouse) where
import Graphics.Rendering.OpenGL
import Graphics.UI.GLUT
import Display

reshape s@(Size w h) = do
  viewport $= (Position 0 0, s)

keyboardMouse key state modifiers position = return ()
</pre>
<p>We're going to be hacking around a LOT with the display function, so
let's isolate that one to a module of its own: Display.hs</p>
<pre class="literal-block">
module Display (display) where
import Graphics.Rendering.OpenGL
import Graphics.UI.GLUT
import Cube

display = do
  clear [ColorBuffer]
  cube (0.2::GLfloat)
  flush
</pre>
<p>And a first utility module, containing the gritty details of drawing the
cube [tex][-w,w]^3[/tex], called Cube.hs</p>
<pre class="literal-block">
module Cube where
import Graphics.Rendering.OpenGL
import Graphics.UI.GLUT

cube w = do
  renderPrimitive Quads $ do
    vertex $ Vertex3 w w w
    vertex $ Vertex3 w w (-w)
    vertex $ Vertex3 w (-w) (-w)
    vertex $ Vertex3 w (-w) w
    vertex $ Vertex3 w w w
    vertex $ Vertex3 w w (-w)
    vertex $ Vertex3 (-w) w (-w)
    vertex $ Vertex3 (-w) w w
    vertex $ Vertex3 w w w
    vertex $ Vertex3 w (-w) w
    vertex $ Vertex3 (-w) (-w) w
    vertex $ Vertex3 (-w) w w
    vertex $ Vertex3 (-w) w w
    vertex $ Vertex3 (-w) w (-w)
    vertex $ Vertex3 (-w) (-w) (-w)
    vertex $ Vertex3 (-w) (-w) w
    vertex $ Vertex3 w (-w) w
    vertex $ Vertex3 w (-w) (-w)
    vertex $ Vertex3 (-w) (-w) (-w)
    vertex $ Vertex3 (-w) (-w) w
    vertex $ Vertex3 w w (-w)
    vertex $ Vertex3 w (-w) (-w)
    vertex $ Vertex3 (-w) (-w) (-w)
    vertex $ Vertex3 (-w) w (-w)
</pre>
<p>Now, compiling this entire section with the command</p>
<pre class="literal-block">
ghc --make -package GLUT HelloWorld.hs -o HelloWorld
</pre>
<p>compiles and links each module needed, and produces, in the end, an
executable to be used. There we go! Much more modularized, much smaller
and simpler bits and pieces. And - an added boon - we won't normally
need to recompile as much for each change we do.</p>
<p>This skeletal program will look like this: <img alt="image0" src="http://mikael.johanssons.org/images/hopengl/tut2/skeleton.png" /></p>
</div>
<div class="section" id="local-transformations">
<h2>Local transformations</h2>
<p>One of the core reasons I started to write this tutorial series was that
I wanted to figure out why <a class="reference external" href="http://www.tfh-berlin.de/~panitz/hopengl/skript.html">Panitz'
tutorial</a>
didn't work for me. The core explanation is simple - the names of some
of the functions used have changed since he wrote it. Thus,
matrixExcursion in his tutorial is nowadays named preservingMatrix. This
may well change further - though I hope it won't - in which case this
tutorial will be painfully out of date as well.</p>
<p>The idea of preservingMatrix, however, is to take a small piece of
drawing actions, and perform them independent of the transformations
going on outside that small piece. For demonstration, let's draw a bunch
of cubes, shall we?</p>
<p>We'll change the rather boring display subroutine in Display.hs into one
using preservingMatrix to modify each cube drawn individually, giving a
new Display.hs:</p>
<pre class="literal-block">
module Display (display) where
import Graphics.Rendering.OpenGL
import Graphics.UI.GLUT
import Cube

points :: [(GLfloat,GLfloat,GLfloat)]
points = map (\k -> (sin(2*pi*k/12),cos(2*pi*k/12),0.0)) [1..12]

display = do
  clear [ColorBuffer]
  mapM_ (\(x,y,z) -> preservingMatrix $ do
      color $ Color3 x y z
      translate $ Vector3 x y z
      cube (0.1::GLfloat)
    ) points
  flush
</pre>
<p>Say... Those points on the unit circle might be something we'll want
more of. Let's abstract again! We'll break them out into a Points.hs.
We'll have to juggle a bit with the type system to get things to work
out, and in the end we get</p>
<pre class="literal-block">
module Points where
import Graphics.Rendering.OpenGL

points :: Int -> [(GLfloat,GLfloat,GLfloat)]
points n' = let n = fromIntegral n' in map (\k -> let t = 2*pi*k/n in (sin(t),cos(t),0.0)) [1..n]
</pre>
<p>and then we get the Display.hs</p>
<pre class="literal-block">
module Display (display) where
import Graphics.Rendering.OpenGL
import Graphics.UI.GLUT
import Cube
import Points

display = do
  clear [ColorBuffer]
  mapM_ (\(x,y,z) -> preservingMatrix $ do
      color $ Color3 ((x+1.0)/2.0) ((y+1.0)/2.0) ((z+1.0)/2.0)
      translate $ Vector3 x y z
      cube (0.1::GLfloat)
    ) $ points 7
  flush
</pre>
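<p>As a quick sanity check that these really are points on the unit circle, here is the same generator restated over plain Double (no OpenGL types needed), together with the property it should satisfy; circlePoints and onUnitCircle are illustrative names of mine, not part of the tutorial code.</p>

```haskell
-- the Points generator over Double, so it can be checked without OpenGL
circlePoints :: Int -> [(Double, Double, Double)]
circlePoints n' = let n = fromIntegral n'
                  in map (\k -> let t = 2*pi*k/n in (sin t, cos t, 0.0)) [1..n]

-- every generated point should sit on the unit circle in the z = 0 plane
onUnitCircle :: (Double, Double, Double) -> Bool
onUnitCircle (x, y, z) = abs (x*x + y*y - 1) < 1e-9 && z == 0.0
```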
<div class="line-block">
<div class="line">where we note that we need to renormalize our colours to get them
within the interval [0,1] from the interval [-1,1] in order to get
valid colour values. The program looks like</div>
<div class="line-block">
<div class="line"><img alt="image1" src="http://mikael.johanssons.org/images/hopengl/tut2/7circle.png" /></div>
</div>
</div>
<p>The point of this yoga doesn't become apparent until you start adding
some global transformations as well. So let's! We add the line</p>
<pre class="literal-block">
scale 0.7 0.7 (0.7::GLfloat)
</pre>
<div class="line-block">
<div class="line">just after the clear [ColorBuffer], in order to scale the entire
picture. As a result, we get</div>
<div class="line-block">
<div class="line"><img alt="image2" src="http://mikael.johanssons.org/images/hopengl/tut2/7circlescaled.png" /></div>
</div>
</div>
<p>We can do this with all sorts of transformations - we can rotate the
picture, skew it, move the entire picture around. Using
preservingMatrix, we make sure that the transformations "outside" apply
in the way we'd expect them to.</p>
</div>
<div class="section" id="back-to-the-callbacks">
<h2>Back to the callbacks</h2>
<div class="section" id="animation">
<h3>Animation</h3>
<p>A lot of OpenGL programming is centered around the program being
prepared to launch some sequence of actions when some event occurs.
Let's use this to build a rotating version of our bunch of points up
there. In order to do things over time, we're going to be using the
global callbacks idleCallback and timerCallback. So, we'll modify the
structure of our files a bit - starting from the top.</p>
<p>We'll need a new callback. And we'll also need a state variable of our
own, which in turn needs to be fed to all functions that may need to use
it. Incorporating these changes, we get a new HelloWorld.hs:</p>
<pre class="literal-block">
import Graphics.Rendering.OpenGL
import Graphics.UI.GLUT
import Bindings
import Data.IORef

main = do
  (progname,_) <- getArgsAndInitialize
  createWindow "Hello World"
  reshapeCallback $= Just reshape
  keyboardMouseCallback $= Just keyboardMouse
  angle <- newIORef 0.0
  displayCallback $= (display angle)
  idleCallback $= Just (idle angle)
  mainLoop
</pre>
<div class="line-block">
<div class="line">Note the addition of an angle, and an idle. We need to feed the value
of angle both to idle and to display, in order for them to use it
accordingly. Now, we need to define idle somewhere - and since we keep
all the bits and pieces we modify a lot in Display.hs, let's put it in
there.</div>
<div class="line-block">
<div class="line">Exporting it all the way requires us to change the first line of
Bindings.hs to</div>
</div>
</div>
<pre class="literal-block">
module Bindings (idle,display,reshape,keyboardMouse) where
</pre>
<p>Display.hs:</p>
<pre class="literal-block">
module Display (display,idle) where
import Graphics.Rendering.OpenGL
import Graphics.UI.GLUT
import Data.IORef
import Cube
import Points

display angle = do
  clear [ColorBuffer]
  a <- get angle
  rotate a $ Vector3 0 0 (1::GLfloat)
  scale 0.7 0.7 (0.7::GLfloat)
  mapM_ (\(x,y,z) -> preservingMatrix $ do
      color $ Color3 ((x+1.0)/2.0) ((y+1.0)/2.0) ((z+1.0)/2.0)
      translate $ Vector3 x y z
      cube (0.1::GLfloat)
    ) $ points 7
  flush

idle angle = do
  a <- get angle
  angle $= a + 0.1
</pre>
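<p>Stripped of the OpenGL parts, the idle callback above is nothing but shared mutable state through Data.IORef. Here is a minimal sketch of just the state-advancing step; idleStep is an illustrative name of mine, and the real callback would of course also trigger a redraw.</p>

```haskell
import Data.IORef

-- the angle-advancing core of the idle callback, without any redrawing
idleStep :: IORef Double -> IO ()
idleStep angle = do
  a <- readIORef angle        -- what OpenGL's get does for us above
  writeIORef angle (a + 0.1)  -- what angle $= a + 0.1 does
```

<p>Each invocation bumps the stored angle by 0.1, which is exactly what happens between two frames of the animation.</p>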
<p>Now, running this program makes a couple of different things painfully
obvious. One is that things flicker. Another is that our ring is
shrinking violently. The shrinking is due to our forgetting to reset all
our transformations before we apply the next, and the flicker is because
we're redrawing the entire picture step by step. Much smoother
animation will be had if we use a double buffering technique. This
isn't at all hard. We need to modify a few places - tell HOpenGL that we
want to do double buffering, and say when we want to swap the newly
drawn canvas for the one on the screen. So, we modify, again, HelloWorld.hs:</p>
<pre class="literal-block">
import Graphics.Rendering.OpenGL
import Graphics.UI.GLUT
import Data.IORef
import Bindings

main = do
  (progname,_) <- getArgsAndInitialize
  initialDisplayMode $= [DoubleBuffered]
  createWindow "Hello World"
  reshapeCallback $= Just reshape
  keyboardMouseCallback $= Just keyboardMouse
  angle <- newIORef 0.0
  idleCallback $= Just (idle angle)
  displayCallback $= (display angle)
  mainLoop
</pre>
<p>and we also need to modify Display.hs to implement the bufferswapping.
While we're at it, we add the command loadIdentity, which resets the
modification matrix.</p>
<pre class="literal-block">
module Display (display,idle) where
import Graphics.Rendering.OpenGL
import Graphics.UI.GLUT
import Data.IORef
import Cube
import Points

display angle = do
  clear [ColorBuffer]
  loadIdentity
  a <- get angle
  rotate a $ Vector3 0 0 (1::GLfloat)
  scale 0.7 0.7 (0.7::GLfloat)
  mapM_ (\(x,y,z) -> preservingMatrix $ do
      color $ Color3 ((x+1.0)/2.0) ((y+1.0)/2.0) ((z+1.0)/2.0)
      translate $ Vector3 x y z
      cube (0.1::GLfloat)
    ) $ points 7
  swapBuffers

idle angle = do
  a <- get angle
  angle $= a+0.1
  postRedisplay Nothing
</pre>
<p>There we are! That looks pretty, doesn't it? Now, we could start adding
control for the user, couldn't we? Let's add some keyboard interfaces.
We'll start by letting the rotation direction change when we press
spacebar, letting the arrow keys displace the whole figure, and letting
+ and - increase/decrease the rotation speed.</p>
<p>Again, we're adding states, so we need to modify HelloWorld.hs</p>
<pre class="literal-block">
import Graphics.Rendering.OpenGL
import Graphics.UI.GLUT
import Data.IORef
import Bindings

main = do
  (progname,_) <- getArgsAndInitialize
  initialDisplayMode $= [DoubleBuffered]
  createWindow "Hello World"
  reshapeCallback $= Just reshape
  angle <- newIORef (0.0::GLfloat)
  delta <- newIORef (0.1::GLfloat)
  position <- newIORef (0.0::GLfloat, 0.0)
  keyboardMouseCallback $= Just (keyboardMouse delta position)
  idleCallback $= Just (idle angle delta)
  displayCallback $= (display angle position)
  mainLoop
</pre>
<p>Note that position is sent along to the keyboard as well as the display
callbacks. And in Bindings.hs, we give the keyboard callback an actual
function:</p>
<pre class="literal-block">
module Bindings (idle,display,reshape,keyboardMouse) where
import Graphics.Rendering.OpenGL
import Graphics.UI.GLUT
import Data.IORef
import Display

reshape s@(Size w h) = do
  viewport $= (Position 0 0, s)

keyboardAct a p (Char ' ') Down = do
  a' <- get a
  a $= -a'
keyboardAct a p (Char '+') Down = do
  a' <- get a
  a $= 2*a'
keyboardAct a p (Char '-') Down = do
  a' <- get a
  a $= a'/2
keyboardAct a p (SpecialKey KeyLeft) Down = do
  (x,y) <- get p
  p $= (x-0.1,y)
keyboardAct a p (SpecialKey KeyRight) Down = do
  (x,y) <- get p
  p $= (x+0.1,y)
keyboardAct a p (SpecialKey KeyUp) Down = do
  (x,y) <- get p
  p $= (x,y+0.1)
keyboardAct a p (SpecialKey KeyDown) Down = do
  (x,y) <- get p
  p $= (x,y-0.1)
keyboardAct _ _ _ _ = return ()

keyboardMouse angle pos key state modifiers position =
  keyboardAct angle pos key state
</pre>
<p>Finally, in Display.hs we use the new information to redraw the scene
accordingly, in particular the now-varying amount by which to change the
current angle. Note that in order to keep the placement of the circle
from being pulled in with all the other modifications we're doing, we do
the translation outside a preservingMatrix call.</p>
<pre class="literal-block">
module Display (display,idle) where
import Graphics.Rendering.OpenGL
import Graphics.UI.GLUT
import Data.IORef
import Cube
import Points

display angle position = do
  clear [ColorBuffer]
  loadIdentity
  (x,y) <- get position
  translate $ Vector3 x y 0
  preservingMatrix $ do
    a <- get angle
    rotate a $ Vector3 0 0 (1::GLfloat)
    scale 0.7 0.7 (0.7::GLfloat)
    mapM_ (\(x,y,z) -> preservingMatrix $ do
        color $ Color3 ((x+1.0)/2.0) ((y+1.0)/2.0) ((z+1.0)/2.0)
        translate $ Vector3 x y z
        cube (0.1::GLfloat)
      ) $ points 7
  swapBuffers

idle angle delta = do
  a <- get angle
  d <- get delta
  angle $= a+d
  postRedisplay Nothing
</pre>
</div>
</div>
<div class="section" id="summary">
<h2>Summary</h2>
<p>We now know how to modify only parts of a picture, and we also know how
to use the idle and the keyboardMouse callback to support animations and
keyboard input.</p>
<p>In order to somewhat limit the amount of typing I need to do, I'll give
links that give details on some of the themes we've touched upon.</p>
<div class="line-block">
<div class="line">First of all, the callbacks are described in more detail and with call
signatures at</div>
<div class="line"><a class="reference external" href="http://haskell.org/ghc/docs/latest/html/libraries/GLUT/Graphics-UI-GLUT-Callbacks-Global.html">http://haskell.org/ghc/docs/latest/html/libraries/GLUT/Graphics-UI-GLUT-Callbacks-Global.html</a>
for the global callbacks (menu systems, and timing/idle callbacks)</div>
<div class="line"><a class="reference external" href="http://haskell.org/ghc/docs/latest/html/libraries/GLUT/Graphics-UI-GLUT-Callbacks-Window.html">http://haskell.org/ghc/docs/latest/html/libraries/GLUT/Graphics-UI-GLUT-Callbacks-Window.html</a>
for the window-specific callbacks (display, reshape, keyboard&mouse,
visibility changes, window closing, mouse movement, spaceballs,
drawing tablets, joysticks and dial&button)</div>
</div>
<div class="line-block">
<div class="line">Furthermore, the various primitives for drawing are listed at
<a class="reference external" href="http://haskell.org/ghc/docs/latest/html/libraries/OpenGL/Graphics-Rendering-OpenGL-GL-BeginEnd.html">http://haskell.org/ghc/docs/latest/html/libraries/OpenGL/Graphics-Rendering-OpenGL-GL-BeginEnd.html</a>.</div>
<div class="line-block">
<div class="line">There are 3-dimensional primitives ready as well. These can be found
at
<a class="reference external" href="http://haskell.org/ghc/docs/latest/html/libraries/GLUT/Graphics-UI-GLUT-Objects.html">http://haskell.org/ghc/docs/latest/html/libraries/GLUT/Graphics-UI-GLUT-Objects.html</a></div>
</div>
</div>
<p>The flag I set to trigger double buffering is described among the GLUT
initialization methods, see
<a class="reference external" href="http://haskell.org/ghc/docs/latest/html/libraries/GLUT/Graphics-UI-GLUT-Initialization.html">http://haskell.org/ghc/docs/latest/html/libraries/GLUT/Graphics-UI-GLUT-Initialization.html</a>
for everything you can do there.</p>
<p>Next time, I figure I'll get around to do Mouse callbacks and 3d
graphics. We'll see.</p>
</div>
Hello, Planet Haskell2006-09-14T17:28:00+02:00Michitag:blog.mikael.johanssons.org,2006-09-14:hello-planet-haskell.html<p>Since the content rate of haskell-related posts is going up, the feed of
this blog will get added to <a class="reference external" href="http://planet.haskell.org">Planet
Haskell</a>. Hi, Planet!</p>
OpenGL programming in Haskell - a tutorial (part 1)2006-09-14T16:17:00+02:00Michitag:blog.mikael.johanssons.org,2006-09-14:opengl-programming-in-haskell-a-tutorial-part-1.html<p>After having failed to follow the <a class="reference external" href="http://www.tfh-berlin.de/~panitz/hopengl/skript.html">googled tutorial in HOpenGL
programming</a>, I
thought I'd write down the steps I actually can get to work in a
tutorial-like fashion. It may be a good idea to read this in parallel
to the tutorial linked, since Panitz actually brings a lot of good
explanations, even though his syntax isn't up to speed with the latest
HOpenGL at all points.</p>
<div class="section" id="hello-world">
<h2>Hello World</h2>
<p>First of all, we'll want to load the OpenGL libraries, throw up a
window, and generally get to grips with what needs to be done to get a
program running at all.</p>
<pre class="literal-block">
import Graphics.Rendering.OpenGL
import Graphics.UI.GLUT

main = do
  (progname, _) <- getArgsAndInitialize
  createWindow "Hello World"
  mainLoop
</pre>
<div class="line-block">
<div class="line">This code throws up a window, with a given title. Nothing more
happens, though. This is the skeleton that we'll be building on to.
Save it to HelloWorld.hs and compile it by running</div>
<div class="line-block">
<div class="line">ghc -package GLUT HelloWorld.hs -o HelloWorld</div>
</div>
</div>
<p>However, as a skeleton, it is profoundly worthless. It doesn't even
redraw the window, so we should definitely make sure to have a function
that takes care of that in there somewhere. Telling the OpenGL system
what to do is done by using state variables, and these, in turn, are
handled by the IORef datatype from Data.IORef. So we modify our code to
the following</p>
<pre class="literal-block">
import Graphics.Rendering.OpenGL
import Graphics.UI.GLUT

main = do
  (progname, _) <- getArgsAndInitialize
  createWindow "Hello World"
  displayCallback $= clear [ ColorBuffer ]
  mainLoop
</pre>
<p>This sets the global state variable carrying the callback function
responsible for drawing the window to be the action that clears the
color buffer. Save this to HelloWorld.hs, recompile, and rerun. This
program no longer carries the original pixels along, but rather clears
everything out.</p>
<p>The displayCallback is a globally defined IORef, which can be accessed
through a host of functions defined in Data.IORef. However, deep within
the OpenGL code, there are a couple of definitions providing the
interface functions $= and get to facilitate interactions with these.
Thus we can do things like</p>
<pre class="literal-block">
height <- newIORef 1.0
currentheight <- get height
height $= 1.5
</pre>
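<p>The snippet above is just a fragment; using the plain Data.IORef primitives (readIORef and writeIORef in the roles of get and $=), a self-contained version of the same pattern looks like this, with heightDemo as an illustrative name of mine:</p>

```haskell
import Data.IORef

-- the read/modify pattern from above, with plain IORef primitives
heightDemo :: IO (Double, Double)
heightDemo = do
  height <- newIORef 1.0             -- create the state variable
  currentheight <- readIORef height  -- corresponds to: currentheight <- get height
  writeIORef height 1.5              -- corresponds to: height $= 1.5
  newheight <- readIORef height
  return (currentheight, newheight)
```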
</div>
<div class="section" id="using-the-drawing-canvas">
<h2>Using the drawing canvas</h2>
<p>So, we have a window, we have a display callback that clears the canvas.
Don't we want more out of it? Sure we do. So let's draw some things.</p>
<pre class="literal-block">
import Graphics.Rendering.OpenGL
import Graphics.UI.GLUT

myPoints :: [(GLfloat,GLfloat,GLfloat)]
myPoints = map (\k -> (sin(2*pi*k/12),cos(2*pi*k/12),0.0)) [1..12]

main = do
  (progname, _) <- getArgsAndInitialize
  createWindow "Hello World"
  displayCallback $= display
  mainLoop

display = do
  clear [ColorBuffer]
  renderPrimitive Points $ mapM_ (\(x, y, z) -> vertex $ Vertex3 x y z) myPoints
  flush
</pre>
<p>Now, the important thing to notice in this code extract is that last
line. It starts a rendering definition, gives the type to be rendered,
and then a sequence of function calls, each of which adds a vertex to
the rendering canvas. The statement is basically equivalent to something
along the lines of</p>
<pre class="literal-block">
renderPrimitive Points $ do
  vertex $ Vertex3 ...
  vertex $ Vertex3 ...
</pre>
<div class="line-block">
<div class="line">for appropriate triples of coordinate values at the appropriate
places. This results in the rendition here:</div>
<div class="line-block">
<div class="line"><img alt="image0" src="http://mikael.johanssons.org/images/hopengl/points.png" /></div>
</div>
</div>
<p>We can replace Points with other primitives, leading to the rendering
of:</p>
<div class="section" id="triangles">
<h3>Triangles</h3>
<div class="line-block">
<div class="line"><img alt="image1" src="http://mikael.johanssons.org/images/hopengl/triangles.png" /></div>
<div class="line-block">
<div class="line">Each three coordinates following each other define a triangle. The
last n mod 3 coordinates are ignored.</div>
<div class="line">Keyword Triangles</div>
</div>
</div>
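<p>The grouping behaviour is easy to state in plain Haskell; a small illustrative helper (not part of the OpenGL API) that chops a vertex list the way Triangles does:</p>

```haskell
-- group a vertex list into triangles; the trailing n mod 3 vertices are dropped
triangleGroups :: [a] -> [[a]]
triangleGroups (a:b:c:rest) = [a, b, c] : triangleGroups rest
triangleGroups _            = []
```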
</div>
<div class="section" id="triangle-strips">
<h3>Triangle strips</h3>
<div class="line-block">
<div class="line"><img alt="image2" src="http://mikael.johanssons.org/images/hopengl/trianglestrip.png" /></div>
<div class="line-block">
<div class="line">Triangles are drawn according to a "moving window" of size three, so
the two last coordinates in the previous triangle become the two first
in the next triangle.</div>
<div class="line">Keyword TriangleStrip</div>
</div>
</div>
</div>
<div class="section" id="triangle-fans">
<h3>Triangle fans</h3>
<div class="line-block">
<div class="line"><img alt="image3" src="http://mikael.johanssons.org/images/hopengl/trianglesfan.png" /></div>
<div class="line-block">
<div class="line">Triangle fans have the first given coordinate as a basepoint, and
takes each pair of subsequent coordinates to define a triangle
together with the first coordinate.</div>
<div class="line">Keyword TriangleFan</div>
</div>
</div>
</div>
<div class="section" id="lines">
<h3>Lines</h3>
<div class="line-block">
<div class="line"><img alt="image4" src="http://mikael.johanssons.org/images/hopengl/lines.png" /></div>
<div class="line-block">
<div class="line">Each pair of coordinates define a line.</div>
<div class="line">Keyword Lines</div>
</div>
</div>
</div>
<div class="section" id="line-loops">
<h3>Line loops</h3>
<div class="line-block">
<div class="line"><img alt="image5" src="http://mikael.johanssons.org/images/hopengl/lineloop.png" /></div>
<div class="line-block">
<div class="line">With line loops, each further coordinate defines a line together with
the last coordinate used. Once all coordinates are used up, an
additional line is drawn back to the beginning.</div>
<div class="line">Keyword LineLoop</div>
</div>
</div>
</div>
<div class="section" id="line-strips">
<h3>Line strips</h3>
<div class="line-block">
<div class="line"><img alt="image6" src="http://mikael.johanssons.org/images/hopengl/linestrip.png" /></div>
<div class="line-block">
<div class="line">Line strips are like line loops, only without the last link added.</div>
<div class="line">Keyword LineStrip</div>
</div>
</div>
</div>
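<p>The three line primitives likewise differ only in how they pair up the
vertex list, which a small pure sketch makes precise (helper names
invented for illustration; this is not the HOpenGL API):</p>

```haskell
-- Lines: each pair of consecutive vertices forms a separate segment;
-- a trailing odd vertex is ignored.
lineSegments :: [a] -> [(a, a)]
lineSegments (a:b:rest) = (a, b) : lineSegments rest
lineSegments _          = []

-- LineStrip: consecutive vertices are chained together.
lineStrip :: [a] -> [(a, a)]
lineStrip xs = zip xs (drop 1 xs)

-- LineLoop: a line strip with one extra segment closing the loop.
lineLoop :: [a] -> [(a, a)]
lineLoop []  = []
lineLoop [_] = []
lineLoop xs  = lineStrip xs ++ [(last xs, head xs)]
```

<p>So lineLoop [1..4] produces the closed square
[(1,2),(2,3),(3,4),(4,1)], while lineStrip on the same input leaves off
the final (4,1).</p>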
<div class="section" id="quadrangles">
<h3>Quadrangles</h3>
<div class="line-block">
<div class="line"><img alt="image7" src="http://mikael.johanssons.org/images/hopengl/quad.png" /></div>
<div class="line-block">
<div class="line">For the Quads keyword, each group of four coordinates defines a
quadrangle.</div>
<div class="line">Keyword Quads</div>
</div>
</div>
</div>
<div class="section" id="quadrangle-strips">
<h3>Quadrangle strips</h3>
<div class="line-block">
<div class="line"><img alt="image8" src="http://mikael.johanssons.org/images/hopengl/quadstrip.png" /></div>
<div class="line-block">
<div class="line">A QuadStrip works like a triangle strip, except that the window is
four coordinates wide and advances two coordinates each step, so each
new pair of coordinates attaches a new quadrangle to the last edge of
the previous quadrangle.</div>
<div class="line">Keyword QuadStrip</div>
</div>
</div>
</div>
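<p>The two quadrangle keywords can be sketched the same way in pure
Haskell (illustrative helper names, not part of HOpenGL):</p>

```haskell
-- Quads: each group of four consecutive vertices forms a quadrangle.
quads :: [a] -> [(a, a, a, a)]
quads (a:b:c:d:rest) = (a, b, c, d) : quads rest
quads _              = []

-- QuadStrip: a window four vertices wide, advancing two at a time, so
-- each new pair of vertices attaches a quadrangle to the previous edge.
quadStrip :: [a] -> [(a, a, a, a)]
quadStrip (a:b:c:d:rest) = (a, b, c, d) : quadStrip (c:d:rest)
quadStrip _              = []
```

<p>For example, quadStrip [1..6] gives [(1,2,3,4),(3,4,5,6)]: vertices 3
and 4 are shared between the two quadrangles.</p>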
<div class="section" id="polygon">
<h3>Polygon</h3>
<div class="line-block">
<div class="line"><img alt="image9" src="http://mikael.johanssons.org/images/hopengl/polygon.png" /></div>
<div class="line-block">
<div class="line">A Polygon is a filled line loop. Simple as that!</div>
<div class="line">Keyword Polygon</div>
</div>
</div>
<p>There are more things we can do on our canvas than just spreading out
coordinates. Within the command list constructed after a
renderPrimitive, we can give several different commands that control
what things are supposed to look like, so for instance we could use the
following</p>
<pre class="literal-block">
display = do
  clear [ColorBuffer]
  renderPrimitive Quads $ do
    color $ (Color3 (1.0::GLfloat) 0 0)
    vertex $ (Vertex3 (0::GLfloat) 0 0)
    vertex $ (Vertex3 (0::GLfloat) 0.2 0)
    vertex $ (Vertex3 (0.2::GLfloat) 0.2 0)
    vertex $ (Vertex3 (0.2::GLfloat) 0 0)
    color $ (Color3 (0::GLfloat) 1 0)
    vertex $ (Vertex3 (0::GLfloat) 0 0)
    vertex $ (Vertex3 (0::GLfloat) (-0.2) 0)
    vertex $ (Vertex3 (0.2::GLfloat) (-0.2) 0)
    vertex $ (Vertex3 (0.2::GLfloat) 0 0)
    color $ (Color3 (0::GLfloat) 0 1)
    vertex $ (Vertex3 (0::GLfloat) 0 0)
    vertex $ (Vertex3 (0::GLfloat) (-0.2) 0)
    vertex $ (Vertex3 ((-0.2)::GLfloat) (-0.2) 0)
    vertex $ (Vertex3 ((-0.2)::GLfloat) 0 0)
    color $ (Color3 (1::GLfloat) 0 1)
    vertex $ (Vertex3 (0::GLfloat) 0 0)
    vertex $ (Vertex3 (0::GLfloat) 0.2 0)
    vertex $ (Vertex3 ((-0.2)::GLfloat) 0.2 0)
    vertex $ (Vertex3 ((-0.2)::GLfloat) 0 0)
  flush
</pre>
<div class="line-block">
<div class="line">in order to produce these four coloured squares</div>
<div class="line-block">
<div class="line"><img alt="image10" src="http://mikael.johanssons.org/images/hopengl/colorsquares.png" /></div>
<div class="line">where each color command sets the color for the next item drawn, and
the vertex commands give vertices for the four squares.</div>
</div>
</div>
</div>
</div>
<div class="section" id="callbacks-how-we-react-to-changes">
<h2>Callbacks - how we react to changes</h2>
<p>We have already seen one callback in action: displayCallback. The
callbacks are state variables of the HOpenGL system, and are called in
order to handle the various things that may happen to the place where
the drawing canvas lives. For a first exercise, go resize the latest
window you've used. Go on, do it now.</p>
<p>I bet it looked ugly, didn't it?</p>
<p>This is because we have no code handling what to do if the window should
suddenly change. Handling this is done in a callback, residing in the
IORef reshapeCallback. Similarly, repainting is done in
displayCallback, keyboard and mouse input is in keyboardMouseCallback,
and so on. We can refer to the HOpenGL documentation for <a class="reference external" href="http://haskell.org/ghc/docs/latest/html/libraries/GLUT/Graphics-UI-GLUT-Callbacks-Window.html">window
callbacks</a>
and for <a class="reference external" href="http://haskell.org/ghc/docs/latest/html/libraries/GLUT/Graphics-UI-GLUT-Callbacks-Global.html">global
callbacks</a>.
Window callbacks are things like display, keyboard and mouse, and
reshape. Global callbacks deal with timing issues (for those snazzy
animations) and the menu interface systems.</p>
<p>To allow a callback to be left undefined, most are typed as Maybe
values, so a callback can be disabled by setting the state variable to
Nothing. Consequently, setting a callback is done using the constructor
Just. We'll add a callback for reshaping the window to our neat
code, changing the main function to</p>
<pre class="literal-block">
main = do
  (progname, _) <- getArgsAndInitialize
  createWindow "Hello World"
  displayCallback $= display
  reshapeCallback $= Just reshape
  mainLoop

reshape s@(Size w h) = do
  viewport $= (Position 0 0, s)
  postRedisplay Nothing
</pre>
<p>Here, the code for the reshape function resizes the viewport so that our
drawing area contains the entire new window. After setting the new
viewport, it also tells the windowing system that something has happened
to the window, and that therefore, the display function should be
called.</p>
</div>
<div class="section" id="summary">
<h2>Summary</h2>
<p>So, in conclusion, so far we can display a window, post basic callbacks
to get the window handling to run smoothly, and draw in our window. Next
installment of the tutorial will bring you 3d drawing, keyboard and
mouse interactions, the incredible power of matrices and the ability to
rotate 3d objects for your leisure. Possibly, we'll even look into
animations.</p>
</div>
Monthly Report: Conference summer draws to an end2006-09-11T22:07:00+02:00Michitag:blog.mikael.johanssons.org,2006-09-11:monthly-report-conference-summer-draws-to-an-end.html<p>I haven't really been updating much here - and especially not the
category Weekly Report. Slowly, it's time to get around to it.</p>
<p>Now, there is a specific reason updates have been slow: I've been
travelling. A lot. With very varying internet access and even more
varying energy to spare for writing. It all started with two weeks of
vacation in Sweden - spending time with my lovely fiancee, meeting old
friends, and generally relaxing. She proposed (of sorts), and we're
getting married next summer. After the vacation, I went to Leeds, to the
Triangulated Categories workshop, and then back home to Jena - only to
go off again within just over a week, for the First Copenhagen Topology
Conference, tightly followed by a master's class in Morse theory led by
Ralph Cohen, and a simultaneous workshop on Morse theory and string
topology.</p>
<p>The Copenhagen stay ended in neat symmetry, with my fiancee joining me
and the mathematicians in Copenhagen for my birthday celebrations; as I
write this, I have just about gotten back home and my life back in
order.</p>
<p>The time between has seen both the frustration of feeling incapable of
beginning to think about mathematics again and the realization that
thoughts about my research problem show up spontaneously. Not that the
thoughts have led to very much yet; but at least all that time in
lectures I couldn't understand wasn't wasted.</p>
<p>It's growing late. Weekly report updates will resume.</p>
Localisation and ring depth2006-09-05T13:24:00+02:00Michitag:blog.mikael.johanssons.org,2006-09-05:localisation-and-ring-depth.html<p>So, we're back at the point where I'm hesitating whether what I tried to
work out even made sense or not. So I'll do a write up of all the things
I feel certain about asserting, and ask my loyal readership to hunt my
errors for me. :)</p>
<p>Don't laugh. This is less embarrassing for me than asking my advisor
point blank.</p>
<p>We say that a (graded) commutative ring R has depth k if we can find a
sequence of elements [tex]x_1,\dots,x_k[/tex] with [tex]x_1[/tex]
not a zero-divisor, each [tex]x_i[/tex] not a zero-divisor in the
quotient [tex]R/\langle x_1,\dots,x_{i-1}\rangle[/tex] and
[tex]R/\langle x_1,\dots,x_k\rangle[/tex] a ring without non-zero
divisors. This definition, of course, being the first obvious point
where I may have screwed up.</p>
<p>Now, we know (from looking it up in Atiyah-MacDonald), that for SR the
localisation of R in a multiplicatively closed subset S, S(R/I)=SR/SI,
that injections carry over to injections, and that the annihilator over
SR of an element is the localisation of the annihilator of the element.</p>
<p>We now take S=R-p for p a prime ideal in R. So SR is the localisation in
a prime ideal in the usual sense (by slight abuse of notation). Since p
doesn't intersect S, Sp is prime in SR. Suppose we have an R-sequence of
length k, demonstrating the depth of R to be at least k.</p>
<p>The first element, [tex]x_1[/tex] of the R-sequence, is regular in R,
and thus multiplication by this element is an injection [tex]R\to
R[/tex]. Injections carry over to injections after localisation, so this
induces an injection [tex]SR\to SR[/tex], given by [tex]\frac
st\mapsto \frac{x_1s}t[/tex]. Since this is injective, the element
[tex]\frac{x_1}1[/tex] is regular in SR. So we can go to
[tex]SR/\langle\frac{x_1}1\rangle[/tex]. Now, localisation commutes
with quotients, so this is really [tex]S(R/\langle x_1\rangle)[/tex].</p>
<p>Now, [tex]x_2[/tex] is regular in [tex]R/\langle x_1\rangle[/tex],
and thus induces an injective map. This injective map carries over to an
injective map of localisations, and thus, with the same argument as
above, [tex]\frac{x_2}1[/tex] is regular over [tex]S(R/\langle
x_1\rangle)[/tex]. So we can carry over all of our regular sequence
from R to SR.</p>
<p>The only problem visible to me being that we may end up dividing out the
entire ring and ending up with a trivial ring earlier than we thought we
would. What would make that happen? Well, if our ideal in the quotient
at some point includes the ring, we lost the game. What could make this
happen? Then we'd need to have either 1 in the ideal, or 0 in the
localisation set. 0 in the localisation set, we get if a zerodivisor and
its annihilator both end up in there at the same time. I'm not going to
touch this case quite yet. Possibly not at all in this post. 1 in the
ideal, we get if one of the [tex]x_i[/tex] ends up being in S. So, for
everything to be perfectly fine this far, we REALLY want to avoid
straying outside of p.</p>
<p>And this'd, y'know, limit our choices of R-sequences a bit. But still,
we can get an R-sequence the depth of SR this way, I think. And then
there is the question of what happens outside this realm - and it would
seem that we can use the Krull dimension bound for that, thus arriving
at</p>
<p>[tex]\operatorname{depth} R \leq \operatorname{depth} SR + \dim R/p[/tex]</p>
<p>(which incidentally, with a certain set of conditions, is a theorem I
found in a set of notes that <a class="reference external" href="http://therealsqrt.blogspot.com">sqrt</a>
linked me to)</p>
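<p>A quick sanity check of this inequality (my own example, not from the
notes): take [tex]R=k[x,y][/tex] and [tex]p=\langle x\rangle[/tex]. Then
x,y is a regular sequence, so [tex]\operatorname{depth} R=2[/tex]; the
localisation [tex]SR=R_p[/tex] is a discrete valuation ring, so
[tex]\operatorname{depth} SR=1[/tex]; and [tex]R/p\cong k[y][/tex] has
Krull dimension 1. The inequality reads [tex]2\leq 1+1[/tex] - with
equality, as one would hope for a Cohen-Macaulay ring.</p>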
Carry bits and group cohomology2006-07-27T16:05:00+02:00Michitag:blog.mikael.johanssons.org,2006-07-27:carry-bits-and-group-cohomology.html<p>Got treated today to a really nice workout in group cohomology; most of
which is well worth sharing, since seeing it done once gave me a lot of
insight.</p>
<div class="line-block">
<div class="line">So, if we pick [tex]\mathbb Z/10[/tex] and view it as the set
0,1,2,3,4,5,6,7,8,9 and with the group operation given by a*b = a+b %
10, then one standard 2-cocycle is the function</div>
<div class="line-block">
<div class="line">[tex]f(a,b) =
\begin{cases}1&a+b\geq10\\0&\text{else}\end{cases}[/tex]</div>
<div class="line">That this actually does form a cocycle would be the same as requiring</div>
<div class="line">f(b,c)-f(a*b,c)+f(a,b*c)-f(a,b)=0</div>
<div class="line">or regrouped</div>
<div class="line">f(a*b,c)+f(a,b)=f(a,b*c)+f(b,c)</div>
<div class="line">which is to say that the number of carry bits generated when adding
three digits does not depend on associativity.</div>
</div>
</div>
<div class="line-block">
<div class="line">This cocycle classifies the group extension</div>
<div class="line-block">
<div class="line">[tex]0 \to \mathbb Z/10 \to \mathbb Z/100 \to \mathbb Z/10 \to
0[/tex]</div>
<div class="line">with the first map taking [tex]a+10\mathbb Z\mapsto 10a+100\mathbb
Z[/tex] and the second taking [tex]b+100\mathbb Z\mapsto
b+10\mathbb Z[/tex]</div>
</div>
</div>
<div class="line-block">
<div class="line">Now, this is a nontrivial extension - which is equivalent to it not
being a coboundary - by the following calculation:</div>
<div class="line-block">
<div class="line">Suppose f=dg. Then f(a,b)=g(a)+g(b)-g(a*b). So, since f(0,0)=0, we
get g(0)-g(0)+g(0)=0, so g(0)=0. For any b≤8, we also get
0=f(1,b)=g(b)-g(b+1)+g(1), so g(b+1)=g(b)+g(1) and thus by induction,
g(b)=bg(1) for all 0≤b≤9.</div>
<div class="line">But, now, 1=f(1,9)=g(9)-g(0)+g(1)=10g(1)=0, which is a contradiction.</div>
</div>
</div>
<p>Thus f is not a coboundary, and thus has a nontrivial cohomology class
associated to it.</p>
<p>One further useful observation is that f(a,b)=(a+b-a*b)/10.</p>
<p>Note, that if we started out with addition in base B for some arbitrary
B, not one single line of the above would have had anything but the most
trivial updates needed. Thus, this argument holds for any base, and not
only for base 10.</p>
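<p>Both the cocycle identity and the non-coboundary claim are small enough
to machine-check. The following Haskell sketch is my own (the function
names are invented for this post); it verifies the cocycle identity and
the division formula for any base, and brute-forces all 1-cochains g for
a small base to confirm that f is not a coboundary:</p>

```haskell
-- Carry bit for addition in base `base`
carry :: Int -> Int -> Int -> Int
carry base a b = if a + b >= base then 1 else 0

-- Group operation in Z/base
addMod :: Int -> Int -> Int -> Int
addMod base a b = (a + b) `mod` base

-- Cocycle condition: f(a*b,c) + f(a,b) == f(a,b*c) + f(b,c)
isCocycle :: Int -> Bool
isCocycle base = and
  [ carry base (addMod base a b) c + carry base a b
      == carry base a (addMod base b c) + carry base b c
  | a <- [0 .. base-1], b <- [0 .. base-1], c <- [0 .. base-1] ]

-- f(a,b) = (a + b - a*b) / base, as observed above
carryFormula :: Int -> Bool
carryFormula base = and
  [ carry base a b == (a + b - addMod base a b) `div` base
  | a <- [0 .. base-1], b <- [0 .. base-1] ]

-- Brute force: is f = dg for some set map g : Z/base -> Z/base,
-- where dg(a,b) = g(a) + g(b) - g(a*b) mod base?
isCoboundary :: Int -> Bool
isCoboundary base = any matches gs
  where
    gs = sequence (replicate base [0 .. base-1])  -- all set maps, as lists
    matches g = and
      [ carry base a b
          == (g !! a + g !! b - g !! addMod base a b) `mod` base
      | a <- [0 .. base-1], b <- [0 .. base-1] ]
```

<p>Searching all 1-cochains is only feasible for small bases (there are
B<sup>B</sup> of them), but for, say, base 5 the search confirms that no
g has dg equal to the carry cocycle.</p>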
<div class="line-block">
<div class="line">Now, suppose we pick ourselves some g in [tex]H^1(\mathbb
Z/B,\mathbb Z/B)=\operatorname{Hom}(\mathbb Z/B,\mathbb Z/B)[/tex] -
let's even decide to take the identity. So ga=a for any a in our
[tex]\mathbb Z/B[/tex]. Then the Bockstein of this is the image under
the connecting homomorphism under the long exact sequence in
cohomology induced by the short exact sequence</div>
<div class="line-block">
<div class="line">[tex]0 \to \mathbb Z/B \to \mathbb Z/B^2 \to \mathbb Z/B \to
0[/tex]</div>
<div class="line">with [tex]a+B\mathbb Z\mapsto Ba+B^2\mathbb Z[/tex] and
[tex]b+B^2\mathbb Z\mapsto b+B\mathbb Z[/tex] as above.</div>
</div>
</div>
<p>So the class [g] maps first to some set map [tex]\hat g\colon\mathbb
Z/B\to\mathbb Z/B^2[/tex], which we choose to be the one we create by
identifying [tex]\mathbb Z/B[/tex] with the integers 0,1,...,B-1. Thus
the images of g end up living in [tex]\mathbb Z/B^2[/tex] without us
having to do any extra work for it. This map, then, we can throw along
the differential, and then we get a cocycle; so that we know that it
takes values in [tex]B\mathbb Z/B^2\mathbb Z[/tex], so we can "just"
divide by B to get something in the right universe for our Bockstein
image.</p>
<div class="line-block">
<div class="line">Applied to our specific g, note that</div>
<div class="line-block">
<div class="line">[tex]d\hat g(a,b)=\hat g(a)-\hat g(a*b)+\hat g(b)=a+b-a*b[/tex]</div>
<div class="line">and so</div>
<div class="line">[tex]\beta g(a,b)=\frac{a+b-a*b}B[/tex]</div>
<div class="line">which is our carry bit map all over again.</div>
</div>
</div>
<div class="line-block">
<div class="line">If we happen to have a generic g, then we get a similar result; only
without being able to use [tex]\hat ga=ga=a[/tex] as comfortably. We
get, in the end, for a 1-cocycle g:</div>
<div class="line-block">
<div class="line">[tex]\beta g(a,b)=\frac{ga+gb-g(a*b)}B[/tex]</div>
</div>
</div>
<p>I wanted to talk about Bocksteins and the [tex]d_2[/tex] differential
in the LHS spectral sequence at this point, but I'm no longer certain of
what these, if anything, have to do with each other, so I won't.</p>
Todo and accomplishments2006-07-24T14:55:00+02:00Michitag:blog.mikael.johanssons.org,2006-07-24:todo-and-accomplishments.html<p>Accomplished:</p>
<ul class="simple">
<li>I am done with the coursework for the past semester. Sent off the
TeXed up solution sheets to the webmaster today.</li>
<li>My pattern observation seems to hold up surprisingly well - there
seems to be a theorem to fetch out there somewhere.</li>
<li>I have done most of the dishes. Go me!</li>
</ul>
<p>Todo:</p>
<ul class="simple">
<li>FOCUS! I need to learn triangulated categories. Preferably now. I
need to stop playing with Haskell and reading up on group cohomology
calculations. Preferably now.</li>
<li>After triangulated categories, there's a wealth of things to look
into. High priority are spectral sequences, further group cohomology,
diamond lemma, path algebra quotients, A<sub>∞</sub>, Massey products,
return to model categories. Which order these are done might
influence the contents of my PhD thesis significantly.</li>
<li>I really need to talk to the university sysadmins about the WLan
network.</li>
</ul>
<p>Progress may be found. Just around the corner. I need a donut.</p>
Weekly Report: Back up again2006-07-23T10:54:00+02:00Michitag:blog.mikael.johanssons.org,2006-07-23:weekly-report-back-up-again.html<p>The weekly reports have been dead for a while. Reason? The blog has been
dead for a while.</p>
<div class="section" id="hardware-woes">
<h2>Hardware woes</h2>
<p>The old computer running this website had some problem all of a sudden
about 3 weeks ago. These problems appeared as a complete lockdown of the
system - no response to anything. So my brother - with me on the other
side of a telephone, tried to reboot the box; but couldn't get it back
up online again. He was headed out to a LARP anyway within hours - and
so couldn't really do much more about it.</p>
<p>Right.</p>
<p>End result? I joined forces with a good friend of mine; we split
hardware costs for a slick new box - an Asus barebone box with a 64bit
processor and a gig of RAM. It received the harddrive and network
interface from the old box, and with that was good to go - except that
the processor architecture had changed; and so for optimal performance,
it'd be a nice idea to actually use a new system install that took
advantage of the extra available bitwidth.</p>
<p>A completely failed OpenBSD installation later, we settled for Gentoo -
which now is humming along nicely. It even turned out to be surprisingly
easy to restore the old data on the new box.</p>
<p>There. I won't blather more about my hardware woes - but instead poke
about other themes for a while.</p>
</div>
<div class="section" id="party-party-party">
<h2>Party party party</h2>
<p>My students here for the course rock. They REALLY rock. We had the course
evaluations a couple of weeks ago - and apparently my exercise sessions
have been gooooood. At least the students thought so. I average in the
upper fifth of possible ratings in every single evaluation area. All of
them. I kick ass, yo!</p>
<p>Compared - both to the faculty mean, and to the workgroup results - I
outperform in all evaluated areas.</p>
<p>And, in addition to this cause for celebration, the end of the school
term (last week) comes with all manners of parties, celebrations, meets,
and activities. I think I had my first non-planned evening in about 2-3
weeks yesterday.</p>
</div>
<div class="section" id="summer-in-the-city">
<h2>Summer in the city</h2>
<p>The Jena climate, though, leaves a bit to be desired. I sincerely long to
get back to Sweden during the hottest summer weeks we have up there, in
order for me to cool down a bit. We have had 25°+ since beginning of
June (i.e. about 2 months of active time with weekly lectures, seminars
and exercise classes - in tropical bloody heat); and the last about 2
weeks or so, the temperatures have been in the upper 30°ies.</p>
<p>Horrible. Horrible, I tell you.</p>
<p>And of course, this heatwave has coincided with the World Cup, the Th</p>
</div>
Triangulated categories2006-07-17T15:47:00+02:00Michitag:blog.mikael.johanssons.org,2006-07-17:triangulated-categories.html<p>One predominant tendency in the algebra/category theory camp is to seek
out the minimal set of conditions needed to be able to perform a certain
technique, and then codifying this into a specific axiomatic system.
Thus, you only need to verify the axioms later on in order to get
everything else for free.</p>
<p>One such system is the theory of <em>triangulated categories</em>. This pops up
in homological algebra; where you like to work with Tor and Ext - both
of which turn out to be derived functors, generalizing the tensor
product and the homomorphism set respectively. With the construction of
the derived category, we can find a category, in which the tensor
product in that category is our Tor, and the hom sets are our Ext.</p>
<p>Once that entire yoga is worked through, you could start backtracking,
and pulling out all properties you used. Minimizing this set of
properties in one specific way leads to the concept of a triangulated
category, and in this post, I intend to retrace, using the definition of
a triangulated category, and looking more specifically on the case of
the derived category of the category of chain complexes of modules over
a fixed ring [tex]R[/tex] for a canonical example.</p>
<div class="section" id="triangulated-category">
<h2>Triangulated category</h2>
<p>Weibel gives a definition, due to Verdier, of a triangulated category.
It is an additive category - meaning that each Hom(A,B) is an abelian
group, and the group operation distributes over composition of morphisms
- equipped with an automorphism T called the translation functor, and
with a distinguished family of triangles (u,v,w) - by which we mean
[tex]u\colon A\to B[/tex], [tex]v\colon B\to C[/tex] and
[tex]w\colon C\to TA[/tex]. We further expect these to fulfill the
following axioms</p>
<ol class="arabic">
<li><p class="first">Every morphism u can be embedded into an exact triangle (u,v,w).
Furthermore, (1,0,0) is exact and exactness is closed under
isomorphisms of triangles.</p>
</li>
<li><p class="first">If (u,v,w) is an exact triangle, then so is (v,w,-Tu) and
(-T<sup>-1</sup>w,u,v).</p>
</li>
<li><p class="first">If (u,v,w) and (u',v',w') are exact triangles with f and g such that
gu=u'f, then there is a morphism h such that (f,g,h) is a morphism of
triangles. In clear text, this means that the following diagram has
the dotted arrow, such that everything commutes:</p>
<p>[tex]\begin{diagram}
A &\rTo^u& B &\rTo^v& C &\rTo^w& TA \\
\dTo^f && \dTo^g && \dDotsto^h && \dTo^{Tf} \\
A' &\rTo^{u'}& B' &\rTo^{v'}& C' &\rTo^{w'}& TA'
\end{diagram}[/tex]</p>
<p>Note that the leftmost square commutes by the condition on f and g.</p>
</li>
<li><p class="first">Suppose we have exact triangles through the triples A,B,C', A,C,B'
and B,C,A'. Then these determine an exact triangle on C',B',A'. This
is, due to one of the ways to visualize it, called the octahedral
axiom.</p>
</li>
</ol>
<div class="line-block">
<div class="line">Now, for an exact triangle (u,v,w), we can form the following diagram</div>
<div class="line-block">
<div class="line">[tex]\begin{diagram}</div>
<div class="line">A &\rTo^1& A &\rTo^0& 0 &\rTo^0& TA \\</div>
<div class="line">\dEq && \dTo^u && \dDotsto && \dEq \\</div>
<div class="line">A &\rTo^u& B &\rTo^v& C &\rTo^w& TA</div>
<div class="line">\end{diagram}[/tex]</div>
<div class="line">where the morphism [tex]0\to C[/tex] exists due to the axiom (3).
Thus, since the diagram commutes, vu=0.</div>
<div class="line">By forming similar diagrams for all the rotations, we also get wv=0
and (Tu)w=0. Thus, exact triangles behave as expected, if we view them
as representatives for long exact sequences.</div>
</div>
</div>
</div>
<div class="section" id="chain-complexes-form-a-triangulated-category">
<h2>Chain complexes form a triangulated category</h2>
<p>Proposition: For the category of chain complexes of R-modules, with
homotopy equivalence classes of chain maps as morphisms, we have a
triangulated category structure.</p>
<div class="line-block">
<div class="line">Proof:</div>
<div class="line-block">
<div class="line">We set TA=A[-1], so that TA<sup>i</sup>=A<sup>i+1</sup>. We further
call a triangle (u,v,w) exact if it is isomorphic to a triangle
(u',v',d) on the complex triple (A',B',cone(u')). That is, the exact
triangles are (up to isomorphism) the triangles that go through a
mapping cone.</div>
</div>
</div>
<div class="section" id="digression-the-cone-of-a-chain-map">
<h3>Digression: The cone of a chain map</h3>
<p>Suppose [tex]f\colon B^*\to C^*[/tex] is a chain map. We define the
mapping cone cone(f) as the complex [tex]B[-1]\oplus C[/tex], with
degree n part [tex]B^{n+1}\oplus C^n[/tex] and with differential
[tex]d(b,c)=(-d(b),d(c)-f(b))[/tex].</p>
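<p>As a quick check (my addition, not part of the original argument) that
this really squares to zero: applying d twice, and using that f is a
chain map so that fd=df,</p>
<div class="line-block">
<div class="line">[tex]d(d(b,c))=d(-d(b),d(c)-f(b))=(dd(b),\;f(d(b))-d(f(b)))=(0,0)[/tex]</div>
</div>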
<div class="line-block">
<div class="line">Claim: The cone of the identity map cone(1) is split exact, with
splitting map [tex]s(b,c)=(-c,0)[/tex].</div>
<div class="line-block">
<div class="line">Proof: In order to verify this claim, we need to check that dsd=d and
that the complex is exact. For exactness, note that</div>
<div class="line">[tex]\ker d=\{(c_1,c_2):c_1\in C^{n+1}, c_2\in C^{n},
dc_1=0, dc_2-c_1=0\}[/tex]</div>
<div class="line">and thus it is the set of pairs (dc,c) for c in C<sup>n</sup></div>
<div class="line">[tex]\operator{im} d=\{(-dc_1,dc_2-c_1):c_1\in C^{n+1},
c_2\in C^n\}[/tex]</div>
<div class="line">so this is the set of pairs (-dc,de-c); since
-dc=d(de-c), every such pair lies in the kernel as described, and
conversely any pair (dc,c) arises by choosing c_1=-c and c_2=0. So
these sets are equal, and thus the cone is exact.</div>
<div class="line">For split exactness, we calculate</div>
<div class="line">dsd(c,c')=ds(-dc,dc'-c)=d(c-dc',0)=(-dc,dc'-c)=d(c,c')</div>
<div class="line">and thus everything behaves as it should, and we can conclude that
cone(1) is split exact.</div>
</div>
</div>
<div class="line-block">
<div class="line">Claim: [tex]f\colon C^*\to D^*[/tex] is null-homotopic if and only
if f extends to a map [tex](-s,f)\colon\operatorname{cone}(1_C)\to
D[/tex].</div>
<div class="line-block">
<div class="line">Proof: Suppose, first, that f is null-homotopic. Then there is some
[tex]s\colon C^n\to D^{n-1}[/tex] such that f=ds+sd. Then the map
(c,c')\mapsto f(c')-sc is a chain map
[tex]\operatorname{cone}(1_C)\to D[/tex] which restricts to f on
the second summand [tex]\{0\}\oplus C[/tex]. This holds, because
[tex](c,c')\mapsto f(c')-sc\mapsto df(c')-dsc[/tex] and
[tex](c,c')\mapsto(-dc,dc'-c)\mapsto sdc+fd(c')-fc[/tex] will be the
two corresponding paths through the corresponding commutative square.
Now, it remains to show that df(c')-dsc=sdc-f(c)+fd(c'). But fd=df
since f is a chain map, and sd+ds=f implies that -ds=sd-f, so
-dsc=sdc-f(c). This shows one direction.</div>
</div>
</div>
<p>For the other direction, suppose we can extend f to a chain map
[tex](c,c')\mapsto f(c')-sc[/tex]. Then the calculation above shows
that thus df(c')-fd(c')+f(c)-dsc-sdc=0. But f still is a chain map, so
the df(c')-fd(c') portions disappears. And the remaining equation says
that for any c in C, we need f(c)=(ds+sd)(c). Thus the s used in
extending f to a chain map from the cone is precisely a contracting
homotopy.</p>
<div class="line-block">
<div class="line">Given a map [tex]A\overset{u}{\to}B[/tex], we can construct a short
exact sequence</div>
<div class="line-block">
<div class="line">[tex]0\to B\to\operatorname{cone}(u)\to A[-1]\to 0[/tex]</div>
<div class="line">with the maps [tex]b\mapsto (0,b)[/tex] and [tex](a,b)\mapsto
-a[/tex]. This is a s.e.s. of chain complexes, since the calculations
[tex]b\mapsto db\mapsto (0,db)[/tex] and [tex]b\mapsto
(0,b)\mapsto(0,db)[/tex], for the first map and
[tex](a,b)\mapsto(-da,db-ua)\mapsto da[/tex] and [tex](a,b)\mapsto
-a\mapsto da[/tex] for the second map show that all relevant squares
commute. Remember, thereby, that the differential on A[-1] gains a
sign, due to the shift.</div>
</div>
</div>
</div>
<div class="section" id="return-to-the-triangles">
<h3>Return to the triangles</h3>
<p>This last section gives the prototype for our exact triangles. Anything
isomorphic to this is exact. Suppose [tex]u\colon A\to B[/tex] is a
chain morphism. Then this fits into the triangle
[tex](u,v,\delta)[/tex] for [tex]v\colon b\mapsto (0,b)[/tex] and
[tex]\delta\colon(a,b)\mapsto-a[/tex]. Furthermore, since the mapping
cone of the identity map is split exact, this splitting map gives a null
homotopy of any map to that cone. So the cone is isomorphic to the zero
object in this category, and thus the identity map has the required
embedding. By defining exact triangles as all triangles isomorphic to a
triangle with a cone, it's clear that isomorphism of triangles preserves
exactness.</p>
<p>Rotation of triangles follows after working out that the maps appearing
when matching the rotation to what it should be are chain homotopy
equivalences. I'm not going to do this here.</p>
<p>The existence of morphisms follows since the construction of a mapping
cone is natural.</p>
<p>And right about now, I'm slowly growing bored by the verification of
axioms. The octahedral axiom seems (by a quick readthrough) to be a
diagram chase to find the right homotopies and the right maps to get to
the end.</p>
</div>
</div>
<div class="section" id="the-funky-stuff">
<h2>The funky stuff</h2>
<p>So, what are these things good for? The first meatier construction we'll
see is that of a long exact sequence in cohomology. For a triangulated
category <strong>K</strong> and an abelian category <em>A</em>, we say that an additive
functor H from <strong>K</strong> to <em>A</em> is a cohomological functor whenever all
exact triangles have long exact sequences in cohomology. The cohomology
functor is the canonical example.</p>
<p>Having this machinery, we will - in future posts - strive to construct a
<em>derived category</em>, in which the quasi-isomorphisms are inverted by a
localisation process; and thus the objects are isomorphic when they have
induced isomorphisms on cohomology. Work in the derived category is very
reminiscent of work with derived functors - with Ext and Tor.</p>
</div>
Group cohomology revisited: Quaternionic unit group2006-06-20T09:54:00+02:00Michitag:blog.mikael.johanssons.org,2006-06-20:group-cohomology-revisited-quaternionic-unit-group.html<p>In a <a class="reference external" href="http://blog.mikael.johanssons.org/archives/2006/06/more-group-cohomology/">previous
installment</a>,
we calculated [tex]H^*(D_8)[/tex] with some amount of success. For
that post, I said that I was going to calculate the cohomologies of
[tex]D_8[/tex] and of [tex]U(\mathbb H)[/tex] by hand - and I've been
at it for the latter group since then. With some help from my advisor -
mainly with executing the obvious algorithms far enough that I get
decent material to work with - I now have it.</p>
<div class="section" id="the-group">
<h2>The group</h2>
<p>So, for starters, we need a presentation of [tex]\mathbb H[/tex] such
that we can work well with it. We all know that [tex]\mathbb H=\langle
i,j,k|ij=-k, jk=-i, ki=-j, ijk=i^2=j^2=k^2=-1\rangle[/tex]. So due to
ij=-k and [tex]i^2=-1[/tex], we can just pick any two of the i,j,k and
call them x and y. Then [tex]x^4=e[/tex], [tex]y^2=x^2[/tex] and
iji=ik=j so xyx=y. This gives us the presentation [tex]\mathbb
H=\langle x,y|x^4,y^2=x^2,xyx=y\rangle[/tex]</p>
</div>
<div class="section" id="the-algebra">
<h2>The algebra</h2>
<div class="line-block">
<div class="line">With a presentation of the group, we need a good presentation of the
group algebra. We set X=x+1, Y=y+1. Then the relations turn into
[tex]X^4=0[/tex], [tex]X^2+Y^2=0[/tex] and if we want to express xyx+y
with X and Y, we can start by calculating XYX=xyx+xy+x^2+yx+y+1.
Adding to this XY, YX and [tex]X^2[/tex] we get
[tex]XYX+XY+YX+X^2=xyx+y[/tex]. With a term ordering that prefers Y to
X and works lexicographically within equal degree, we thus get the basis
[tex]X^4[/tex], [tex]Y^2+X^2[/tex], [tex]YX+XY+X^2+XYX[/tex]. It can
be verified, as well, that any word of length 5 vanishes under these
relations - and by reducing the XYX term in the last relation
repeatedly, we end up with the presentation</div>
<div class="line-block">
<div class="line">[tex]R=\mathbb F_2\mathbb H=\mathbb F_2\langle
X,Y\rangle/\langle X^4,Y^2+X^2,YX+XY+X^2+X^2Y+X^3+X^3Y\rangle[/tex]</div>
<div class="line">with a basis, as vector space, consisting of [tex]1, X, Y, XY, X^2,
X^2Y, X^3, X^3Y[/tex].</div>
</div>
</div>
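<p>The claimed relations can also be confirmed by brute force in the group algebra itself. The following sketch is my own: it models an element of [tex]R[/tex] as a set of quaternion units (in characteristic 2, addition is symmetric difference) and checks that all three relations vanish.</p>

```python
# Hypothetical brute-force check of the relations X^4, Y^2 + X^2 and
# YX + XY + X^2 + X^2Y + X^3 + X^3Y in F_2[Q_8].  An algebra element is
# a frozenset of quaternion units; coefficients live in F_2, so a sum
# is a symmetric difference of sets.

def qmul(p, q):
    """Hamilton product of quaternion 4-tuples (a, b, c, d)."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def mul(A, B):
    """Product in F_2[Q_8]."""
    out = set()
    for g in A:
        for h in B:
            out ^= {qmul(g, h)}
    return frozenset(out)

def add(*terms):
    """Sum in F_2[Q_8]: symmetric difference of the term sets."""
    out = set()
    for t in terms:
        out ^= t
    return frozenset(out)

one = (1, 0, 0, 0)
X = frozenset([(0, 1, 0, 0), one])   # X = x + 1 with x = i
Y = frozenset([(0, 0, 1, 0), one])   # Y = y + 1 with y = j

zero = frozenset()
X2 = mul(X, X)
X3 = mul(X2, X)
assert mul(X2, X2) == zero                      # X^4 = 0
assert add(mul(Y, Y), X2) == zero               # Y^2 + X^2 = 0
assert add(mul(Y, X), mul(X, Y), X2,
           mul(X2, Y), X3, mul(X3, Y)) == zero  # the reduced relation
print("all relations vanish in the group algebra")
```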
</div>
<div class="section" id="the-resolution">
<h2>The resolution</h2>
<div class="line-block">
<div class="line">Calculating a resolution is, as always, a tedious matter of Gröbner
basis eliminations in order to find minimal kernel generators. The
first step is easy - the augmentation map has a kernel generated by X
and Y. So, the next step is to calculate the kernel of the map</div>
<div class="line-block">
<div class="line">[tex]\begin{diagram}R^2 &\rTo^{\begin{pmatrix}X &
Y\end{pmatrix}}& R\end{diagram}[/tex]</div>
</div>
</div>
<div class="line-block">
<div class="line">This is, formulated as a Gröbner basis problem of modules, the same as
eliminating [tex]e_0[/tex] from</div>
<div class="line-block">
<div class="line">[tex]e_0X+e_1[/tex]</div>
<div class="line">[tex]e_0Y+e_2[/tex]</div>
<div class="line">[tex]e_0X^4[/tex]</div>
<div class="line">[tex]e_0Y^2+e_0X^2[/tex]</div>
<div class="line">[tex]e_0(YX+XY+X^2+XYX)[/tex]</div>
</div>
</div>
<div class="line-block">
<div class="line">The key ingredient here is discovering that of the three obvious
resulting relations</div>
<div class="line-block">
<div class="line">[tex]e_1X^3[/tex]</div>
<div class="line">[tex]e_1X+e_2Y[/tex]</div>
<div class="line">[tex]e_1(Y+X+YX)+e_2X[/tex]</div>
<div class="line">The first can be expressed in terms of the other two. So the actual
map of this step of the resolution ends up being</div>
<div class="line">[tex]\begin{diagram}R^2 &\rTo^{\begin{pmatrix}X &
Y+X+YX\\Y&X\end{pmatrix}}& R^2\end{diagram}[/tex]</div>
</div>
</div>
<div class="line-block">
<div class="line">For the next step, we calculate an eliminating Gröbner basis of</div>
<div class="line-block">
<div class="line">[tex]e_1X+e_2Y+e_3[/tex]</div>
<div class="line">[tex]e_1(Y+X+YX)+e_2X+e_4[/tex]</div>
<div class="line">with the ring relations in mind.</div>
<div class="line">The additional relation [tex]e_2X^3Y+e_3X^3[/tex] is easy enough to
find; then, after a long and tedious calculation - starting off by
multiplying the [tex]e_4[/tex]-relation with X and then repeatedly
eliminating leading terms with the [tex]e_3[/tex]-relation - we end
up with</div>
<div class="line">[tex]e_2X^2Y+e_3(Y+X^2+X^2Y)+e_4X[/tex]</div>
<div class="line">This relation, multiplied with X and eliminated against
[tex]e_2X^3Y+e_3X^3[/tex], gives us a first kernel element</div>
<div class="line">[tex]e_3(XY+X^2+X^2Y+X^3)+e_4X^2[/tex]</div>
<div class="line">If we keep going - multiplying instead the [tex]e_4[/tex]-relation
with Y, and then eliminating everything in sight - we can in the end
find another kernel element:</div>
<div class="line">[tex]e_3(X+XY)+e_4(X+Y)[/tex]</div>
<div class="line">and by eliminating between these two kernel elements, we discover
that this second one is a Gröbner basis for the kernel on its own.</div>
</div>
</div>
<p>(A note for those unfamiliar with this particular algorithm: the
surviving [tex]e_3[/tex] and [tex]e_4[/tex] terms end up being a kind
of log of which operations, in what order, you may want to perform to
get rid of everything with [tex]e_1[/tex] or [tex]e_2[/tex] in
them...)</p>
<div class="line-block">
<div class="line">So we have our next map</div>
<div class="line">[tex]\begin{diagram}R&\rTo^{\begin{pmatrix}X+XY\\X+Y\end{pmatrix}}&R^2\end{diagram}[/tex]</div>
</div>
<div class="line-block">
<div class="line">Now, if we allow ourselves a tiny bit of cheating, we can conclude
from Betti number considerations (the resolution we're hunting has
the Betti numbers 1,2,2,1,1,2,2,1,1,2,2,...) and from dimension
counts that our next kernel should be one-dimensional. Thus, as soon
as we have some element of the kernel, we're done. A quick
calculation reveals that [tex]X^3Y[/tex] kills everything in sight,
and thus in particular our map. So we get</div>
<div class="line">[tex]\begin{diagram}R&\rTo^{\begin{pmatrix}X^3Y\end{pmatrix}}&R\end{diagram}[/tex]</div>
</div>
<p>And finally, this kernel, of course, is generated by X and Y, so we're
back where we started in degree 5. This shows immediately that the
resolution has period 4; which will be of some assistance later on.</p>
<p>This part, being boring and automatable, is a good point from which
to move forward.</p>
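<p>One mechanical consistency check on the resolution is that consecutive differentials compose to zero. The sketch below is my own illustration; it multiplies the matrix entries out in the group algebra (with x=i, y=j under the standard Hamilton convention) and confirms this at every step of the period.</p>

```python
# Hypothetical spot check that consecutive differentials in the
# resolution compose to zero, computing in F_2[Q_8].  Algebra elements
# are frozensets of quaternion units; addition is symmetric difference.

def qmul(p, q):
    """Hamilton product of quaternion 4-tuples."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def mul(A, B):
    out = set()
    for g in A:
        for h in B:
            out ^= {qmul(g, h)}
    return frozenset(out)

def add(*terms):
    out = set()
    for t in terms:
        out ^= t
    return frozenset(out)

one = (1, 0, 0, 0)
X = frozenset([(0, 1, 0, 0), one])       # x + 1
Y = frozenset([(0, 0, 1, 0), one])       # y + 1
zero = frozenset()

r = add(Y, X, mul(Y, X))                 # Y + X + YX
u0, u1 = add(X, mul(X, Y)), add(X, Y)    # the column (X + XY, X + Y)
X3Y = mul(mul(mul(X, X), X), Y)          # X^3 Y

# the row (X  Y) after the 2x2 matrix:
assert add(mul(X, X), mul(Y, Y)) == zero
assert add(mul(X, r), mul(Y, X)) == zero
# the 2x2 matrix after the column (X + XY, X + Y):
assert add(mul(X, u0), mul(r, u1)) == zero
assert add(mul(Y, u0), mul(X, u1)) == zero
# the column after X^3Y, and X^3Y after the restart (X  Y):
assert mul(u0, X3Y) == zero and mul(u1, X3Y) == zero
assert mul(X3Y, X) == zero and mul(X3Y, Y) == zero
print("d o d = 0 at every step of the period")
```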
</div>
<div class="section" id="the-chain-maps-and-cocycles">
<h2>The chain maps and cocycles</h2>
<p>Since the resolution is minimal (trust me, it is), we can just throw out
cocycles at will. So we recognize that since the first map goes from an
R^2, we have two cocycles in degree 1. Let's call them [tex]\xi[/tex]
and [tex]\eta[/tex], and let [tex]\xi=\epsilon^1_1[/tex] and
[tex]\eta=\epsilon^1_2[/tex], where [tex]\epsilon^i_j[/tex] means
the augmentation map on the jth component of the ith degree module.</p>
<p>So, to calculate all possible products of degree 1 cocycles, we lift
[tex]\xi[/tex] and [tex]\eta[/tex] to chain maps. Or at least the
beginning of one.</p>
<div class="line-block">
<div class="line">So, take the cocycle [tex]a\xi+b\eta[/tex], with
[tex]a,b\in\{0,1\}[/tex]. Then the chain map lifting starts out</div>
<div class="line-block">
<div class="line">[tex]\begin{diagram}</div>
<div class="line">\rTo & R^2 & \rTo^{\begin{pmatrix}X &
Y+X+YX\\Y&X\end{pmatrix}}& R^2 & \rTo \\</div>
<div class="line">& \dTo^\psi && \dTo^{\begin{pmatrix}a & b\end{pmatrix}} & \rdTo
\\</div>
<div class="line">\rTo & R^2 & \rTo_{\begin{pmatrix}X&Y\end{pmatrix}} & R & \rTo
& \mathbb F_2 &\rTo& 0</div>
<div class="line">\end{diagram}[/tex]</div>
</div>
</div>
<div class="line-block">
<div class="line">For the next function, we calculate the composition map in two ways.
Set
[tex]\psi=\begin{pmatrix}\alpha&\beta\\\gamma&\delta\end{pmatrix}[/tex].
Then</div>
<div class="line-block">
<div class="line">[tex]\partial\psi=\begin{pmatrix}X\alpha+Y\gamma & X\beta +
Y\delta\end{pmatrix}[/tex]</div>
<div class="line">and [tex]</div>
<div class="line">\begin{pmatrix}a & b\end{pmatrix}\partial=\begin{pmatrix}aX+bY &
aY+aX+aYX+bX\end{pmatrix}[/tex]</div>
<div class="line">Equality of these matrices gives us</div>
<div class="line">[tex]\psi=\begin{pmatrix}a & a+b\\b & a+aX\end{pmatrix}[/tex]</div>
</div>
</div>
<div class="line-block">
<div class="line">Thus we have the maps</div>
<div class="line">[tex]\xi^2:(\alpha\quad\beta)\overset{\psi_\xi}{\mapsto}(\alpha+\beta\quad\beta+\beta X)\overset{\xi}{\mapsto}\epsilon(\alpha+\beta)[/tex]</div>
<div class="line-block">
<div class="line">and thus</div>
<div class="line">[tex]\xi^2=\epsilon_1^2+\epsilon_2^2[/tex]</div>
</div>
</div>
<div class="line-block">
<div class="line">[tex]\xi\eta=\eta\xi:(\alpha\quad\beta)\mapsto(\alpha+\beta\quad\beta+\beta X)\overset{\eta}{\mapsto}\epsilon(\beta)[/tex]</div>
<div class="line-block">
<div class="line">and thus</div>
<div class="line">[tex]\xi\eta=\eta\xi=\epsilon_2^2[/tex]</div>
</div>
</div>
<div class="line-block">
<div class="line">[tex]\eta^2:(\alpha\quad\beta)\mapsto(\beta\quad\alpha)\overset{\eta}{\mapsto}\epsilon(\alpha)[/tex]</div>
<div class="line-block">
<div class="line">and thus</div>
<div class="line">[tex]\eta^2=\epsilon_1^2[/tex]</div>
<div class="line">From this, we can read off a relation:
[tex]\xi^2=\xi\eta+\eta^2[/tex], or written as a relator,
[tex]\xi^2+\xi\eta+\eta^2=0[/tex]. Letting [tex]\xi[/tex] be more
significant than [tex]\eta[/tex], we now know that the only
interesting cocycles in higher degrees will be [tex]\xi\eta^n[/tex]
and [tex]\eta^n[/tex].</div>
</div>
</div>
<p>Thus, in degree 3, we know we have only one cocycle, and we can expect
to find [tex]\eta^3[/tex] and [tex]\xi\eta^2[/tex] there. So, let's
lift [tex]\eta^2[/tex] and [tex]\xi\eta+\eta^2[/tex] to chain maps,
and then see what happens when we compose this with [tex]\xi[/tex] and
with [tex]\eta[/tex].</p>
<div class="line-block">
<div class="line">So we start with the diagram</div>
<div class="line-block">
<div class="line">[tex]\begin{diagram}</div>
<div class="line">\rTo & R & \rTo^{\begin{pmatrix}X+Y+XY\\X+Y\end{pmatrix}} & R^2
& \rTo \\</div>
<div class="line">& \dTo^\psi && \dTo^{\begin{pmatrix}a&b\end{pmatrix}} &
\rdTo\\</div>
<div class="line">\rTo & R^2 & \rTo_{\begin{pmatrix}X&Y\end{pmatrix}}&
R&\rTo&\mathbb F_2&\rTo&0</div>
<div class="line">\end{diagram}[/tex]</div>
</div>
</div>
<p>We set [tex]\psi=\begin{pmatrix}\alpha\\\beta\end{pmatrix}[/tex]</p>
<div class="line-block">
<div class="line">Now, the two compositions end up being</div>
<div class="line">[tex]\begin{pmatrix}a&b\end{pmatrix}\partial=\begin{pmatrix}a(X+Y+XY)+b(X+Y)\end{pmatrix}[/tex]</div>
<div class="line">[tex]\partial\psi=\begin{pmatrix}X\alpha+Y\beta\end{pmatrix}[/tex]</div>
<div class="line-block">
<div class="line">and from this we can read off</div>
<div class="line">[tex]\alpha=a(1+Y)+b[/tex] and [tex]\beta=a+b[/tex]</div>
</div>
</div>
<div class="line-block">
<div class="line">Thus, lifting [tex]\eta^2+\xi\eta[/tex] (that is, taking
[tex](a\quad b)=(1\quad 1)[/tex]), we get the relation</div>
<div class="line-block">
<div class="line">[tex]\xi^2\eta+\xi\eta^2:
a\mapsto\begin{pmatrix}a+aY+a\\a+a\end{pmatrix}\mapsto 0[/tex]</div>
<div class="line">So [tex]\xi\eta^2=\xi^2\eta[/tex].</div>
<div class="line">But then [tex]\xi^2\eta=\xi\eta^2+\eta^3[/tex], since
[tex]\xi^2=\xi\eta+\eta^2[/tex]; so
[tex]\xi\eta^2=\xi^2\eta=\xi\eta^2+\eta^3[/tex]. Thus
[tex]\eta^3=0[/tex]. Now, the only relevant basis elements in degree
4 are, due to the relation [tex]\xi^2=\xi\eta+\eta^2[/tex],
[tex]\xi\eta^3[/tex] and [tex]\eta^4[/tex]. Since
[tex]\eta^3=0[/tex], both of these vanish, and so the single basis
element in degree 4 has to be a new algebra generator:
[tex]\zeta=\epsilon^4_1[/tex].</div>
</div>
</div>
<div class="line-block">
<div class="line">Now, due to the periodicity and the degrees of the generators, this is
all we need. The homogeneous components repeat with a period of four,
and the generators for each slice will be</div>
<div class="line-block">
<div class="line">[tex]\zeta^n[/tex]</div>
<div class="line">[tex]\xi\zeta^n, \eta\zeta^n[/tex]</div>
<div class="line">[tex]\xi\eta\zeta^n, \eta^2\zeta^n[/tex]</div>
<div class="line">[tex]\xi\eta^2\zeta^n[/tex]</div>
<div class="line">[tex]\zeta^{n+1}[/tex]</div>
<div class="line">...</div>
</div>
</div>
<div class="line-block">
<div class="line">Thus, we have the group cohomology</div>
<div class="line-block">
<div class="line">[tex]H^*(U(\mathbb H),\mathbb
F_2)=k[\xi^1,\eta^1,\zeta^4]/\langle
\xi^2+\xi\eta+\eta^2,\xi\eta^2+\xi^2\eta\rangle[/tex]</div>
</div>
</div>
</div>
Admin: Wordpress 2.02006-06-19T15:13:00+02:00Michitag:blog.mikael.johanssons.org,2006-06-19:admin-wordpress-20.html<p>This blog just migrated to WP 2.0. Should anything be odd, please notify
me.</p>
<p>The migration was to a large portion motivated by commenting problems
I've been told about. I hope that my readers out there will be able to
comment now; possibly even without logging in!</p>
Weekly Report: Power tourism2006-06-18T11:15:00+02:00Michitag:blog.mikael.johanssons.org,2006-06-18:weekly-report-power-tourism.html<p>It's been a while since I managed to write one of these. The reason is
simple enough - my weekends have been packed; and I don't get around to
it during the weeks.</p>
<p>During the last three weekends, first my parents and my brother, and
then for the last two and the week in between my fiancée, have been
visiting me in Jena. Thus, I have covered more ground in these three
weeks when it comes to tourism than I probably will be able to do in
several months. I have seen the Blue Man Group in Berlin (WOW!), I have
seen the Dornburg, the Feengrotten and Weimar. I have eaten at the
expensive luxurious restaurant at the top of the old university tower in
Jena (it's bloody scary, but quite cool - the restaurant is on the 29th
floor; in a city where only one single house goes above 10 floors).</p>
<p>The future is taking a more determined shape as well: I'm going to
Sweden for a two-week vacation in the beginning of August, and then I'm
going to Leeds for a workshop on triangulated categories. The Leeds-trip
is paid for by the university, since my advisor still thinks that it's
bloody important that I end up being fluent in the categorifications of
homological algebra; and because the participant list was both cool and
a good place to make connections. I will be meeting a few old friends
there, as well as quite a few people I've only read about (Avramov - the
'cause' of my master's thesis - is coming!). All in all, I'm very much
looking forward to this. It'll be a blast.</p>
<p>It's still not very clear what I'll actually be doing for my PhD. So
far, reading up on the bloody stuff and get a solid grip on it is work
enough, so to speak. But I may end up trying to find a result along the
lines of "For almost all groups [tex]G[/tex], the depth of
[tex]H^*(G,\mathbb F_p)[/tex] is equal to the p-rank of
[tex]Z(G)[/tex]" for some sensible meaning to 'almost all'. Another
possible direction was suggested to me at a department dinner last
tuesday: The algorithms in place for efficient calculation of modular
group cohomology can probably be "lifted" to algorithms to work with
path algebras; and this may actually end up being relevant for some
ring-theoretical statements. This may also be a good direction to go off
to - I don't know.</p>
<p>For a completely different tune: John Baez has an absolutely <a class="reference external" href="http://math.ucr.edu/home/baez/week234.html">marvelous
last week's find</a>. He
talks about mathematics and music; more specifically about the
application of group theory to modern (i.e. post-tonal) music theory.
The idea is that tones live in [tex]\mathbb Z/12\mathbb Z[/tex]; and
more specifically chords live in [tex](\mathbb Z/12\mathbb Z)^3[/tex].
So for instance C corresponds to 0, C# to 1 and D to 2. C-major is the
triple (0,4,7) and C-minor is (0,3,7). Thus, we have an action of the
dihedral group [tex]D_{12}[/tex] with 24 elements on chords; generated
by the transposition [tex](a,b,c)\mapsto(a+1,b+1,c+1)[/tex] and the
reflection [tex](a,b,c)\mapsto (-c,-b,-a)[/tex]. Transposition preserves
major/minor chords; whereas reflection swaps them. And then funky stuff
starts happening when you start considering whether there are any other
interesting group actions of 24-element groups on chords - turns out
that there is one: the so-called PLR-group. Go read it!</p>
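<p>This action is easy to play with in code. The sketch below is my own; note that it writes the reflection as negation-plus-reversal, [tex](a,b,c)\mapsto(-c,-b,-a)[/tex], so that the image of an ordered (root, third, fifth) triple is again ordered.</p>

```python
# Hypothetical sketch of the D_12 action on chords, viewed as triples
# in (Z/12Z)^3.  Transposition shifts every note up a semitone;
# reflection negates (and reverses, to keep the triple ordered).

def transpose(chord):
    return tuple((n + 1) % 12 for n in chord)

def reflect(chord):
    a, b, c = chord
    return ((-c) % 12, (-b) % 12, (-a) % 12)

C_MAJOR = (0, 4, 7)

assert transpose(C_MAJOR) == (1, 5, 8)       # C# major
assert reflect(C_MAJOR) == (5, 8, 0)         # F minor: intervals 3, then 4
assert reflect(reflect(C_MAJOR)) == C_MAJOR  # reflection is an involution

# transposing twelve times comes back to where we started
chord = C_MAJOR
for _ in range(12):
    chord = transpose(chord)
assert chord == C_MAJOR
print("D_12 acts as advertised")
```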
More group cohomology2006-06-08T22:10:00+02:00Michitag:blog.mikael.johanssons.org,2006-06-08:more-group-cohomology.html<p>My advisor told me to go hit [tex]D_8[/tex] and [tex]U(\mathbb
H)[/tex] as my next two cohomology calculation projects; try to do them
with resolutions by hand so that I get a feeling for what's going on.
After failing spectacularly both at getting a resolution of
[tex]\mathbb F_2[/tex] over [tex]\mathbb F_2D_8[/tex], he walked me
through his Shiny! Gr</p>
First academic publication!2006-06-06T15:03:00+02:00Michitag:blog.mikael.johanssons.org,2006-06-06:first-academic-publication.html<p>I just received in the mail a bunch of prints. Of my article
"Computation of Poincaré-Betti series for monomial rings", produced from
my Master's thesis for the "School and workshop on computational algebra
for algebraic geometry and statistics" in Torino 2004. It is now being
published in the Rendiconti dell'Istituto di Matematica dell'Università
di Trieste, on pages 85-94 of Vol. XXXVII (2005).</p>
<p>Damn, it feels good. Reviewed and everything. If you're curious, my
manuscript can be found at <a class="reference external" href="http://math.su.se/~mik/torino.pdf">http://math.su.se/~mik/torino.pdf</a> or at the
arXiv as <a class="reference external" href="http://arxiv.org/abs/math.AC/0502348">math.AC/0502348</a>.</p>
My first group cohomology - did I screw up?2006-05-21T13:42:00+02:00Michitag:blog.mikael.johanssons.org,2006-05-21:my-first-group-cohomology-did-i-screw-up.html<p>I thought the seminar on tuesday would possibly benefit from something
not very often seen - explicit examples. So I started working through
one. I wanted to calculate [tex]H^*(C_8,\mathbb F_2)[/tex] and give
explicitly in a series of ways the product structure - as Yoneda
splices, as chain map compositions and as cup products.</p>
<p>Now, [tex]\mathbb F_2[/tex] has a very nice resolution as a
[tex]\mathbb F_2C_8[/tex]-module - all cyclic (finite) groups have
canonically a really cute minimal resolution - given by</p>
<p>[tex]\to \mathbb F_2C_8\to\mathbb F_2C_8\to\mathbb
F_2C_8\to\mathbb F_2\to 0[/tex]</p>
<p>with the last map being the augmentation [tex]\epsilon\colon
a_0+a_1g+a_2g^2+\dots+a_7g^7\mapsto a_0+a_1+\dots+a_7[/tex]
and the other maps alternately being multiplication by
[tex]\partial_1=1+g+g^2+\dots+g^7[/tex] and by
[tex]\partial_0=g-1[/tex].</p>
<p>So this gives us a nice projective (in fact: free) resolution to work
with. We now can observe that [tex]\Hom_{\mathbb F_2C_8}(\mathbb
F_2C_8,\mathbb F_2)=\Hom_{\mathbb F_2C_8}(\mathbb
F_2,\mathbb F_2)=\mathbb F_2[/tex] since any map has to respect the
group action, which is trivial on [tex]\mathbb F_2[/tex], and so any
map is determined by its value on 1. Thus we get the sequence of dual
modules</p>
<p>[tex]0\to\mathbb F_2\to\mathbb F_2\to\mathbb
F_2\to\dots[/tex]</p>
<p>with the codifferentials given by the identity in the first step, since
if [tex]f:\mathbb F_2\to\mathbb F_2[/tex], then
[tex]\epsilon^*f(1)=f\epsilon(1)=f(1)[/tex] and zero everywhere else,
since
[tex]\partial_0^*f(1)=f\partial_0(1)=f(g-1)=gf(1)-f(1)=f(1)-f(1)=0[/tex]
and
[tex]\partial_1^*f(1)=f\partial_1(1)=f(1+g+\dots+g^7)=8f(1)=0[/tex]
since [tex]C_8[/tex] acts trivially on [tex]\mathbb F_2[/tex] and 8 is
even.</p>
<p>So we have that [tex]H^0=\ker\mathbb 1=0[/tex] and [tex]H^n=\ker
0/\operatorname{im} 0=k[/tex] for all other n.</p>
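<p>The computations above are easy to confirm mechanically. In this sketch of mine, an element of [tex]\mathbb F_2C_8[/tex] is stored as its eight coefficients of [tex]1,g,\dots,g^7[/tex]; the checks confirm that consecutive differentials compose to zero and that both codifferentials past the first vanish.</p>

```python
# Hypothetical check of the C_8 computations above.  An element of
# F_2[C_8] is a list of 8 bits, index t holding the coefficient of g^t.

def cmul(a, b):
    """Multiply in F_2[C_8]: convolution of exponents mod 8, mod 2."""
    out = [0] * 8
    for s in range(8):
        if a[s]:
            for t in range(8):
                if b[t]:
                    out[(s + t) % 8] ^= 1
    return out

def aug(a):
    """The augmentation epsilon: the sum of the coefficients in F_2."""
    return sum(a) % 2

norm = [1] * 8                      # 1 + g + ... + g^7
gm1 = [1, 1, 0, 0, 0, 0, 0, 0]      # g - 1 = g + 1 in characteristic 2

# consecutive differentials compose to zero
assert cmul(norm, gm1) == [0] * 8
assert cmul(gm1, norm) == [0] * 8

# both codifferentials past the first vanish:
assert aug(gm1) == 0                # partial_0^* f(1) = f(g - 1) = 0
assert aug(norm) == 0               # partial_1^* f(1) = 8 f(1) = 0
print("dual complex has zero codifferentials past degree 0")
```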
<p>Now for the product structure. We have [tex]H^*=\bigoplus_{i\geq 1}
\mathbb F_2x_i[/tex] as a module; now we need to define the
product of two cochains. It is enough to look at what happens on the
generators; thus taking [tex]x_ix_j[/tex]. Using the cochain map
model, we need to lift the map that takes [tex]g^i\mapsto 1[/tex] to a
chain map, run it i-j steps up and then compose it with the map that
takes 1 to 1.</p>
<p>There are precisely two possible cochains we could want to lift - either
the 0-map, or the augmentation map [tex]\epsilon[/tex]. Furthermore,
there are precisely two different cases for these maps - either i is odd
or i is even. For the 0-map, it doesn't matter what i is; the lift is
the 0-map. For the augmentation map, we actually get different results.</p>
<div class="line-block">
<div class="line">Firstly, for even i</div>
<div class="line-block">
<div class="line">[tex]\begin{diagram}</div>
<div class="line">\mathbb F_2C_8 &\rTo^{1+g+\dots+g^7} & \mathbb F_2C_8
&\rTo^{g-1}&\mathbb F_2C_8 &\rTo^{1+g+\dots+g^7}&\dots\\
\dTo^{f_2} && \dTo^{f_1} && \dTo^{f_0} & \rdTo^\epsilon \\
\mathbb F_2C_8 &\rTo^{1+g+\dots+g^7}&</div>
<div class="line">\mathbb F_2C_8 &\rTo^{g-1}& \mathbb F_2C_8 &\rTo^\epsilon
&\mathbb F_2&\rTo&0\end{diagram}[/tex]</div>
<div class="line">so we can take [tex]f_0=\mathbb 1[/tex] and [tex]f_1=\mathbb
1[/tex], and in fact lift to the identity chain map.</div>
</div>
</div>
<div class="line-block">
<div class="line">For odd i, we instead get</div>
<div class="line-block">
<div class="line">[tex]\begin{diagram}</div>
<div class="line">\mathbb F_2C_8 &\rTo^{g-1}& \mathbb F_2C_8
&\rTo^{1+g+\dots+g^7} & \mathbb F_2C_8 &\rTo^{g-1}&\dots\\
\dTo^{f_2} && \dTo^{f_1} && \dTo^{f_0} & \rdTo^\epsilon \\
\mathbb F_2C_8 &\rTo^{1+g+\dots+g^7}&</div>
<div class="line">\mathbb F_2C_8 &\rTo^{g-1}& \mathbb F_2C_8 &\rTo^\epsilon
&\mathbb F_2&\rTo&0\end{diagram}[/tex]</div>
<div class="line">we can set [tex]f_0=\mathbb 1[/tex] and thus find the condition on
[tex]f_1[/tex] that [tex](g-1)f_1(m)=(1+g+\dots+g^7)m[/tex]. One
map fulfilling this is multiplication by [tex]1+g^2+g^4+g^6[/tex]
since then
[tex](g-1)(1+g^2+g^4+g^6)m=(g+g^3+g^5+g^7+1+g^2+g^4+g^6)m[/tex].</div>
</div>
</div>
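<p>The condition on [tex]f_1[/tex] can likewise be checked mechanically. This sketch of mine multiplies it out in [tex]\mathbb F_2C_8[/tex] (elements stored as their eight coefficients) and also records that [tex]\epsilon(1+g^2+g^4+g^6)=4=0[/tex] in [tex]\mathbb F_2[/tex], which is what makes products of odd-degree elements vanish.</p>

```python
# Hypothetical verification that multiplication by 1 + g^2 + g^4 + g^6
# satisfies (g - 1) f_1(m) = (1 + g + ... + g^7) m in F_2[C_8].

def cmul(a, b):
    """Multiply in F_2[C_8]; an element is its list of 8 coefficients."""
    out = [0] * 8
    for s in range(8):
        if a[s]:
            for t in range(8):
                if b[t]:
                    out[(s + t) % 8] ^= 1
    return out

gm1 = [1, 1, 0, 0, 0, 0, 0, 0]      # g - 1 = g + 1 in characteristic 2
f1 = [1, 0, 1, 0, 1, 0, 1, 0]       # 1 + g^2 + g^4 + g^6
norm = [1] * 8                      # 1 + g + ... + g^7

assert cmul(gm1, f1) == norm        # the lift condition holds
assert sum(f1) % 2 == 0             # epsilon(1 + g^2 + g^4 + g^6) = 4 = 0,
                                    # so odd-times-odd products vanish
print("chain map lift verified")
```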
<p>Thus for our basis map products [tex]x_ix_j[/tex] we get that if
either [tex]x_i[/tex] or [tex]x_j[/tex] is the 0-cochain, then the
product is 0 (no surprise there!) and if both have even degree, then
[tex]x_ix_j=x_{i+j}[/tex]. If [tex]x_i[/tex] has odd degree, then we
compose the relevant cochain for [tex]x_j[/tex] with the chain map in
the second diagram above - so we have two different cases to distinguish. If
[tex]x_j[/tex] has odd degree, then the product is the representative
of the map
[tex]m\mapsto\epsilon((1+g^2+g^4+g^6)m)=4\epsilon(m)=0\pmod 2[/tex].
If, however, [tex]x_j[/tex] has even degree, then we get the
augmentation map; and thus [tex]x_ix_j=x_{i+j}[/tex].</p>
<p>Thus the product structure on [tex]H^*(C_8,\mathbb F_2)[/tex]
annihilates products of odd degree elements, and in all other cases
sends [tex]x_ix_j=x_{i+j}[/tex].</p>
<p>This proves that [tex]H^*(C_8,\mathbb F_2)=\mathbb
F_2[x^2,\xi]/\langle \xi^2\rangle[/tex], which is exactly what
Carlson et al. give.</p>
Weekly Report: From 0 to 100 in 6 seconds flat!2006-05-20T14:36:00+02:00Michitag:blog.mikael.johanssons.org,2006-05-20:weekly-report-from-0-to-100-in-6-seconds-flat.html<p>This is the second weekend in a row spent in more or less large part in
the office, working with the product structures on cohomology. The reason
for this is that I'm getting my share of the department seminars now -
I'm to walk us through the Yoneda cohomology product; the
cohomology-as-Hom-in-a-derived-category viewpoint; their equivalence to
one another as well as to the cup product; and then talk about
restriction and corestriction (i.e. what happens to cohomology when we
go between kG- and kH-modules for H a subgroup of G).</p>
<p>This is all not really very bad - I really, really, REALLY need to get a
solid grip of this myself too. Only; when I started working on it, I
thought I had 4 days and not 11 to prepare in - and dove right into it.
Maybe a bit too deeply, so when I (monday) found out I didn't need to
get it done THAT quickly, I kinda dropped most of it for the rest of the
week. And now, I need to find a decent proof that cup products = Yoneda
products. And I just realized that my books don't really cover it.</p>
<p>In other news, the apartment I looked at last week now is mine. The
lease is signed. My stuff is in it (though not really where it should be
- I have a week or two of unpacking to do!!) and I spent a major part of
today running around Jena, getting hold of essentials - such as a duvet
cover (I didn't discover the lack of it until I went to bed yesterday)
and looking at a new bed (the one that's there already is MURDER on my
back!).</p>
<p>I also need to get hold of a rather high-powered WLAN card and an
external directional antenna. My boss's laptop has good reception, my
USB stick with a homebuilt cantenna around it has lousy reception, and
the stick on its own has no reception right now. Anyone have any
experience with such things? I'd love advice!</p>
Weekly Report: Mathematics makes my ears bleed2006-05-14T13:50:00+02:00Michitag:blog.mikael.johanssons.org,2006-05-14:weekly-report-mathematics-makes-my-ears-bleed.html<p>I'm back in Jena now. The last week was spent working myself to the
brink of unconsciousness trying to grasp homotopical algebra, simplicial
objects, model categories and any and all things Alex sent my way. With
some 6-8 hours each day spent on lectures and discussions explicitly
held to enable me to understand what was going on, I ended up being
half dead from the mental exercise.</p>
<p>In addition, since I was back in Sweden, time outside lectures was spent
almost exclusively socializing in one way or another. Meeting friends.
Shopping. Watching movies and spex. And then top it off with an
endlessly long trip back.</p>
<p>But that all has a slight feel of old cake to it right now. In the three
days since I got back, I held an examples class (including some spots
that I didn't master myself when working through them.... eeeek!), I
looked at two apartments, I decided to rent one of them, I took
responsibility for the presentation for the department seminar of the
Yoneda splice product and the equivalence between Yoneda splices and the
cup product on cohomology, I worked myself crazy on the presentation, I
met with my new, nice and shiny Ars Magica group, went to the movies
(Ice Age 2. Good - even in German), shopped and got myself a new
insurance.</p>
<p>The new apartment (signing the contract next week) is gorgeous. It's a
small studio apartment literally two blocks away from my office. It's
prefurnished, with everything I'd need and then some; it's ADSL-capable,
it's large for its size, and I have only businesses as neighbours (yay!
loud music!). I'm happy!</p>
<div class="line-block">
<div class="line"><img alt="My office (blue) and my new apartment (purple) in Jena" src="http://mikael.johanssons.org/WohnungOffice.png" /></div>
<div class="line-block">
<div class="line">The blue dot is the location of my office; and the purple dot is
where my new apartment is at.</div>
</div>
</div>
Weekly Report: Preparing to leave2006-05-07T17:11:00+02:00Michitag:blog.mikael.johanssons.org,2006-05-07:weekly-report-preparing-to-leave.html<p>I left Jena going to Stockholm on Saturday. Thus, much of the week past
has been spent in preparation for the trip - reading up on homotopical
algebra; getting all the paperwork together and getting my things
together for the trip.</p>
<p>Along with "Make sure you learn homotopical algebra" and "Get back
primed and ready to teach when you come", I also was instructed by my
advisor to get in touch with $MATHEMATICIAN, who currently resides at
Mittag-Leffler and whom he knew from earlier. He is, I am told, very
good, very knowledgeable and definitely a resource to be tapped if I
should have any chance of it whatsoever.</p>
<p>I desperately, sincerely need to get a better cheap travel route to
Jena. This trip cost me somewhere around €150-170, but had a travel time of
more than 13 hours. There has GOT to be a better way to go.</p>
Weekly Report: Forgot all about it!2006-05-04T14:58:00+02:00Michitag:blog.mikael.johanssons.org,2006-05-04:weekly-report-forgot-all-about-it.html<p>Right. It's thursday. And I had some sort of hopes to do my weekly
reports on saturdays. Only, last saturday found me back in Nuremberg, in
the middle of a marathon party-after-party session with the RPG crowd
there.</p>
<p>Last week was very much characterized by getting various conditions for
my being allowed to go to Sweden next week, and getting various bits and
pieces of general paperwork in order. In addition to that, I held my
first lesson - an examples class in Algebra. Right now, we're doing
modules: general definitions and then the structure theorem for finitely
generated modules over a PID. I have already noticed for myself what
has been painfully obvious from observing bloggers and friends whining
about their students - it's obvious as soon as you set foot in the
classroom who knows what's going on; and these are the only ones who
will give you any sort of sign of life. I already started despairing
about getting reactions from the people not running up to speed -
especially since these also don't bother handing in any kind of work for
me to correct either. End result: I have no idea if I'm doing any good
for those who need me, and only get responses from those who don't. The
teacher's lament.</p>
<p>I also hooked up with the local student group working on the
popularization of mathematics: <a class="reference external" href="http://www.wurzel.org">Die Wurzel</a>. They publish a
monthly magazine about mathematics, arrange math camps and maths
competitions; and in addition seem to have a bloody good time themselves
meanwhile. I signed up without blinking - and will be working on my
first article for them as soon as I get time enough to breathe.</p>
<p>I will be back in Stockholm saturday-thursday. See some of you then.</p>
Reading Merkulov: Differential geometry for an algebraist (4 in a series)2006-04-24T14:52:00+02:00Michitag:blog.mikael.johanssons.org,2006-04-24:reading-merkulov-differential-geometry-for-an-algebraist-4-in-a-series.html<p>Suppose we have a presheaf [tex]\mathcal F[/tex] of abelian groups over
[tex]M[/tex] and pick a point [tex]x[/tex]. On the disjoint union of the
abelian groups defined over the neighbourhoods of [tex]x[/tex]
we put an equivalence relation which identifies
[tex]f\in\mathcal F(U)[/tex] and [tex]g\in\mathcal F(V)[/tex]
precisely if there is some open [tex]W[/tex] in the intersection where
[tex]f[/tex] and [tex]g[/tex] coincide (or, more precisely, their
restrictions coincide). The set of equivalence classes turns out to be
an abelian group [tex]\mathcal F_x[/tex] called the <em>stalk</em> of the
presheaf [tex]\mathcal F[/tex] at [tex]x[/tex].</p>
<p>So, with more fluff introduced, the stalk is all the elements in the
presheaf that are defined above any neighbourhood of the point, and
counted as the same if they seem to be.</p>
<p>For an open set [tex]U[/tex] and a point [tex]x\in U[/tex] there is a
canonical group morphism [tex]\rho_x:\mathcal F(U)\to\mathcal
F_x[/tex] which sends an element [tex]f\in\mathcal F(U)[/tex] to its
equivalence class. This image is the germ of [tex]f[/tex] at
[tex]x[/tex].</p>
<div class="section" id="example">
<h2>Example</h2>
<p>Let [tex]\mathcal O[/tex] be the presheaf of holomorphic functions on
[tex]\mathbb C[/tex]. Then [tex]\mathcal O_{z_0}[/tex] is precisely
the set of all convergent power series of the form
[tex]\sum_{n=0}^\infty c_n(z-z_0)^n[/tex] for complex
[tex]c_n[/tex]. The germ of some [tex]f[/tex] defined on a
neighbourhood of [tex]z_0[/tex] is precisely the Taylor series around
that point.</p>
</div>
<div class="section" id="id1">
<h2>Example</h2>
<p>Take [tex]\mathbb Z[/tex] the constant presheaf where over each open
set, the additive group of all integers hovers. Then we have for the
construction of the stalk a disjoint union of copies of [tex]\mathbb
Z[/tex] indexed by all the neighbourhoods of the point. Since all the
neighbourhoods contain [tex]x[/tex] they all have a non-empty
intersection, on which elements agree if they have the same value. So
[tex]\mathcal F_x=\mathbb Z[/tex] for any point [tex]x[/tex]. The
germ of [tex]i\in\mathbb Z=\mathcal F(U)[/tex] at [tex]x[/tex] is
[tex]i\in\mathbb Z=\mathcal F_x[/tex].</p>
<p>We define exact sequences by looking at the induced maps of the stalks.
A sequence of sheaf morphisms is exact if for every point of the
underlying space, the corresponding induced sequence of stalk maps is
exact.</p>
</div>
<div class="section" id="id2">
<h2>Example</h2>
<p>We can construct a morphism of sheaves of abelian groups from
[tex]\mathbb C[/tex] to [tex]\mathcal O[/tex] by sending
[tex]c\in\mathbb C(U)[/tex] to the function [tex]f(z)=c[/tex] in
[tex]\mathcal O(U)[/tex]. For some point [tex]z_0[/tex], this induces
a stalk map that takes [tex]c[/tex] in [tex]\mathbb C_{z_0}[/tex] and
sends it to the function defined by the convergent power series
[tex]c(z-z_0)^0=c[/tex]. Regardless of [tex]z_0[/tex], this map is
obviously an injection, and so this map is a monomorphism.</p>
<div class="section" id="space-etale-and-sheafifications">
<h3>Space étalé and sheafifications</h3>
<p>Given a presheaf [tex]\mathcal F[/tex], we construct the set
[tex]|\mathcal F|[/tex] as the disjoint union of all stalks
[tex]\mathcal F_x[/tex]. There's a natural projection
[tex]\pi:f_x\mapsto x[/tex] down to the underlying topological space.
We can introduce a topology on [tex]|\mathcal F|[/tex] by
constructing, for each open set [tex]U\subseteq M[/tex] and each
element [tex]f\in\mathcal F(U)[/tex], the set
[tex][U,f]=\{\rho_x(f)\mid x\in U\}\subseteq|\mathcal F|[/tex]
and take these sets to be a basis of our topology. So an open set is
given from the [tex][U,f][/tex] by a sequence of (possibly infinite)
unions and finite intersections.</p>
<p>This turns out to make [tex]\pi[/tex] a local homeomorphism, i.e. each
[tex]e\in|\mathcal F|[/tex] has an open neighbourhood which is
mapped homeomorphically onto its image under the projection [tex]\pi[/tex]. We call
the topological space [tex]|\mathcal F|[/tex] with this topology the
space étalé of the presheaf [tex]\mathcal F[/tex].</p>
<p>Using the space étalé, then, we can construct a canonical sheaf. A
continuous section of the covering projection [tex]\pi\colon|\mathcal F|\to
M[/tex] over a subset [tex]U\subseteq M[/tex] is a continuous map
[tex]\sigma\colon U\to|\mathcal F|[/tex] such that
[tex]\pi\circ\sigma=\mathbb 1[/tex].</p>
</div>
</div>
<div class="section" id="id3">
<h2>Example</h2>
<p>[tex]\mathbb R[/tex] is a covering space of the circle (viewed as the
interval [0,1] with 0 identified with 1) with the projection
[tex]x\mapsto x\pmod1[/tex]. A continuous section over the upper open
half-circle is a map [tex]\sigma_n\colon x\mapsto x+n[/tex]. Indeed,
[tex]\pi\circ\sigma_n[/tex] takes some point [tex]x[/tex] to
[tex]x+n[/tex] and then back to [tex]x+n\pmod1=x[/tex].</p>
<p>Now, let [tex]\Gamma(U,|\mathcal F|)[/tex] denote the set of all
continuous sections of [tex]|\mathcal F|[/tex] over [tex]U[/tex].
This ends up in the same category that [tex]\mathcal F[/tex] goes to.
From this, we then define a sheaf [tex]\hat{\mathcal F}[/tex] by
setting [tex]\hat{\mathcal F}(U)=\Gamma(U,|\mathcal F|)[/tex], and
letting restriction be the usual restriction of maps. This gives us a
functor from presheaves to sheaves called <em>sheafification</em>.</p>
</div>
<div class="section" id="id4">
<h2>Example</h2>
<p>We already saw that [tex]\mathbb Z_{\text{const}}[/tex] assigning the
group [tex]\mathbb Z[/tex] to each open subset [tex]U[/tex] of a
manifold [tex]M[/tex] and with all restrictions being identity morphisms
is not a sheaf. What happens if we sheafify? First, we need to construct
our space étalé. This is the disjoint union of all stalks. A stalk over
a point [tex]x[/tex] is the quotient of the disjoint union of all
[tex]\mathbb Z_{\text{const}}(U)[/tex] for [tex]x\in U[/tex] with
the equivalence relation that identifies [tex](n,U)\equiv(m,V)[/tex] if
there is some [tex]W\subset U\cap V[/tex] such that
[tex]n|_W=m|_W[/tex]. Now, all neighbourhoods of [tex]x[/tex]
intersect in some open subset, and since all restrictions are
identities, we identify [tex](n,U)[/tex] with [tex](n,V)[/tex] for all
pairs of open neighbourhoods [tex]U[/tex], [tex]V[/tex]; so in effect
all stalks are isomorphic, as groups, to [tex]\mathbb Z[/tex].</p>
<p>The space étalé is thus the disjoint union for all points [tex]x\in
M[/tex] of copies of [tex]\mathbb Z[/tex]; i.e. the set of ordered
pairs of the form [tex](x,n)[/tex] for [tex]x\in M[/tex] and
[tex]n\in\mathbb Z[/tex]. Finally, our sheafified sheaf assigns to
each open subset [tex]U\subseteq M[/tex] the group of sections
[tex]\Gamma(U,|\mathbb Z_{\text{const}}|)[/tex]; a point of this
group is a function [tex]U\to U\times\mathbb Z[/tex] such that
[tex]x\mapsto(x,n_x)[/tex] for some [tex]n_x\in\mathbb Z[/tex]. Since
this map has to be continuous, it is locally constant. And thus we
recover the sheaf of locally constant maps for the constant
sheaf, just as exhibited earlier.</p>
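<p>The difference between the presheaf and its sheafification shows up over disconnected sets. A toy count (my own model, not from the post): if [tex]U[/tex] has two components, a continuous section may pick a different integer on each, so [tex]\hat{\mathcal F}(U)\cong\mathbb Z\times\mathbb Z[/tex], while the constant presheaf only offered a single copy of [tex]\mathbb Z[/tex].</p>

```python
# Counting sections over a disconnected open set (illustrative toy model).
from itertools import product

comp1 = ["a1", "a2"]            # points of one component of U
comp2 = ["b1", "b2"]            # points of the other component
window = range(-2, 3)           # a finite window into Z

# a continuous section of the espace etale is locally constant:
# one integer per component
sections = [{**{p: m for p in comp1}, **{p: n for p in comp2}}
            for m, n in product(window, window)]
assert len(sections) == 25      # 5 * 5: Z x Z in miniature

# the constant presheaf only supplies the globally constant ones
constants = [s for s in sections if len(set(s.values())) == 1]
assert len(constants) == 5      # a single copy of Z: the sheaf axiom fails
```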
<div class="section" id="kernels-and-quotients">
<h3>Kernels and quotients</h3>
<p>Let [tex]\tau:\mathcal A\to\mathcal B[/tex] be a map of sheaves. For
any open [tex]U\subseteq M[/tex], we define [tex]\mathcal
K(U)=\ker\tau_U\colon\mathcal A(U)\to\mathcal B(U)[/tex]. The
sheaf [tex]\mathcal K[/tex] formed by these groups together with the
induced morphisms is called the kernel of the sheaf map.</p>
<p>The quotient of a sheaf map is formed almost the same way - pointwise
components are formed as expected, but the presheaf thus formed need not
be a sheaf. So we sheafify it.</p>
</div>
</div>
Weekly report: Settling in2006-04-22T14:21:00+02:00Michitag:blog.mikael.johanssons.org,2006-04-22:weekly-report-settling-in.html<p>My first week has passed. Today is Saturday, and the move took place
Monday. So far, I've been running around doing bureaucracy and little
else (I managed to leaf through the first 5 pages of Evens: Cohomology
of Groups). Along the way, I've received a summons to appear in front
of the immigration authorities to explain my moving in, I've run circles
around the city trying to get someone to approve my Swedish birth
certificate, etc.</p>
<p>My apartment is small, neat and nice. It's some 4x5 meters, with bed,
bookcase, two tables, wardrobe, kitchenette, toilet with bath, balcony.
And then all the things I brought with me - including a bookcase, three
tables, computer, books-books-books, and much much more. I've gotten
around to some interior decoration as well - putting up my Swedish and
my Franconian flags on a wall. The end effect is pretty - although I
periodically have to remind myself that my putting up a Swedish flag is
no longer a sign of right-extremism but rather a sign of keeping in
touch with home.</p>
<p>Mathematically, I'm slowly getting to grips with the concept of the
cohomology of a group. Basically, I'm back to studying Ext_R(k,k) for a
specific k-algebra R - which in this case happens to be the group ring
kG. As such, it's more or less the same generic area as during my
master's thesis. I am more and more warming up to the idea of involving
as much homotopy theory as possible; I know that finding contracting
homotopies (i.e. a chain homotopy between the chain maps 0 and Id on a
given chain complex) makes verification of resolutions simple; and if
Morse theory can be involved to condense the complexes to something of
manageable size, then the computational barriers we're facing would be
more easily treated as well. In the end, this all boils down to me
wanting to learn more about algebraic homotopy theory / homotopical
algebra; and I'll probably start by studying model categories from
Quillen, Dwyer & Spalinski and from Hovey.</p>
<p>At some point in the future, I will get my internet access @home up and
running. And then woe upon the mean ugly hordies, I say!</p>
Question for the mathematical audience2006-04-20T15:44:00+02:00Michitag:blog.mikael.johanssons.org,2006-04-20:question-for-the-mathematical-audience.html<p>I have now been staring at this particular sentence for way too long,
and thus will start using any and all communication lines I can find to
get assistance. Either I'm being way too stupid, or the author neglects
to mention some salient detail.</p>
<p>Setup: [tex]\phi\colon G^\prime\to G[/tex] is a group homomorphism,
[tex]A\in kG-\operatorname{mod}[/tex], [tex]A^\prime\in
kG^\prime-\operatorname{mod}[/tex]. [tex]A[/tex] can be given the
structure of a [tex]kG^\prime[/tex]-module by pulling back through
[tex]\phi[/tex], i.e. we define [tex]g^\prime
a:=\phi(g^\prime)a[/tex] for [tex]g^\prime\in G^\prime[/tex] and
[tex]a\in A[/tex].</p>
<p>So far it's all crystal clear for me. However, it then turns out that
we're highly interested in using a morphism
[tex]f\in\operatorname{Hom}_G(A,A^\prime)[/tex] and I cannot for the
life of me find out how such beasts are guaranteed to exist. If it were
[tex]f\in\operatorname{Hom}_{G^\prime}(A,A^\prime)[/tex], I wouldn't
have any problems with it; but then the stuff I need/want to do with it
doesn't work out.</p>
Weekly posts - a new feature at this blog2006-04-15T19:27:00+02:00Michitag:blog.mikael.johanssons.org,2006-04-15:weekly-posts-a-new-feature-at-this-blog.html<p>Now that my blog returns to its status of a PhDiary as I actually got a
PhD position, I will introduce one flavour of regular postings. Instead
of keeping in touch with people by mailing lists, livejournal, and
everywhere else, there will be weekly postings here about life as a
German PhD student.</p>
<p>So far, my entrance into German academics has had one feature above all
else. Bureaucracy.</p>
<p>In order to even look at my contract, I needed to go, specifically
therefore, to Jena, to fill out a questionnaire. This questionnaire is
geared towards ascertaining that I am a good representative of the German
state and its ideals. So, there are questions upon questions upon
questions about my involvement with Stasi, my involvement with former
DDR, whether I went to party schools, whether I've held party offices,
etc. etc. Not to mention the centimeter-high stack of papers I got
home to fill out on my own. With complete curriculum vitae from the age
of 14. And Gods only know how many different obscure decisions to be
made and forms to be filled in.</p>
<p>I still have a few open slots, but hopefully I can get help filling
those in when I'm in Jena. I'm planning on ambushing the department
secretary.</p>
<p>I have actually managed to read up a little bit on mathematics too - I
still have a rather loose grasp of it; but it seems that the object of
study for my next three years will be
[tex]\operatorname{Ext}^*_{\mathbb Z_pG}(\mathbb Z_p,\mathbb Z_p)[/tex] and various aspects of it.
There will be more about this here on the blog in the future - I can
promise that.</p>
Why I keep organizing congresses2006-04-05T09:07:00+02:00Michitag:blog.mikael.johanssons.org,2006-04-05:why-i-keep-organizing-congresses.html<p>Once upon a time, I wasn't passionate about mathematics. Up to grade 6,
I even disliked it quite a bit - it consisted of only mechanical
plugging away of numbers, and training of multiplication tables that I
had the feeling I already mastered.</p>
<p>Then something changed. Subtly at first - in grade 7, it started to gain
texture, it got beyond the rote calculations ever so slightly. And so I
started devouring the old popular mathematics texts my father kept in
his bookcases. Soon, I stumbled across a new word - "integral calculus"
- and of course asked my father to explain it. And thus it was that I,
at the age of 13, got introduced to limits, derivatives and integrals.</p>
<p>I started accelerating. My teacher soon gave up on the Sisyphean task of
keeping me occupied, and halfway through grade 8 told me to go on at my
own speed. So I did. By the end of grade 8, I had finished the
mathematical curriculum of grade 9. I started on grade 10-stuff - from
the first year of Swedish secondary education. And by the time I switched
schools and started with grade 10, I was in the situation that I could
go up to my new teacher on the first day and ask for the next book.</p>
<p>This still hadn't ignited the passion. Merely prepared a glow of
fascination.</p>
<p>Then I went to Germany for a year. During my time there, I encountered a
more formalistic lecture style; more interesting lecture content; and a
mathematics teacher who quickly recognized my interest in the subject
and thus brought me along to a series of workshops, held at the
University of Erlangen by their (now late) didactics professor Frau
Prof. Juditha Cofman.</p>
<p>Cofman has been very engaged in the question of harnessing budding
interest in mathematics. She has written several books on her methods.
You should go find them.</p>
<p>At the core of Cofman's methods lies exposing potentially
passionate students to the possibility of doing guided original work.
She ran two parallel workshops. One for students in general - where she
ran through things like Mathematical Induction and Combinatorial
Geometry at a very high speed, radiating her own interest and giving all
of us the feeling that there is MORE out there if we only look in the
right direction. The other workshop contained students she had
handpicked from various directions - but mainly from these first
workshops. This workshop kept on longer than the general one, and dealt
with preparing the participants for the next Junior Mathematical
Congress.</p>
<p>As preparation, she spent most of each meeting running through some new,
charming, challenging subject - such as "Interpretations of the
Fibonacci numbers", "Interpretations of the Catalan numbers", "Pascal's
triangle modulo some small number", "Dragon curves" etc. As soon as she
detected interest from a student, she immediately gave suggestions as to
new directions to investigate. New things to do. Variations on known
results, but in directions where no work had been done so far. Two guys
were doing one 3d-generalization each of the dragon curve. Two girls
were dealing with Catalan and Fibonacci numbers. I started writing the
reduced Pascal triangle in my notepad, colouring it by the numbers, and
showed her in the break. Result? I ended up submitting a poster (though,
alas, not participating myself) to the Junior Mathematical Congress
1998.</p>
<p>I owe my passion to Frau Professorin Cofman. It is as simple as that. It
was more or less impossible to stay uninterested in the charge of her
enthusiasm - it permeated everything.</p>
<p>I have seen quite a few students grow interested - and grow
disinterested - in mathematics by now. Friends. Fellow students. Kids I
encounter through the Young Scientists Association. And one of the
absolutely dominant themes is the influence of teacher rôles. If a
student encounters a teacher bored with the subject, the student loses
his interest swiftly - unless the interest is already enough of a
passion for it to survive the encounter. My physics teacher after
Germany killed my physics interest. He couldn't touch my mathematics
passion, though, since he was so obviously incompetent at it that I
stopped listening. If the student, on the other hand, encounters someone
who is passionate enough to radiate it - to light up the classroom with
their own love for the subject - there is no other boost like it.
Almost. You could always go send the student to a Junior Mathematical
Congress - or a mathcamp for that matter - and by meeting other kids
with just as much, if not more, love for the subject as the student has,
the fascination skyrockets.</p>
<p>I realize, and recognize, that A-students are left to fend for
themselves so that the teacher's resources can be allocated where they're
needed - where the bulk of the students actually gain something. But
still, I feel that the step from glow to flaming passion can be so
short, if only taken with the right kind of boots, that I cannot
refrain from pushing more people to take that step with the means I know
have been effective.</p>
<p>And this is why the foundation of Junior Mathematical Societies, the
organization of Junior Mathematical Congresses and getting the
initiatives that are underway to communicate with each other is so damn
important.</p>
This is me losing all faith in Non-Linear Analysis2006-03-28T12:30:00+02:00Michitag:blog.mikael.johanssons.org,2006-03-28:this-is-me-losing-all-faith-in-non-linear-analysis.html<p>A paper recently up on <a class="reference external" href="http://arxiv.org/abs/math/0603599">arXiv details the errors committed by an author
of a paper in Non-Linear
Analysis</a>, who, by ignoring basic
conditions of theorems, manages to prove most of mathematics and
substantial parts of physics inconsistent.</p>
<p>This is the second insufficiently reviewed paper at that journal causing
waves that spread as far as to me. The blogospheric
and media storm around the infamous "proof" by Elin Oxenhielm of
Hilbert's 16th problem a few years ago was, at the core, sparked by
her getting the paper accepted at ... right, Non-Linear Analysis ... and
taking this publication as a token that her results were in fact true
and that anyone criticizing her was out to steal her credit.</p>
<p>Needless to say, with the density displayed thus far of crackpotism and
sloppy publishing, I don't think I'll trust NLA for anything at all in
the future.</p>
Report from Villars (5 in a series)2006-03-12T12:12:00+01:00Michitag:blog.mikael.johanssons.org,2006-03-12:report-from-villars-5-in-a-series.html<p>For the last two half-days of the conference, I managed to take a break
in skiing precisely when the conditions were at their very worst; with
sight down to a few meters and angry winds. Miles Gould and Arne Weiner,
however, managed to sit in a chair lift that kept stopping every 5
meters - AND they managed to break a T-bar lift. Suddenly the rope
broke, they told me, and they had to ski down to the warden with the
T-bar in hand.</p>
<p>First out in this mathematical exposé, though, is André Henriques,
talking about</p>
<div class="section" id="an-operad-coming-from-representation-theory">
<h2>An operad coming from representation theory</h2>
<p>There is a way to associate to a finite-dimensional Lie algebra [tex]\mathfrak
g[/tex] first its universal enveloping algebra [tex]U\mathfrak g[/tex]
and then the quantum groups [tex]U_q\mathfrak g[/tex]. From representations of
[tex]U_q\mathfrak g[/tex], one path leads on over braided tensor
products to braided tensor categories. Such categories are described by
[tex]E_2[/tex] operads, which occur in the study of Gerstenhaber
algebras and their homology.</p>
<p>By studying crystals instead of representations, Henriques
finds an interesting operad as the result of an analogous chain of
associations. A crystal is built up in analogy with a representation, as
follows:</p>
<table border="1" class="docutils">
<colgroup>
<col width="53%" />
<col width="47%" />
</colgroup>
<thead valign="bottom">
<tr><th class="head">Representation</th>
<th class="head">Crystal</th>
</tr>
</thead>
<tbody valign="top">
<tr><td>V a vector space with direct sum decomposition to weight spaces</td>
<td>B a finite set with disjoint union decomposition</td>
</tr>
<tr><td><dl class="first last docutils">
<dt>Chevalley operators:</dt>
<dd>[tex]e_i:V(\lambda)\to V(\lambda+\alpha_i)[/tex]
[tex]f_i:V(\lambda)\to V(\lambda-\alpha_i)[/tex]</dd>
</dl>
</td>
<td><dl class="first last docutils">
<dt>Raising and lowering operators:</dt>
<dd>[tex]e_i:B(\lambda)\to B(\lambda+\alpha_i)[/tex]
[tex]f_i:B(\lambda)\to B(\lambda-\alpha_i)[/tex]</dd>
</dl>
</td>
</tr>
</tbody>
</table>
<p>Were it possible to find bases for the representations that get
mapped to "themselves" under the operators, then the work would be done
here, and crystals would be the same as representations. This is,
however, not possible.</p>
<p>Which isn't to say that they don't have anything to do with each other.
There is a bijection of isomorphism classes between representations and
crystals over a Lie algebra [tex]\mathfrak g[/tex].</p>
<p>The analogy to the braid group, when studying the categorical properties
of the crystals, is the cactus group - which also is the fundamental
group of the manifold whose points correspond to isomorphism classes of
real algebraic curves such that</p>
<ol class="arabic simple">
<li>each component is homeomorphic to [tex]\mathbb{RP}^1[/tex]</li>
<li>the components are glued together along a tree</li>
<li>all singularities are at gluepoints between components</li>
<li>each component has at least three points - either crossing points or
marked points</li>
</ol>
<p>These end up being governed by an operad, which in turn has
2-Gerstenhaber algebras as its homology.</p>
<p>Scherer, later, held a talk on</p>
</div>
<div class="section" id="relative-homotopy-cyclic-homology">
<h2>Relative homotopy cyclic homology</h2>
<p>in which he seems to want to recast the +-construction of Quillen in
operadic terms.</p>
<p>Next morning, the funky stuff starts. First out is Eugenia Cheng</p>
</div>
<div class="section" id="operads-and-multicategories">
<h2>Operads and multicategories</h2>
<p>The talk was expository, early, brilliant and very lucid. In my humble
opinion the best of the whole conference.</p>
<p>Cheng set out to illustrate the theory and current state of research of
multicategories; and did this by displaying multicategories as a
simultaneous generalisation of operads and categories. Operads describe
operations with several inputs and one output. This will be generalized
to encompass both many different objects and inputs in some interesting
configuration. (This requires nifty pictures which I don't really have
any decent way of reproducing here. Poke me if you want the pictures
that go with the text...)</p>
<p>Categorically, we're motivated by the possibility of getting composition
of higher cells (0-cells = objects, 1-cells = morphisms/arrows, 2-cells
= natural transformations, .... an arrow between two n-1-cells is an
n-cell).</p>
<p>Topologically, this gives us a way to deal with composition in loop
spaces; or to even deal with "path spaces" with many possible objects,
but compatibility requirements on compositions of paths.</p>
<div class="line-block">
<div class="line">So. To get her idea through, Cheng starts by giving the definition of
category as she sees it:</div>
<div class="line-block">
<div class="line">A category C is a collection ob C of objects, for any pair of
objects, a set of arrows [tex]a\to b[/tex], for any object a
canonical arrow [tex]a\to a[/tex] and a composition [tex]a\to b\to
c[/tex]; with axioms that make this work as expected.</div>
</div>
</div>
<p>A multicategory C, as defined by Lambek in 1969, is a collection of
objects ob C, for any sequence of inputs [tex]a_1,\dots,a_n[/tex] and
an output, b, a set [tex]C(a_1,\dots,a_n;b)[/tex] of arrows, with a
canonical arrow in C(a;a) and a composition of arrows.</p>
<p>A non-symmetric operad in the category of sets is a multicategory with
one object. The hom-sets can be indexed by their number of inputs, and
composition needs no source/target matching.</p>
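<p>The one-object case is easy to play with in code. A minimal sketch of my own (in Python; not from the talk): the endomorphism operad of a set, whose n-ary arrows are n-argument functions and whose composition is substitution, with no source/target matching needed.</p>

```python
# The endomorphism operad of a set X: arity-n operations are functions
# X^n -> X; operadic composition substitutes k operations into a k-ary one.

def compose(g, *fs):
    """Feed the outputs of f_1, ..., f_k into the k-ary operation g;
    each f_i consumes the next arity(f_i) inputs in order."""
    arities = [f.__code__.co_argcount for f in fs]

    def composed(*xs):
        out, i = [], 0
        for f, a in zip(fs, arities):
            out.append(f(*xs[i:i + a]))
            i += a
        return g(*out)
    return composed

identity = lambda x: x           # the canonical arrow in C(a; a)
add = lambda x, y: x + y
mul = lambda x, y: x * y

h = compose(add, mul, identity)  # a 3-ary operation: (x, y, z) -> x*y + z
assert h(2, 3, 4) == 10
```

<p>(Reading arities off <tt>__code__.co_argcount</tt> only works for plain functions, so this sketch doesn't nest composites; a fuller implementation would carry arities explicitly.)</p>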
<p>This way of looking at categories with a single object is quite useful:</p>
<table border="1" class="docutils">
<colgroup>
<col width="56%" />
<col width="44%" />
</colgroup>
<thead valign="bottom">
<tr><th class="head">Many</th>
<th class="head">One</th>
</tr>
</thead>
<tbody valign="top">
<tr><td>category</td>
<td>monoid</td>
</tr>
<tr><td>groupoid</td>
<td>group</td>
</tr>
<tr><td>bicategory</td>
<td>monoidal category</td>
</tr>
<tr><td>multicategory</td>
<td>operad</td>
</tr>
<tr><td>abelian category</td>
<td>ring</td>
</tr>
<tr><td>topological category</td>
<td>topological monoid</td>
</tr>
<tr><td>topological multicategory</td>
<td>topological operad</td>
</tr>
</tbody>
</table>
<p>The topological varieties go on beyond the pair listed here, but the
idea is clear.</p>
<p>In some terminology, multicategories are called "coloured operads".
Cheng points out that in that case, it would be consistent to call
categories "coloured monoids" and groupoids "coloured groups" etc.;
which would be ... poetic.</p>
<div class="section" id="some-examples">
<h3>Some examples</h3>
<p>A category is a multicategory where every arrow is unary.</p>
<p>A monoidal category has an underlying multicategory; with arrows in
[tex]C(a_1,\dots,a_n;b)[/tex] given by the arrows
[tex]a_1\otimes\dots\otimes a_n\to b[/tex] for [tex]\otimes[/tex]
the monoidal operation.</p>
<p>For many monoidal thingies, the monoidality is not needed, but merely a
multicategory structure. As the categorists like to hunt down minimal
required conditions, this is a very relevant observation.</p>
<p>Now, this multicategory game could just as well be seen as using a
<em>monad</em> to capture the input. More specifically, the "free monoid"-monad
[tex]\mathcal F[/tex] (i.e. the composition of the adjoint pair of a
forgetful and a free functor) gives rise to the strings of inputs that
occurred previously. By replacing [tex]\mathcal F[/tex] by any other
monad T, and expanding the obvious (for some value of obvious) axioms
and propositions, you get a sane theory for T-multicategories.</p>
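<p>As a concrete anchor for the "free monoid" monad (my own sketch, in Python rather than categorical notation): [tex]\mathcal F X[/tex] is lists over X, the unit wraps an element, the multiplication concatenates, and the monad laws are what make the strings of inputs compose sanely.</p>

```python
# The free-monoid monad F: F X = lists over X, eta(x) = [x],
# mu = concatenation of a list of lists.

def eta(x):
    return [x]                            # unit: X -> F X

def mu(xss):
    return [x for xs in xss for x in xs]  # multiplication: F F X -> F X

def fmap(f, xs):
    return [f(x) for x in xs]             # functor action on maps

# monad laws, checked on samples:
xs = [1, 2, 3]
assert mu(fmap(eta, xs)) == xs            # mu . F(eta) = id
assert mu(eta(xs)) == xs                  # mu . eta = id
xsss = [[[1], [2, 3]], [[4]]]
assert mu(mu(xsss)) == mu(fmap(mu, xsss)) # associativity of mu
```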
<p>By expanding on this idea, various combinations of base categories and
monads in them give rise to a cool and exciting sequence of new
categories.</p>
<p>Tom Leinster in Glasgow has expanded on the idea of PROPs by introducing
monads in a similar manner.</p>
<p>After this talk, Mark Weber gave a highly technical talk on</p>
</div>
</div>
<div class="section" id="applications-of-2-categorical-algebra-to-the-theory-of-operads">
<h2>Applications of 2-categorical algebra to the theory of operads</h2>
<p>This talk tried, it seems, to generalize operads to higher category
levels. I didn't really understand much. At all.</p>
<p>Finally, Peter May gave</p>
</div>
<div class="section" id="an-oldfashioned-elementary-talk">
<h2>An oldfashioned elementary talk</h2>
<p>Peter May started by lamenting that:</p>
<blockquote>
<div class="line-block">
<div class="line">I had good results in topology to talk about - but not all here
are topologists.</div>
<div class="line-block">
<div class="line">I had nice category theory to talk about - but not all here are
categorists.</div>
<div class="line">I knew some nice things about operads in algebraic geometry - but
definitely not all here are algebraic geometers.</div>
</div>
</div>
</blockquote>
<p>So he ended up talking about constructions of Steenrod operations in
various situations, based on a construction through the Eilenberg-Zilber
operad. Among other things, it seems that for good algebras A,
[tex]\operatorname{Ext}_A^{*,*}(\mathbb F_p,\mathbb F_p)[/tex] has
Steenrod operations. What this means, and whether it's of any use
outside of topology, is unknown to me and quite interesting. Or so it
seems.</p>
</div>
Report from Villars (4 in a series)2006-03-08T14:09:00+01:00Michitag:blog.mikael.johanssons.org,2006-03-08:report-from-villars-4-in-a-series.html<p>I haven't been able to get around to skiing since the last update - I
may, or may not, go out in the slopes after this update. The weather is
growing warmer and wetter, and doesn't really invite skiing as it
previously did.</p>
<p>However, we have had more talks. First out, yesterday evening, was
Pascal Lambrechts</p>
<div class="section" id="coformality-of-the-little-ball-operad-and-rational-homotopy-types-of-spaces-of-long-knots">
<h2>Coformality of the little ball operad and rational homotopy types of spaces of long knots</h2>
<p>The theme of interest for this talk was long knots; i.e. embeddings of
[tex]\mathbb R[/tex] into [tex]\mathbb R^d[/tex] such that outside
some finite region in the middle, the embedding agrees with the trivial
embedding [tex]t\mapsto(t,0,0,\dots,0)[/tex]. The space of all such is
denoted [tex]\mathcal L[/tex], and the item of study is more precisely
the rational homology and rational homotopy of the fiber of the
inclusion of [tex]\mathcal L[/tex] into the space of all immersions of
[tex]\mathbb R[/tex] into [tex]\mathbb R^d[/tex].</p>
<p>Vassiliev approached this by constructing a spectral sequence converging
to the homology for high enough d, and where the first term has a
combinatorial description in terms of chord diagrams. The idea, building
on some preparatory work by Dev Sinha, is to replace the long knot with
a sequence of very many distinct points on the knot. Thus you end up in
the configuration space of many points, which can be put in the context
of being (a compactification of) configuration spaces; which in turn
form a cosimplicial space. These spaces, in fact, are (more or less) the
Fulton-MacPherson operad, and using this correspondence, Lambrechts is
able to show that the spectral sequence by Vassiliev collapses.</p>
<p>Paolo Salvatore gave a talk on much the same theme; however, I
haven't really gotten any substantial notes out of that talk.</p>
<p>Today, however, started with Dev Sinha</p>
</div>
<div class="section" id="the-duality-between-lie-and-comm-revisited">
<h2>The duality between Lie and Comm revisited</h2>
<p>Sinha introduced a function called the configuration pairing, which takes a
directed graph on [n] and a rooted, half-planar tree with leaves in [n]
to an integer. Consider the map that takes an edge of the graph to the
lowest vertex of the path in the tree between the corresponding leaves.
If this map is not bijective, the pair is sent to 0; otherwise it is
sent to [tex](-1)^k[/tex], where k is the number of edges whose
corresponding path runs right to left. This gives a function on pairs
that vanishes on anti-symmetry and the Jacobi identity on the tree side,
and on anti-symmetry, loops and the Arnold identity - meaning
[tex]a\to b\to c + b\to c\to a + c\to a\to b[/tex] vanishes - on the
graph side. We thus get a perfect pairing between
the slices Lie(n) generated by the trees and the slices Eil(n) generated
by the graphs. Lie(n) has a basis consisting of tall trees (i.e.
(((((((1,i1),i2),....,in)) and Eil(n) a basis of long graphs (i.e. paths
beginning with 1). The pairing matches a tree and a graph if they have
the same indices in the same order.</p>
<p>Using this, and involving the binomial operad, he then manages, by
introducing co-Lie brackets (removing an edge of the graph and
taking antisymmetric differences), to make the Eil operad describe Lie
coalgebras; and thus gives a way to construct a linear duality between
DG Lie algebras and DG Lie coalgebras which does not pass through the
bar and cobar constructions.</p>
<p>After Sinha, Martin Markl spoke on</p>
</div>
<div class="section" id="cohomology-operations-and-the-deligne-conjecture">
<h2>Cohomology operations and the Deligne conjecture</h2>
<p>He characterizes the renaissance of operad theory as being all about
calculating Kontsevich graph (co)homology in various settings. More
specifically, he looks on the operad of "natural operations" on
multilinear functions - i.e. anything given by free compositions and
pointwise multiplications. This turns out to be a rather big object, and
many conjectures and questions were asked and discussed. In the ideal
case, Markl wishes, the structure of this operad will give
information about the theory it's based in.</p>
<p>Finally, Jonathan Scott gave a talk on</p>
</div>
<div class="section" id="co-rings-of-operads">
<h2>Co-rings of operads</h2>
<p>In which he gives a way of viewing an operad as a monoid, whose left
modules are the algebras and right modules are the co-algebras of the
operad.</p>
<p>That is, basically, all there is to report at this point. For all
things, just poke me if you're interested in more details.</p>
</div>
Report from Villars (3 in a series)2006-03-07T16:29:00+01:00Michitag:blog.mikael.johanssons.org,2006-03-07:report-from-villars-3-in-a-series.html<p>This post will concern tuesday morning. Tuesday evening will be in a
later post.</p>
<p>With the morning thus came, again, the pain in the legs. However, I'm
told it'll be better if I keep on skiing.</p>
<p>The mathematics in this report will come sooner than in the last; mainly
because the lectures start at 8.30 and not at 17.00. :)</p>
<p>First out is Benoît Fresse with</p>
<div class="section" id="little-cubes-operad-actions-on-the-bar-construction-of-algebras">
<h2>Little cubes operad actions on the bar construction of algebras</h2>
<p>The reduced bar construction of augmented associative algebras is given
by fixing a field [tex]F[/tex], and for an augmented associative algebra
[tex]A[/tex] giving a chain complex [tex]B(A)[/tex] such that
[tex]B_n(A)=\hat A^{\otimes n}[/tex] where [tex]\hat A[/tex] denotes
the augmentation ideal of [tex]A[/tex], with the differential
[tex]\partial(a_1\otimes\dots\otimes a_n)=\sum_{i=1}^{n-1}a_1\otimes\dots\otimes
a_ia_{i+1}\otimes\dots\otimes a_n[/tex]</p>
<p>If the product of [tex]A[/tex] is commutative, then the shuffle product
of tensors provides [tex]B(A)[/tex] with the structure of a differential
commutative algebra. In the talk, Fresse starts looking at the algebraic
structure of [tex]B(A)[/tex] for algebras with a homotopy commutative
structure:</p>
<div class="section" id="results">
<h3>Results</h3>
<ol class="arabic simple">
<li>If [tex]A[/tex] is an [tex]E_\infty[/tex] algebra, then
[tex]B(A)[/tex] can be equipped with the structure of an
[tex]E_\infty[/tex] algebra.</li>
<li>If [tex]A[/tex] is an [tex]E_n[/tex] algebra, then [tex]B(A)[/tex]
can be equipped with the structure of an [tex]E_{n-1}[/tex] algebra.</li>
</ol>
<div class="line-block">
<div class="line">1 was obtained by Smirnov, by Justin Smith, ...</div>
<div class="line-block">
<div class="line">2 was obtained for n=2 by an explicit construction by Hans Baues.</div>
<div class="line">For the cochain algebra of a space [tex]A=C^*(X)[/tex] similar
structure results have been obtained by Kadeishvili-Saneblidze and by
Hess-Parent-Scott.</div>
</div>
</div>
<p>A motivation for 1 can be found in arXiv:math.AT/0601085.</p>
<p>2 is a consequence of the assertion that the endomorphism prop of the
bar construction is equivalent to the prop of co-associative and
[tex]E_{n-1}[/tex]-multiplicative bialgebras.</p>
<p>Next up is Clemens Berger, talking about</p>
</div>
</div>
<div class="section" id="on-the-combinatorial-structure-of-en-operads">
<h2>On the combinatorial structure of E<sub>n</sub> operads</h2>
<p>May et al. studied the n-fold loop spaces
[tex]\Omega^nX=\operatorname{Top}_*(S^n,X)[/tex] of maps of the n-sphere
to the space, by considering the k-fold n-fold loop space, i.e. maps
[tex]S^n\to S^n\wedge S^n\wedge\dots\wedge S^n=\mathcal
O_n(k)[/tex]. It turns out that it isn't necessary to study the entire
space of maps [tex]S^n\to\mathcal O_n(k)[/tex]; it's enough to
look at the suboperad of "little n-disks" [tex]\mathcal D_n[/tex].</p>
<p>Theorem (Boardman-Vogt, May, Segal): Each n-fold loop space is
canonically a [tex]\mathcal D_n[/tex] algebra.</p>
<p>[tex]\mathcal D_\infty[/tex] gives rise to an [tex]E_\infty[/tex]
operad. [tex]\mathcal D_1[/tex] gives rise to an [tex]E_1[/tex]
operad. If we can find a similar intrinsic characterisation of
[tex]\mathcal D_n[/tex] operads for other n, it would yield a
combinatorial structure of an n-fold loop space.</p>
<p>Fiedorowicz has been able to obtain such a characterisation for n=2 (with
symmetric groups replaced by braid groups).</p>
<p>The next talk was held by Muriel Livernet on</p>
</div>
<div class="section" id="the-associative-operad-and-the-weak-order-on-the-symmetric-groups">
<h2>The associative operad and the weak order on the symmetric groups</h2>
<p>Livernet started out by defining symmetric and nonsymmetric operads, and
gave a pair of adjoint functors - the forgetful functor from symmetric
to non-symmetric operads, and tensoring with the group algebra to
symmetrize a non-symmetric operad.</p>
<p>At this point, parts of the lighting in the room fell down on the
audience. No serious consequences.</p>
<p>The weak Bruhat order can be defined as ordering permutations
[tex]\sigma\leq\tau[/tex] if
[tex]\operatorname{Inv}(\sigma)\subseteq\operatorname{Inv}(\tau)[/tex],
where [tex]\operatorname{Inv}[/tex] is the inversion set of the
permutation, i.e. the set of pairs [tex](i,j)[/tex] such that
[tex]i < j[/tex] but [tex]\sigma_i > \sigma_j[/tex].</p>
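<p>The inversion-set definition is easy to play with computationally. A
quick Python sketch of my own (0-indexed positions, not anything from the
talk):</p>

```python
def inversions(perm):
    """Inv(perm): the pairs (i, j) with i < j but perm[i] > perm[j].

    Positions are 0-indexed; perm is a tuple like (2, 1, 3)."""
    n = len(perm)
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if perm[i] > perm[j]}

def weak_leq(sigma, tau):
    """sigma <= tau in the weak order: Inv(sigma) a subset of Inv(tau)."""
    return inversions(sigma) <= inversions(tau)
```

<p>The identity permutation has empty inversion set and so sits at the
bottom of the order; the full reversal has every pair inverted and sits at
the top.</p>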
<p>Generating a new basis for the associative operad, using the Möbius
inversion on the relevant Bruhat order, the composition rule ends up
being a summation over a specific interval in the Bruhat order.</p>
<p>Interesting combinatorial question asked: How many permutations are
there (of a fixed length) such that no proper substring of the
permutation (written as a permuted string) is an entire interval. For
instance, (2,4,1,3) has no proper substring that fills out an interval,
whereas (2,4,3,1) has the substring 2,4,3 which fills out [2,4].</p>
<p>Shouldn't this question be easily answered by simply counting all
permutations that DO have a subinterval? It's easy enough to count your
way through the intervals - choose a length and a starting point. There
are length! ways to embed that interval, and an easily enough controlled
number of points to insert the interval at. The remaining parts are also
easily distributed. However, double counting will occur - which messes
up the count quite a bit. Maybe using inclusion-exclusion will lead
the way? For, for instance, [2,4] to be counted a particular time, you'd
want [2,4] to be embedded without contact with 3 and 5, as well as the
remaining two sections to be interval-free. Maybe it's doable with some
sort of recursion? Even then, there are cases of permutations with
several intervals embedded, for instance (2,4,3,6,9,8,1,7), which
contains [2,4] and [8,9].</p>
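<p>The question can at least be probed by brute force. A minimal Python
sketch of my own (not from the talk) - permutations with no proper
substring filling out an interval are what the combinatorics literature
calls <em>simple permutations</em>:</p>

```python
from itertools import permutations

def is_simple(perm):
    """True if no proper contiguous substring of length >= 2 of perm
    fills out an interval of values, i.e. perm is a simple permutation."""
    n = len(perm)
    for i in range(n):
        for j in range(i + 2, n + 1):   # substring perm[i:j], length >= 2
            if j - i == n:              # the whole permutation doesn't count
                continue
            window = perm[i:j]
            # a set of j-i values is an interval iff max - min == j-i-1
            if max(window) - min(window) == j - i - 1:
                return False
    return True

def count_simple(n):
    """Count the simple permutations of {1, ..., n} by exhaustion."""
    return sum(is_simple(p) for p in permutations(range(1, n + 1)))
```

<p>For length 4 this finds exactly the two examples of the flavour above,
(2,4,1,3) and its inverse (3,1,4,2); for length 3 there are none at all,
since any permutation of length 3 contains an adjacent pair of adjacent
values.</p>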
</div>
Report from Villars (2 in a series)2006-03-06T23:07:00+01:00Michitag:blog.mikael.johanssons.org,2006-03-06:report-from-villars-2-in-a-series.html<p>So we hit the pistes during monday morning, those of us who actually
already are here. Me, Bruno Vallette (Hi Stockholm!), Arne Weiner, Miles
Gould, Paul-Eugène Parent and Jonathan Scott, Dev Sinha and Muriel
Livernet. Skiing was MARVELOUS. Me, Arne and Miles shot off on our own,
and damn did we have a good time.</p>
<p>As I'm writing this, they're still out there - I went back when the pain
in my legs caused tears in my eyes for just turning on the skis. The
techniques were solid as concrete. The muscles not so much. It took half
an hour in the sauna to get to the point where I actually was able to
walk again.</p>
<p>Among the more amusing things that happened was that when I was stopping
to wait up for Arne and Miles in a narrow forest path, I ended up
standing too close to the edge, which subsequently gave up and dropped
me down into a few meters of powder just under a fir tree. Getting out
of there was awkward - to begin with my legs were in the wrong angle to
get out of the ski bindings; and once Miles helped me out, the only
reason I wasn't buried in snow to my shoulders was that I packed it as I
stood on it, and the carrying point ended up being roughly waist-deep.</p>
<p>Go on. You try it. Get up from waist-deep loose powder snow. Straight up,
a meter or so, onto that hardened shell that you once were skiing on.
It's an extremely amusing and rather hard exercise.</p>
<p>Now, for the mathematics. The (only) talk today was by Bruno Vallette,
on</p>
<div class="section" id="manin-products-and-koszul-duality">
<h2>Manin products and Koszul duality</h2>
<p>For associative quadratic algebras, given as a quotient of the free
tensor algebra on a space [tex]V[/tex] by [tex]A(V,R)=T(V)/(R)[/tex],
Manin defines two different products, by [tex]A(V,R)\circ
A(W,S)=A(V\otimes W,\tau(R\otimes W^2+V^2\otimes S))[/tex] for
[tex]\tau a\otimes b\otimes c\otimes d=a\otimes c\otimes b\otimes
d[/tex] the standard twisting homomorphism; and [tex]A(V,R)\bullet
A(W,S)=A(V\otimes W,\tau(R\otimes S))[/tex]. The main and most
relevant result here is that the two products are Koszul dual to each
other, i.e. [tex](A\circ B)^!=A^!\bullet B^![/tex].</p>
<p>Using the concept of lax 2-monoidal categories, it is possible to
generalize this to basically any possible interesting category - with
the same particular construction viable for Algebras, Nonsymmetric and
symmetric operads, dioperads, coloured operads, properads, PROPs (and
probably ½PROPs as well).</p>
<p>Notes on this will be forthcoming, and I'll post here once I get a
relevant URL.</p>
</div>
Report from Villars (1 in a series)2006-03-05T20:37:00+01:00Michitag:blog.mikael.johanssons.org,2006-03-05:report-from-villars-1-in-a-series.html<p>So, I've arrived in Villars sur Ollon for the Alpine Operad Workshop.
The travel was long and at times annoying, mainly because the heavy
snowfall over München and Zürich and some other places in the region
triggered extreme delays. As we were supposed to board, the poor
attendant at Nürnberg airport told us that the plane had not yet
departed from Zürich.</p>
<p>Except for that, though, the travel went fine, and after being treated
to some immensely beautiful views (glittering Lake Geneva with rows
and rows of snowcovered grapevines in front, anyone?) and reminded of
just how much I miss the deep-snow winters, I got up on this mountain in
southwestern (very much frenchspeaking) Switzerland to the Hotel du
Golf. The receptionist told me, straight off, that a number of my
colleagues had already arrived, and then I went to eat (Crêpes -
expensive and not even correctly delivered...) and started wrestling the
connector dance. Y'see, half of the connectors used in civilized Europe
work here. The other half don't. And those who do work, only do work if
they're impeccably straight. So I had to work for quite a while to
actually, y'know, get my laptop, my mp3player, my loudspeakers (I sleep
with music, mmmkay? Headphones are NOT very nice to sleep in, mmmkay?)
and my cell phone all connected. I think I disconnected 75% of the room
lights in the process.</p>
<div class="line-block">
<div class="line">Just before leaving, i.e. yesterday, I discovered that the colleague
of <a class="reference external" href="http://gooseania.blogspot.com">Gooseania</a>, <a class="reference external" href="http://hadizare.blogspot.com">Hadi
Zare</a> is also coming here to this
workshop. I plan on saying Hi. :)</div>
<div class="line-block">
<div class="line">Most of my raison d'être for this trip has evaporated since Friday
though. It was supposed to be a threefold thing: Skiing, preparing for
an Operadic PhD and looking for application spots to get that PhD
position. Since I now HAVE a position (Jena, Group Cohomology) and
it's not in operads, the latter two kinda fell into oblivion. But I'm
looking forward to skiing!!</div>
</div>
</div>
<p>I will do a 30-minute stunt online each day, trying to cram it full of
updating here, reading my mail, webcomics, livejournals, blogs,
whatever, calling my fiancee on Skype and generally hanging out on IRC.</p>
<p>Don't count on me being very available...</p>
Anti-spam measures2006-02-21T20:24:00+01:00Michitag:blog.mikael.johanssons.org,2006-02-21:anti-spam-measures.html<p>After a visit from some 5+ spams (I'm known by the spam bots! Is
that good?) I just installed a captcha-like plugin. It requires some
basic arithmetic skills from y'all; but on the other hand, I -am- a math
blog after all.</p>
Monads, algebraic topology in computation, and John Baez2006-02-21T11:06:00+01:00Michitag:blog.mikael.johanssons.org,2006-02-21:monads-algebraic-topology-in-computation-and-john-baez.html<p>Today's web browsing led me to <a class="reference external" href="http://math.ucr.edu/home/baez/week226.html">John Baez' finds in mathematical physics
for week 226</a>, which led
me to snoop around <a class="reference external" href="http://math.ucr.edu/home/baez/">John Baez'
homepage</a>, which in turn led me to
stumble across the <a class="reference external" href="http://iml.univ-mrs.fr/geocal06/">Geometry of
Computation</a> school and conference
in Marseilles right now.</p>
<p>This, in turn, leads to several different themes for me to discuss.</p>
<div class="section" id="cryptographic-hashes">
<h2>Cryptographic hashes</h2>
<p>In the week's finds, John Baez comes up to speed with the cryptographic
community on the broken state of SHA-1 and MD5. Now, this is a drama
that has been developing with quite some speed during the last 1-1½
years. It all began heating up seriously in early 2005, when Wang, Yin and
Yu presented a paper detailing a serious attack against SHA-1. Since
then, more and more tangible evidence for the inadvisability of MD5 and
upcoming problems with SHA-1 has arrived - such as several example
objects with different contents and identical MD5 hashes: postscript
documents (Letter of Recommendation and Access right granting), X.509
certificates et.c.</p>
<p>The attacks that occur are what are called <em>collision attacks</em> by those
in the trade. They find data [tex]x[/tex] and [tex]y[/tex] such that
[tex]H(x)=H(y)[/tex]. The other serious attack is the <em>preimage attack</em>,
where to a specific hash [tex]y[/tex], a message [tex]x[/tex] is found
such that [tex]H(x)=y[/tex]. Once preimages can be found, it's time to
bail ship quickly. Up until then, it may possibly be enough to pack the
life boats and look toward the horizon for possible rescues or deserted
islands.</p>
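<p>To make the distinction concrete, here's a toy Python sketch of my own -
on a severely truncated hash, nothing resembling the real attacks - where
collisions fall out of a brute-force birthday search while full preimages
stay out of reach:</p>

```python
import hashlib

def truncated_sha1(data: bytes, bits: int = 16) -> int:
    """The first `bits` bits of the SHA-1 digest, as an integer."""
    digest = hashlib.sha1(data).digest()
    return int.from_bytes(digest, "big") >> (len(digest) * 8 - bits)

def find_collision(bits: int = 16):
    """Birthday search: two distinct messages with equal truncated hashes.

    Expected work is about 2^(bits/2) hash evaluations - which is why a
    generic collision attack on an n-bit hash needs only ~2^(n/2) work,
    while a generic preimage attack needs ~2^n."""
    seen = {}
    counter = 0
    while True:
        msg = str(counter).encode()
        h = truncated_sha1(msg, bits)
        if h in seen:
            return seen[h], msg
        seen[h] = msg
        counter += 1
```

<p>With 16 bits the search returns after a few hundred tries; with the full
160 bits of SHA-1 the same loop would outlive the universe, which is why
the structural attacks of Wang et al. were such big news.</p>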
<p>Very interesting reads and link sources for this can be found, as with
most things cryptographic, with <a class="reference external" href="http://schneier.com">Bruce
Schneier</a>.</p>
</div>
<div class="section" id="topology-of-computation">
<h2>Topology of computation</h2>
<p>John Baez talks about <a class="reference external" href="http://math.ucr.edu/home/baez/universal/">some
lectures</a> he gave at the
Geometry of Computation conference in the section of Universal Algebra
and Diagrammatic Reasoning. Reading through what he did and looking over
the conference as such, I realize I would very much have wanted to be
there. They're doing linguistics and algebraic topology together, for
one!</p>
<p>John Baez' talk (well worth reviewing the slides! Go do it!) goes
through a lot of the themes I'm busying myself with from the vantage
point of computer science and computation theory. He talks about PROPs,
about PROs, about Monads and categorical monoids, about diagrams and
algebraic theories et.c. Most of it very interesting, and rather close
to my own work. If I now can work out some good method to display that
the Koszul resolutions of PROPs are the same kind of resolutions that
Baez uses to get simplicial objects representing computations, I might
just close in to something that may give a good explanation as to why
the things I like are relevant.</p>
<p>I will bring my post on Operads and PROPs at some point in the future,
as well as my exposé on Koszulness. But not today.</p>
</div>
<div class="section" id="geometry-of-roots">
<h2>Geometry of roots</h2>
<p>The third eye-catcher I found was <a class="reference external" href="http://math.ucr.edu/home/baez/roots.html">an absolutely gorgeous
picture</a> of all roots of
polynomials of degree at most 5 and with integer coefficients in
[tex][-4,4][/tex]. It makes me want to build more pictures. Now, if I
may. Dunno if this is what I -should- be spending the processor cycles
of my workplace on, but damn, they are gorgeous!</p>
<p>I wonder how hard it would be to hack a Pari/GP script that spews out
coordinates and colour tags for this with coefficients in
[tex][-5,5][/tex] in a format amenable to producing pretty graphs. Some
thoughts were awakened by the comments from John Baez - among other
things, he states that</p>
<blockquote>
Odlyzko and Poonen proved some interesting things about the set of
all roots of all polynomials with coefficients 0 or 1. If we define
a fancier Christensen set [tex]C_{d,p,q}[/tex] to be the set of
roots of all polynomials of degree [tex]d[/tex] with coefficients
ranging from [tex]p[/tex] to [tex]q[/tex], Odlyzko and Poonen are
studying [tex]C_{d,0,1}[/tex] in the limit [tex]d\to\infty[/tex].
They mention some known results and prove some new ones: this set is
contained in the half-plane [tex]Re(z) < 3/2[/tex] and contained in
the annulus [tex]1/\phi < |z| < \phi[/tex] where
[tex]\phi[/tex] is the golden ratio [tex](\sqrt5 + 1)/2[/tex]. In
fact they trap it, not just between these circles, but between two
subtler curves. They also show that the closure of this set is path
connected but not simply connected.</blockquote>
<p>Now, if the set isn't simply connected, then I'm immediately growing
interested in the homology and homotopy of the set.</p>
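<p>Lacking Pari/GP at hand, here's a rough Python/numpy sketch of my own of
what such a script would do - my guess at the format being coordinate
pairs with the degree as a colour tag:</p>

```python
from itertools import product
import numpy as np

def root_cloud(max_degree=5, coeffs=range(-5, 6)):
    """All complex roots of integer polynomials of degree <= max_degree
    with coefficients in the given range, as (re, im, degree) triples."""
    points = []
    for degree in range(1, max_degree + 1):
        for poly in product(coeffs, repeat=degree + 1):
            if poly[0] == 0:          # leading coefficient must not vanish
                continue
            for z in np.roots(poly):  # coefficients are highest-degree-first
                points.append((z.real, z.imag, degree))
    return points
```

<p>With the full [-5,5] range and degree up to 5 that's close to two million
polynomials, so expect to burn quite a few of those workplace processor
cycles before the pretty picture appears.</p>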
</div>
Reading Merkulov: Differential geometry for an algebraist (3 in a series)2006-02-13T13:55:00+01:00Michitag:blog.mikael.johanssons.org,2006-02-13:reading-merkulov-differential-geometry-for-an-algebraist-3-in-a-series.html<p>Since I cannot concentrate anyway, here comes the third installment of
my reading Merkulov. We now get to the funky stuff - introducing sheaves
and getting to what should be at the very start of basically any modern
geometry course. At least if I am to believe the geometers I know.</p>
<p>So, let's launch straight to it. A <em>presheaf</em> [tex]\mathcal F[/tex] on
the topological space [tex]\mathcal M[/tex] is just a contravariant
functor from [tex]Top(\mathcal M)[/tex] to [tex]Ab[/tex], where
[tex]Top(\mathcal M)[/tex] is the category of open subsets of
[tex]\mathcal M[/tex] with morphisms being inclusion maps.</p>
<div class="line-block">
<div class="line">So that's the one-line definition. But what does it mean?</div>
<div class="line-block">
<div class="line">Well, a functor is a map between categories that takes objects to
objects and morphisms to morphisms. So we have that [tex]\mathcal
F(U)[/tex] is an abelian group for any open set
[tex]U\subset\mathcal M[/tex]. For such a map to really be a
functor, it has to be sane in a rather precisely defined sense: namely
it should respect composition of morphisms, and it should take the
identity morphism on an object to an identity - which, ya'know,
doesn't change the morphisms before or after it.</div>
<div class="line">For the functor to be contravariant means precisely that for
[tex]f:U\to V[/tex] we get [tex]\mathcal F(f):\mathcal
F(V)\to\mathcal F(U)[/tex] - all arrows reverse by application of
the functor.</div>
</div>
</div>
<p>And for our definition of presheaves? We can read out that for every
inclusion of open subsets of our space [tex]U\subseteq V[/tex] we get a
group homomorphism [tex]\rho_U^V:\mathcal F(V)\to\mathcal
F(U)[/tex]. Functoriality requires these homomorphisms to be sane - i.e.
[tex]\rho_U^U=\mathbb 1_{\mathcal F(U)}[/tex] and
[tex]\rho_W^V\circ\rho_V^U=\rho_W^U[/tex] whenever
[tex]W\subseteq V\subseteq U[/tex].</p>
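<p>These two functoriality conditions are mechanical enough to check by
machine. A toy Python model of my own (not Merkulov's): the presheaf of
{0,1}-valued functions on the discrete two-point space, with the honest
restriction maps playing the role of the [tex]\rho[/tex]s:</p>

```python
from itertools import product

M = frozenset({1, 2})
opens = [frozenset(s) for s in [(), (1,), (2,), (1, 2)]]  # discrete topology

def sections(U):
    """F(U): every function U -> {0, 1}, encoded as a frozenset of pairs."""
    return [frozenset(zip(sorted(U), values))
            for values in product((0, 1), repeat=len(U))]

def restrict(f, U):
    """rho_U^V applied to a section f over V, for U a subset of V."""
    return frozenset((x, y) for x, y in f if x in U)

# Functor laws: rho_U^U is the identity, and restrictions compose.
for V in opens:
    for f in sections(V):
        assert restrict(f, V) == f
        for U in opens:
            if U <= V:
                for W in opens:
                    if W <= U:
                        assert restrict(restrict(f, U), W) == restrict(f, W)
```

<p>Nothing deep happens here - restriction of functions obviously composes -
but that is exactly the point of the definition: it axiomatises the way
honest restriction maps behave.</p>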
<p>We will most often, as soon as it is clear what sheaf we work with,
stick to denoting the [tex]\rho^U_V(f)[/tex] with [tex]f\mid_V[/tex]
and call it the restriction of [tex]f[/tex] to [tex]V[/tex].</p>
<div class="section" id="example">
<h2>Example</h2>
<p>Our first example will be the constant presheaf: for [tex]\mathcal
M[/tex] a topological space and [tex]\mathcal A[/tex] an abelian group,
we define [tex]\mathcal F(U)=\mathcal A[/tex] if
[tex]U\subseteq\mathcal M[/tex] is non-empty, and [tex]\mathcal
F(\emptyset)=0[/tex]. The restriction homomorphisms are the identity
whenever the smaller subset is nonempty, and the zero homomorphism when
restricting to the empty set.</p>
<p>Completely analogously, presheaves of sets, graded vector spaces,
algebras, modules over an algebra et.c. can be defined. Throughout, the
presheaves are just contravariant functors from [tex]Top(\mathcal
M)[/tex] to the relevant category.</p>
<div class="section" id="presheaves-over-presheaves">
<h3>Presheaves over presheaves</h3>
<p>Suppose [tex]\mathcal R[/tex] is a presheaf of rings with restrictions
[tex]\rho[/tex] and [tex]\mathcal M[/tex] is a presheaf of abelian
groups with restrictions [tex]\hat\rho[/tex] on some topological space
[tex]\mathcal T[/tex] such that whenever [tex]U\subseteq\mathcal
T[/tex] we know that [tex]\mathcal M(U)[/tex] is a [tex]\mathcal
R(U)[/tex]-module and for any [tex]V\subseteq U\subseteq\mathcal
T[/tex] we also know that
[tex]\hat\rho_V^U(ax)=\rho_V^U(a)\hat\rho_V^U(x)[/tex] for
[tex]a\in\mathcal R(U)[/tex] and [tex]x\in\mathcal M(U)[/tex] so
that the restrictions agree with the module structure. Then we call
[tex]\mathcal M[/tex] a presheaf of modules over a presheaf of rings
[tex]\mathcal R[/tex]. By adding structure to the modules in
[tex]\mathcal M[/tex] we can define presheaves of algebras or Lie
algebras et.c. over a fixed presheaf of (graded) commutative rings
[tex]\mathcal R[/tex].</p>
</div>
</div>
<div class="section" id="id1">
<h2>Example</h2>
<div class="line-block">
<div class="line">Let [tex]R[/tex] be a graded [tex]k[/tex]-algebra for a field
[tex]k[/tex]. A <em>derivation</em> of [tex]R[/tex] of degree [tex]n[/tex] is
an element [tex]d\in\operatorname{Hom}_n(R,R)[/tex] satisfying the condition</div>
<div class="line-block">
<div class="line">[tex]d(ab)=(da)b+(-1)^{n|a|}a(db)[/tex]</div>
<div class="line">Let [tex]Der_nR[/tex] be the vector space of all derivations of
degree [tex]n[/tex] and let</div>
<div class="line">[tex]Der R=\bigoplus_{n\in\mathbb Z}Der_n R[/tex]</div>
<div class="line">Now, [tex]DerR[/tex] has a natural [tex]R[/tex]-module structure by
[tex]ad=l_a\circ d[/tex] where [tex]l_a:b\mapsto ab[/tex] for
[tex]a,b\in R[/tex]. Furthermore, there is a natural structure of
graded Lie algebra to [tex]DerR[/tex] with brackets given as
[tex][d_1,d_2]=d_1\circ d_2-(-1)^{|d_1||d_2|}d_2\circ
d_1[/tex].</div>
</div>
</div>
<p>Now, if [tex]\mathcal R[/tex] is a presheaf of algebras, then the
associated collection [tex]Der\mathcal R[/tex] is naturally a presheaf
of [tex]\mathcal R[/tex]-modules. It is also a presheaf of Lie algebras
on the space.</p>
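<p>In the ungraded case (everything in degree 0, so the Koszul signs
disappear) both the Leibniz rule and the bracket are easy to check by
machine. A small Python sketch of my own for [tex]R=k[x][/tex], with
polynomials encoded as exponent-to-coefficient dicts:</p>

```python
def mul(p, q):
    """Product of polynomials given as {exponent: coefficient} dicts."""
    r = {}
    for i, a in p.items():
        for j, b in q.items():
            r[i + j] = r.get(i + j, 0) + a * b
    return {k: v for k, v in r.items() if v}

def add(p, q):
    r = dict(p)
    for k, v in q.items():
        r[k] = r.get(k, 0) + v
    return {k: v for k, v in r.items() if v}

def d_dx(p):
    """The derivation d/dx."""
    return {i - 1: i * a for i, a in p.items() if i >= 1}

def x_d_dx(p):
    """The derivation x*d/dx, i.e. l_x composed with d/dx."""
    return {i: i * a for i, a in p.items() if i >= 1}

def is_derivation(d, p, q):
    """Leibniz rule d(pq) == (dp)q + p(dq), checked on one pair."""
    return d(mul(p, q)) == add(mul(d(p), q), mul(p, d(q)))

def bracket(d1, d2):
    """[d1, d2] = d1.d2 - d2.d1 (no signs in the ungraded case)."""
    return lambda p: add(d1(d2(p)), {k: -v for k, v in d2(d1(p)).items()})

p, q = {0: 1, 1: 2}, {2: 3}                        # p = 1 + 2x, q = 3x^2
assert is_derivation(d_dx, p, q)
assert is_derivation(x_d_dx, p, q)
assert is_derivation(bracket(d_dx, x_d_dx), p, q)  # the bracket is one too
```

<p>The composition [tex]d_1\circ d_2[/tex] on its own is <em>not</em> a
derivation - it produces second-order cross terms - and the whole point of
the bracket is that those cross terms cancel.</p>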
</div>
<div class="section" id="id2">
<h2>Example</h2>
<p>For an open subset [tex]U\subseteq M[/tex] of a fixed topological space
[tex]M[/tex] let [tex]\mathcal E^0(U)[/tex] be the vector space of all
complex valued continuous functions on [tex]U[/tex]. For every pair of
open subsets [tex]V\subseteq U[/tex] let [tex]\rho^U_V:\mathcal
E^0(U)\to\mathcal E^0(V)[/tex] be the usual restriction of a
continuous function [tex]f:U\to\mathbb C[/tex] to
[tex]f\mid_V:V\to\mathbb C[/tex], so that [tex]f\mid_V(v)=f(v)[/tex]
for [tex]v\in V[/tex]. Then [tex]\mathcal E^0[/tex] is a presheaf of
continuous functions on [tex]M[/tex] and is often denoted by
[tex]\mathcal E^0_M[/tex].</p>
<p>If [tex]M[/tex] is a smooth manifold we can instead take smooth functions
everywhere; we then get a presheaf [tex]\mathcal E^\infty_M[/tex]
of smooth functions on [tex]M[/tex].</p>
<p>If [tex]M[/tex] is a complex manifold we can take holomorphic functions
everywhere to get a presheaf [tex]\mathcal O_M[/tex] of holomorphic
functions on [tex]M[/tex].</p>
<div class="section" id="sheaves">
<h3>Sheaves</h3>
<div class="line-block">
<div class="line">A presheaf [tex]\mathcal F[/tex] on a topological manifold is called
a sheaf if, for every open set [tex]U\subseteq M[/tex] and every
family of open subsets [tex]U_i\subseteq U[/tex] with
[tex]U=\bigcup U_i[/tex], the following conditions are satisfied:</div>
<div class="line-block">
<div class="line">If for [tex]f, g\in\mathcal F(U)[/tex] we have, for all
[tex]i[/tex],</div>
<div class="line">[tex]f\mid_{U_i}=g\mid_{U_i}[/tex]</div>
<div class="line">then [tex]f=g[/tex]; and</div>
<div class="line">for every family of elements [tex]f_i\in\mathcal F(U_i)[/tex]
with the property that [tex]f_i\mid_{U_i\cap
U_j}=f_j\mid_{U_i\cap U_j}[/tex] there is some
[tex]f\in\mathcal F(U)[/tex] such that
[tex]f\mid_{U_i}=f_i[/tex] for all [tex]i[/tex].</div>
</div>
</div>
<p>So a sheaf is a presheaf where, for any covering of an open set,
equality on all the covering sets implies equality on the covered set,
and where, if a family of elements looks like it came from an object
higher up, then that object really does exist.</p>
<p>Note that the constant presheaf above is only a sheaf if either the
abelian group is trivial or the space contains no non-intersecting open
sets. If both of these conditions fail, then for a pair of
non-intersecting [tex]U,V[/tex] and distinct [tex]f_1,f_2[/tex] we
know that [tex]U\cap V=\emptyset[/tex] and thus that
[tex]f_1\mid_{U\cap V}=f_2\mid_{U\cap V}=0[/tex], but there is no
[tex]f\in\mathcal F(U\cup V)=A[/tex] such that
[tex]f\mid_U=f_1[/tex] and [tex]f\mid_V=f_2[/tex], since all
restrictions are identity homomorphisms and [tex]f_1\neq f_2[/tex].
Thus it is not a sheaf.</p>
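<p>The failure is concrete enough to run through by machine. A toy Python
check of my own, for the constant presheaf with [tex]A=\{0,1\}[/tex] on the
discrete two-point space and the cover by the two singletons:</p>

```python
A = {0, 1}

def constant_presheaf(U):
    """F(U) = A for nonempty U, and the zero object for the empty set."""
    return set(A) if U else {None}

def restrict(f, U, V):
    """Restriction F(U) -> F(V): the identity for nonempty V, zero else."""
    return f if V else None

U1, U2 = frozenset({1}), frozenset({2})
f1, f2 = 0, 1        # distinct sections over U1 and U2

# They agree on the overlap U1 n U2, which is empty ...
assert restrict(f1, U1, U1 & U2) == restrict(f2, U2, U1 & U2)

# ... yet no single f in F(U1 u U2) = A restricts to both of them.
gluings = [f for f in constant_presheaf(U1 | U2)
           if restrict(f, U1 | U2, U1) == f1
           and restrict(f, U1 | U2, U2) == f2]
assert gluings == []  # the gluing axiom fails
```

<p>Replacing the constant sections by locally constant ones, as in the next
paragraph, repairs exactly this: over [tex]U_1\cup U_2[/tex] the locally
constant functions form [tex]A\times A[/tex] rather than [tex]A[/tex], and
the gluing exists.</p>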
<p>This defect, however, is easily fixed. We define, for an open subset
[tex]U[/tex], [tex]\mathcal F(U)[/tex] to be the set of all locally constant maps
[tex]f:U\to A[/tex]. Clearly, if [tex]U[/tex] is connected, then
[tex]\mathcal F(U)=A[/tex], since each map then is uniquely determined
by its single value. The restriction maps are then the usual restrictions of
maps. The result is a sheaf of locally constant functions with values in
[tex]A[/tex], and is often denoted by the same symbol [tex]A[/tex].</p>
<p>All the presheaves [tex]\mathcal E^0_M[/tex], [tex]\mathcal
E^\infty_M[/tex] and [tex]\mathcal O_M[/tex] are sheaves though.
Their presheaves of derivations are also sheaves.</p>
<p>The sheaf [tex]\mathcal E^\infty_M[/tex] is called the <em>structure
sheaf of a smooth manifold</em> [tex]M[/tex], whereas the associated sheaf
[tex]\mathcal T_M=Der\mathcal E^\infty_M[/tex] is called the
<em>tangent sheaf</em> or the <em>sheaf of smooth vector fields on [tex]M[/tex]</em>.</p>
<p>The sheaf [tex]\mathcal O_M[/tex] is called the <em>structure sheaf of a
complex manifold [tex]M[/tex]</em>, and the <em>tangent sheaf</em> or <em>sheaf of
holomorphic vector fields on [tex]M[/tex]</em> is defined as above as the
sheaf of derivations. Note that any complex manifold also has a
structure of smooth real manifold and thus also has the associated
sheaves [tex]\mathcal E^\infty_{M_r}[/tex] and [tex]\mathcal
T_{M_r}[/tex].</p>
</div>
</div>
<div class="section" id="id3">
<h2>Example</h2>
<p>Let's define a presheaf on the complex plane as a complex manifold. For
open [tex]U\subseteq\mathbb C[/tex] define [tex]\mathcal F(U)[/tex]
to be the vector space of all bounded holomorphic functions on
[tex]U[/tex], with the restriction being the usual restriction of a
holomorphic function. This is obviously a presheaf. However, it's not a
sheaf: take the covering [tex]\mathbb C=\bigcup_{i\in\mathbb
N}U_i[/tex] for [tex]U_i=\{z\in\mathbb C\mid |z| < i\}[/tex]. The
functions [tex]f_i(z)=z[/tex] are bounded on each [tex]U_i[/tex] and agree
on all overlaps, but the only function they could glue to,
[tex]f(z)=z[/tex], is not bounded on all of [tex]\mathbb C[/tex].</p>
<p>From this we learn that non-local properties on presheaves often cause
the presheaf to fail being a sheaf.</p>
<div class="section" id="morphisms-and-categorical-structure">
<h3>Morphisms and categorical structure</h3>
<p>A morphism of (pre)sheaves [tex]\mathcal F\to\mathcal G[/tex] is
defined in the obvious way - as a family of homomorphisms of abelian
groups [tex]\mathcal F(U)\to\mathcal G(U)[/tex] such that the obvious
diagrams commute. That is, it doesn't matter if you first restrict and
then follow the morphism or first follow the morphism and then restrict
- the result should be the same. These morphisms make the definition
needed to have a category of sheaves, and in this category, we receive
the usual definition of an isomorphism, of inclusions, et.c.</p>
<div class="line-block">
<div class="line">Note that the following inclusion maps are all morphisms of sheaves of
rings:</div>
<div class="line-block">
<div class="line">[tex]\mathbb C\to\mathcal O[/tex]</div>
<div class="line">[tex]\mathcal O\to\mathcal E^\infty[/tex]</div>
<div class="line">[tex]\mathcal E^\infty\to\mathcal E^0[/tex]</div>
</div>
</div>
<p>For the next installment, we're going to stalks and exact sequences. And
germs! Wouldya look at that? We've gone from maritime terminology to
agricultural terminology...</p>
</div>
</div>
This came faster than expected...2006-02-13T10:09:00+01:00Michitag:blog.mikael.johanssons.org,2006-02-13:this-came-faster-than-expected.html<p>Breaking news! Just in from /.</p>
<p>According to <a class="reference external" href="http://www.securityfocus.com/brief/134">this article</a>,
there is a Cincinnati-based company that just had two of its employees
implant glass-encapsulated RFID tags in their biceps as a part of the
access control system to their datacenter.</p>
<p>And we're one step closer to the artificial linking of identity
verification to body parts.</p>
<p>I see two aspects to discuss here. One is of the inherent security
problems with the solution, and the other is about the sci-fi feel and
possible problems and antagonists.</p>
<p>So let's start with the second aspect. I can remember a lesson in eighth
grade, discussing in our social sciences class, where I suggested the use of
passive radio transmitters to implant small chips in people that would
work as a central for identification and verification. The implanted
chip would be used as ID card, as credit card et.c. et.c., and you
wouldn't have to juggle cards at all any longer. I was quite taken by
the vision I had - until my baptist pastor of a teacher started quoting
Revelations at me, claiming that such an implant would be a perfect
example of how the Mark of the Beast would manifest.</p>
<p>And even if you don't take the christian whacko-angle on it, there is a
teeensy problem with a big brother society inherent in the construction.
All of a sudden, all you'd need are RFID readers with loggers placed all
over town, and everybody with a tag could be tracked. Why not tag
everyone convicted for a crime, so that we can check if they had been
present at new crime scenes. Better yet, why not tag everyone accused of
a crime so that even if we don't get them the first time we could get
them the second time! Or .. I know! Let's tag EVERYBODY! I mean, it's
not like you'd get into trouble unless you did something wrong, citizen.</p>
<p>Hrm. No, that leads down the slippery slope. Far too quickly.</p>
<p>How about inherent security? If you carry the chip in your biceps, you
should be home free? Right? Right?</p>
<p>Wrong. As seen in the article above, there are already methods out there
in the wild to copy RFID chips, including the model used here. So you
get an implant, with all that means of infection risks and what not. And
you go shopping a few weeks later. In the scuffle at the mall, someone
is carrying an RFID scanner/copier - and then promptly produces a
<strong>non-body internal</strong> tag that is (to a scanner) indistinguishable from
yours. And look at that pretty security barrier go. Going ... going ...
gone!</p>
<p>In the end, I find it somewhat cool that my childhood dreams and
visions are coming into the world, but I don't think I'd take on a
bodily carried RFID any time soon.</p>
Reading John M. Lee - Introduction to Smooth Manifolds (1 of 1)2006-02-12T23:14:00+01:00Michitag:blog.mikael.johanssons.org,2006-02-12:reading-john-m-lee-introduction-to-smooth-manifolds-1-of-1.html<p>If I'm going to take this renewed interest in trying to understand
differential geometry more seriously, I might as well read more than one
source on it. So, I'll start a sequence of posts on this book as well.</p>
<p>Just like Merkulov, Lee starts with a short definition of what a
topological manifold is, including definitions for the terms needed for
the treatment.</p>
<div class="section" id="definition">
<h2>Definition</h2>
<p>An n-dimensional topological manifold is a second countable Hausdorff
space that is locally Euclidean of dimension n.</p>
<p>Next, Lee goes on to define coordinate charts. I won't repeat the
treatment, since he doesn't really bring anything Merkulov hasn't talked
about in some manner or other. Atlases, smooth structures and
equivalence of atlases also merit some treatment.</p>
<p>The first really new thing I find is the Lemma 1.4. Lee points out that
for a topological manifold M, every smooth atlas is contained in a
unique maximal smooth atlas - this is probably a rather straightforward
application of Zorn's lemma, but Lee gives an explicit construction of
the unique maximal smooth atlas as the atlas of all charts that are
smoothly compatible with every chart in the original atlas. There are
some technicalities to be checked to verify that this is an atlas and
that it is maximal; but it all boils down to "because it's compatible
anyway".</p>
<div class="line-block">
<div class="line">Furthermore, for a second part he affirms that two smooth atlases
determine the same maximal smooth atlas iff their union is a smooth
atlas. The proof of this is left to the reader; but what would my
blogposts be if not the reader doing the things he's supposed to do?
So here goes.</div>
<div class="line-block">
<div class="line">Suppose two smooth atlases determine the same maximal smooth atlas.
Say [tex]\mathcal A[/tex] and [tex]\mathcal B[/tex]. We want to show
that then their union is a smooth atlas. Obviously their union is an
atlas, so we need only verify smoothness. So take some pair of charts
[tex](U_{\mathcal A},\phi_{\mathcal A})[/tex] and
[tex](U_{\mathcal B},\phi_{\mathcal B})[/tex]. We need to show
that on the respective images of [tex]U_{\mathcal A}\cap
U_{\mathcal B}[/tex] the functions [tex]\phi_{\mathcal
A}\circ\phi_{\mathcal B}^{-1}[/tex] and [tex]\phi_{\mathcal
B}\circ\phi_{\mathcal A}^{-1}[/tex] are smooth. So we pick some
arbitrary point [tex]x_0\in\phi_{\mathcal A}(U_{\mathcal
A}\cap U_{\mathcal B})[/tex] and want to show smoothness at this
point.</div>
<div class="line">But we can use the maximal smooth atlas [tex]\mathcal C[/tex] now.
We pick a chart [tex](U_{\mathcal C},\phi_{\mathcal C})[/tex]
such that [tex]x_0\in U_{\mathcal C}[/tex]. Since this chart is
from the maximal smooth atlas, it is compatible with both
[tex]\mathcal A[/tex] and [tex]\mathcal B[/tex]. More precisely, this means
that [tex]\phi_{\mathcal C}\circ\phi_{\mathcal A}^{-1}[/tex] is
smooth, and so is [tex]\phi_{\mathcal B}\circ\phi_{\mathcal
C}^{-1}[/tex]. If we compose these two functions, we get a new
function that takes points in [tex]\phi_{\mathcal A}(U_{\mathcal
A}\cap U_{\mathcal B})[/tex] to points in [tex]\phi_{\mathcal
B}(U_{\mathcal A}\cap U_{\mathcal B})[/tex]. And this function
is, as it is the composition of two smooth functions, also smooth.</div>
<div class="line">For a different point we may have to pick a different chart from
[tex]\mathcal C[/tex], but since all charts in [tex]\mathcal C[/tex]
are smoothly compatible with all charts in both [tex]\mathcal A[/tex]
and [tex]\mathcal B[/tex], this won't really change much of the
argument. And thus we know that if both determine the same maximal
atlas then their union is a smooth atlas.</div>
</div>
</div>
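<p>In one line, the chart-juggling above is the factorization (in the notation above, with the middle chart [tex](U_{\mathcal C},\phi_{\mathcal C})[/tex] taken from the maximal atlas):</p>

```latex
\phi_{\mathcal B}\circ\phi_{\mathcal A}^{-1}
  = \left(\phi_{\mathcal B}\circ\phi_{\mathcal C}^{-1}\right)
    \circ\left(\phi_{\mathcal C}\circ\phi_{\mathcal A}^{-1}\right)
```

<p>valid near [tex]\phi_{\mathcal A}(x_0)[/tex]; both factors are smooth because the chart from [tex]\mathcal C[/tex] is compatible with each atlas, so the composite is smooth.</p>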
<p>Suppose now that the two atlases [tex]\mathcal A[/tex] and
[tex]\mathcal B[/tex] have a smooth atlas as their union. That means
more specifically that all the charts in [tex]\mathcal B[/tex] are
smoothly compatible with all charts in [tex]\mathcal A[/tex], and thus
that [tex]\mathcal B[/tex] is contained in the maximal smooth atlas
containing [tex]\mathcal A[/tex] and vice versa. But since the maximal
smooth atlas containing a specific atlas was unique, they both have the
same maximal smooth atlas.</p>
<p>And this illuminates why the interesting condition Merkulov gave was
precisely that they have a smooth union.</p>
<p>Note that any manifold that can be covered by a single chart thus has a
smooth structure determined in its entirety by that chart.</p>
<p>And at this point - page 11 - Lee does something that I probably will
end up hating him for. He introduces the Einstein Summation Convention.
It's bad. It's ugly. And it stands for most of the things that led to my
flunking differential geometry in the first place. Boooo!</p>
<p>Another few pages yield an idiotic definition and the introduction of
diffeomorphisms without telling the reader what they are. I'll stop
reading this thing now.</p>
</div>
Reading Merkulov: Differential geometry for an algebraist (2 in a series)2006-02-11T23:03:00+01:00Michitag:blog.mikael.johanssons.org,2006-02-11:reading-merkulov-differential-geometry-for-an-algebrist-2-in-a-series.html<p>So, in the last installment, we got to know smooth manifolds and charts,
atlases and some nice topological tricks and tweaks. For this round, we
follow Merkulov onward, and pretty soon stumble across category theory
and sheaves. The notes I'm following here are from the link on
<a class="reference external" href="http://www.math.su.se/~sm">Merkulov's website</a>. It starts, however,
with a nice discussion of temperatures in archipelagos. Go read it - I
imagine I'm almost comprehensible at that part of the text.</p>
<p>A map from a subset of a smooth manifold to [tex]\mathbb R[/tex] is
called a smooth function on the subset if for every [tex]x[/tex] in the
subset and every coordinate chart at [tex]x[/tex], the function of
[tex]n[/tex] real variables
[tex]f\circ\phi^{-1}[/tex] is smooth at the point [tex]\phi(x)[/tex].</p>
<p>What this means is that if we have some point on our manifold, we can
take out our chart over that part of the manifold and check out the
function of our coordinates as chained through the chart and the
function from the manifold portion. This can be made even more clear by
taking a look at coastal temperatures. Suppose you're on a boat in the
Stockholm archipelago. You have a location on the earth (which is, for
our purposes, a manifold). There is some function that to each point on
earth assigns a temperature. This is our function. Now, for the area
around Vaxholm, you can take out your chart, and you get coordinates in
terms of something like "135 mm north and 25 mm west on this sheet of my
atlas!". So we have a set of coordinates: (135,25). These coordinates
correspond to a point on the manifold, namely your current position. You
can break out your thermometer and measure the temperature here, and
thus you get the function value at this position. As you move about
around Vaxholm, you get different temperatures, so with this fixed
choice of a chart over your area, you really do get a function
[tex]\mathbb R^2\to\mathbb R[/tex]. The function is smooth if
regardless of where you are or what chart you're using, this function
you get is a smooth function.</p>
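<p>A minimal numeric sketch of this chart-composition picture, with a made-up temperature field on the unit circle standing in for the archipelago (all names here are hypothetical, not from Merkulov's notes):</p>

```python
import math

def temperature(p):
    """Made-up smooth 'temperature' on the unit circle."""
    x, y = p
    return 15.0 + 3.0 * x - 2.0 * y

def phi(p):
    """Chart on the upper half-circle: project to the x-coordinate."""
    return p[0]

def phi_inv(u):
    """Inverse chart: u in (-1, 1) back to the point (u, sqrt(1-u^2))."""
    return (u, math.sqrt(1.0 - u * u))

def temperature_in_coordinates(u):
    """The composition f o phi^{-1}: an honest function (-1, 1) -> R."""
    return temperature(phi_inv(u))

# The chart really is invertible on its domain...
assert abs(phi(phi_inv(0.3)) - 0.3) < 1e-12
# ...and at u = 0 the chart points at (0, 1), where the field reads 13.0.
assert abs(temperature_in_coordinates(0.0) - 13.0) < 1e-12
```

<p>Smoothness of the function on the manifold is then just smoothness of <code>temperature_in_coordinates</code> in the ordinary calculus sense, checked chart by chart.</p>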
<p>The set of all smooth functions on the smooth manifold [tex]M[/tex] forms
an algebra over [tex]\mathbb R[/tex], which we denote by [tex]\mathcal
E^\infty(M)[/tex]. Indeed, if [tex]f[/tex] is a smooth function, then
of course [tex]\lambda f[/tex] is a smooth function for real constants
[tex]\lambda[/tex]. And [tex]f+g[/tex] is smooth if [tex]f[/tex] and
[tex]g[/tex] both are smooth. The same holds for the pointwise product
of two smooth functions. So the axioms check out.</p>
<p>The complex analytic, or holomorphic, functions on a subset of a
complex manifold are defined analogously; whereby a complex manifold is a real
manifold of even dimension, with [tex]\mathbb R^{2n}[/tex] identified
with [tex]\mathbb C^n[/tex], and the analytic structure comes from
requiring all transformation functions to be complex analytic. It's
either a [tex]2n[/tex]-dimensional real manifold or an
[tex]n[/tex]-dimensional complex manifold, depending on how you look at
it.</p>
<p>Now, note that smooth and holomorphic behave very differently from each
other. All holomorphic functions on a compact connected complex manifold are
constant; and if a holomorphic function vanishes on an
open subset of its connected domain, it vanishes everywhere; but on [tex]\mathbb
R^n[/tex] there are non-trivial smooth functions that vanish outside an
arbitrary open ball.</p>
<p>A map between two manifolds is said to be smooth if for any pair of
relevant coordinate charts, the map from [tex]\mathbb R^m[/tex] to
[tex]\mathbb R^n[/tex] obtained by going from the coordinates up to the first
manifold, applying the map to the second and then going back to coordinates ends
up being smooth, regardless of the choice of charts.</p>
<div class="line-block">
<div class="line">A smooth map of manifolds [tex]\psi\colon M_1\to M_2[/tex]
induces in an ordinary manner the <em>pullback map of rings of smooth
functions</em></div>
<div class="line-block">
<div class="line">[tex]\psi^*\colon\mathcal E^\infty(M_2)\to\mathcal
E^\infty(M_1)[/tex] by [tex]\psi^*(f)=f\circ\psi[/tex].</div>
</div>
</div>
<p>Note that every continuous map between smooth manifolds is homotopic
to a smooth map. Thus, when computing homotopy groups of a manifold
it is enough to work with homotopy classes of smooth maps
[tex]S^m\to M[/tex], which tends to simplify things.</p>
<div class="section" id="graded-algebraic-structures">
<h2>Graded algebraic structures</h2>
<p>Most interesting algebraic structures can be equipped with a <em>grading</em>.
This decomposes the structure into a direct sum of pieces, where
each piece contains only elements of the same degree -
called homogeneous. For instance, any polynomial ring is a graded
structure, by taking the degree of a monomial as its grading.</p>
<p>The elements of [tex]\Hom(A, B)[/tex] for graded structures are graded
as well in a natural way - a map [tex]f[/tex] being homogeneous of degree [tex]i[/tex] if
[tex]|f(v)|=|v|+i[/tex].</p>
<p>For structures with multiplication - i.e. anything more advanced than
modules/vector spaces - we add a sign to the multiplication according to
the Koszul sign convention: any term that interchanges the homogeneous
vectors [tex]v[/tex] and [tex]w[/tex] gets the factor
[tex](-1)^{|v||w|}[/tex].</p>
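<p>Concretely (a standard example, not one from the notes): on the twist map of a tensor product the convention reads</p>

```latex
\tau\colon V\otimes W\to W\otimes V,\qquad
\tau(v\otimes w)=(-1)^{|v||w|}\,w\otimes v
```

<p>so two odd elements pick up a minus sign when interchanged, while anything involving an even element commutes as usual.</p>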
</div>
<div class="section" id="some-examples">
<h2>Some examples</h2>
<div class="line-block">
<div class="line">A [tex]\mathbb Z[/tex]-graded Lie algebra is a [tex]\mathbb
Z[/tex]-graded vector space equipped with a special morphism [tex][ ,
]\in\Hom(V\otimes V,V)[/tex] such that</div>
<div class="line-block">
<div class="line">[tex][v_1,v_2]=-(-1)^{|v_1||v_2|}[v_2,v_1][/tex] and</div>
<div class="line">[tex](-1)^{|v_1||v_3|}[[v_1,v_2],v_3]+(-1)^{|v_2||v_1|}[[v_2,v_3],v_1]+(-1)^{|v_3||v_2|}[[v_3,v_1],v_2]=0[/tex]</div>
<div class="line">for all homogeneous [tex]v_1[/tex], [tex]v_2[/tex], [tex]v_3[/tex]
in [tex]V[/tex].</div>
</div>
</div>
<div class="line-block">
<div class="line">A [tex]\mathbb Z[/tex]-graded commutative algebra is a [tex]\mathbb
Z[/tex]-graded vector space equipped with a special morphism
[tex]\mu\in\Hom(V\otimes V,V)[/tex] such that</div>
<div class="line-block">
<div class="line">[tex]\mu(v_1,v_2)=(-1)^{|v_1||v_2|}\mu(v_2,v_1)[/tex]
and</div>
<div class="line">[tex]\mu(v_1,\mu(v_2,v_3))=\mu(\mu(v_1,v_2),v_3)[/tex]</div>
<div class="line">for all homogeneous [tex]v_1[/tex], [tex]v_2[/tex], [tex]v_3[/tex]
in [tex]V[/tex]. We tend to write [tex]v_1v_2[/tex] for
[tex]\mu(v_1,v_2)[/tex].</div>
</div>
</div>
<p>The symmetric tensor algebra [tex]\odot V[/tex] on a [tex]\mathbb
Z[/tex]-graded vector space is the quotient of the tensor algebra
[tex]\otimes V=\bigoplus_{n=0}^\infty \otimes^nV[/tex] by the ideal
generated by all expressions of the form [tex]v_1\otimes
v_2-(-1)^{|v_1||v_2|}v_2\otimes v_1[/tex]. This is a graded
commutative algebra.</p>
<p>The skew-symmetric tensor algebra [tex]\wedge V[/tex] on a
[tex]\mathbb Z[/tex]-graded vector space is the quotient of the tensor
algebra [tex]\otimes V=\bigoplus_{n=0}^\infty \otimes^nV[/tex] by
the ideal generated by all expressions of the form [tex]v_1\otimes
v_2+(-1)^{|v_1||v_2|}v_2\otimes v_1[/tex]. This is a graded
anticommutative algebra.</p>
<p>Note that [tex]\odot V\cong\wedge V[1][/tex]. More precisely
[tex]\odot^n V=(\wedge^nV[1])[-n][/tex] as graded vector spaces.</p>
</div>
<div class="section" id="exercise">
<h2>Exercise</h2>
<div class="line-block">
<div class="line">Given a [tex]\mathbb Z[/tex]-graded vector space [tex]V[/tex] show
that the structure of a [tex]\mathbb Z[/tex]-graded Lie algebra on
[tex]V[1][/tex] is equivalent to the following data: A degree
[tex]-1[/tex] element [tex][\bullet]\in\Hom(\otimes^2V,V)[/tex]
satisfying</div>
<div class="line-block">
<div class="line">[tex][v_1\bullet
v_2]=(-1)^{|v_1||v_2|+|v_1|+|v_2|}[v_2\bullet
v_1][/tex] and the Jacobi identity.</div>
</div>
</div>
</div>
<div class="section" id="proof">
<h2>Proof</h2>
<div class="line-block">
<div class="line">We take the hint in the notes and let [tex]s:V\to V[1][/tex] be the
natural degree-shifting isomorphism, and let [tex][v_1\bullet
v_2]=s^{-1}[sv_1,sv_2][/tex]. If [tex]v[/tex] has degree
[tex]n[/tex], then [tex]v[/tex] lives in degree [tex]n-1[/tex] in
[tex]V[1][/tex], and so [tex]sv[/tex] has degree [tex]n-1[/tex]. So
[tex][sv_1,sv_2][/tex] has degree [tex]|v_1|+|v_2|-2[/tex] and
[tex]|s^{-1}[sv_1,sv_2]|=|v_1|+|v_2|-1[/tex].</div>
<div class="line-block">
<div class="line">As our next step, we examine [tex][v_1\bullet v_2][/tex] and its
relationship to [tex][v_2\bullet v_1][/tex]. Thus</div>
<div class="line">[tex][v_1\bullet
v_2]=s^{-1}[sv_1,sv_2]=s^{-1}(-(-1)^{|sv_1||sv_2|}[sv_2,sv_1])[/tex]</div>
<div class="line">and now note that
[tex]|sv_1||sv_2|=(|v_1|+1)(|v_2|+1)=|v_1||v_2|+|v_1|+|v_2|+1[/tex]
so the sign (which other than that just tunnels out through
[tex]s^{-1}[/tex]) will end up yielding the expression</div>
<div class="line">[tex](-1)^{|v_1||v_2|+|v_1|+|v_2|}s^{-1}[sv_2,sv_1][/tex]</div>
<div class="line">which is just</div>
<div class="line">[tex](-1)^{|v_1||v_2|+|v_1|+|v_2|}[v_2\bullet
v_1][/tex].</div>
</div>
</div>
<p>The Jacobi identity works out the same way. So if we have a Lie structure on
[tex]V[1][/tex] we will get the described structure on [tex]V[/tex]. We
can go through basically the same moves to show that if we
have this structure, we can define a Lie bracket on [tex]V[1][/tex]
which, in the end, will yield precisely a Lie structure satisfying these
axioms.</p>
<div class="line-block">
<div class="line">It turns out that precisely this structure occurs if we take
[tex]V=k[x^1,\dots,x^n,\psi_1,\dots,\psi_n][/tex] and let
[tex]|\psi_a|=|x^a|+1[/tex]. The bracket would be defined as</div>
<div class="line-block">
<div class="line">[tex][f\bullet g]=(-1)^{|f|}\Delta(fg)-(-1)^{|f|}(\Delta
f)g-f\Delta g[/tex]</div>
<div class="line">where</div>
<div class="line">[tex]\Delta=\sum_{a=1}^n \frac{\partial^2}{\partial
x^a\partial \psi_a}[/tex]</div>
<div class="line">This bracket can be verified to satisfy the given axioms, and thus
actually defines the structure of graded Lie algebra on
[tex]V[1][/tex].</div>
</div>
</div>
<p>For the next installment, we start with section 3, which discusses
categories. Of course, categories are well known to the person writing
this, and thus I will not talk much about them, but skip on to the meaty
stuff behind them.</p>
</div>
Reading Merkulov: Differential geometry for an algebraist (1 in a series)2006-02-11T14:23:00+01:00Michitag:blog.mikael.johanssons.org,2006-02-11:reading-merkulov-differential-geometry-for-an-algebraist.html<p>I'll do this in posts and not pages on further thought...</p>
<p>Sergei Merkulov at Stockholm University gives during the spring 2006 a
course in differential geometry, geared towards the algebra graduate
students at the department. The course was planned while I was still
there, and so I follow it from afar, reading the <a class="reference external" href="http://www.math.su.se/~sm">lecture
notes</a> he produces.</p>
<p>At this page, which will be updated as I progress, I will establish my
own set of notes, sketching at the definitions and examples Merkulov
brings, and working out the steps he omits.</p>
<div class="section" id="familiar-parts-in-unfamiliar-language">
<h2>Familiar parts in unfamiliar language</h2>
<p>Merkulov begins the paper by introducing in swift terms the familiar
definitions from topology: topological spaces, continuity, homeomorphisms and
homotopy. He then goes on to discuss homotopy groups, thereby
introducing new names for things I already knew. Thus, I give you, for a
pointed topological space [tex]M[/tex]</p>
<div class="section" id="the-space-of-loops-based-at-tex-x-0-tex-tex-omega-x-0-m-tex">
<h3>The space of loops based at [tex]x_0[/tex]: [tex]\Omega_{x_0}M[/tex]</h3>
<p>This is the quite normal set of all possible loops in [tex]M[/tex] as
used when defining the fundamental group. My guess, however, is that the
terminology here used also ties this familiar object into the loop
spaces that people study.</p>
</div>
<div class="section" id="the-pontryagin-product">
<h3>The Pontryagin product</h3>
<p>This is the normal "First one loop, then the other" product of elements
in [tex]\Omega_{x_0}M[/tex]. I didn't know it carried that name
though.</p>
<p>Using these new names, the fundamental group of a space [tex]M[/tex] is
just [tex](\Omega_{x_0}M/\sim,*)[/tex] with homotopy equivalence
for [tex]\sim[/tex] and the Pontryagin product for [tex]*[/tex].</p>
<p>These definitions are <em>analogously</em> extended to embeddings of
[tex]S^n[/tex] instead of loops to form the higher-dimensional homotopy
groups. It's not clear to me, however (though probably mostly due to the
time passed since I last read about homotopy groups closely) how to extend
the Pontryagin product to hyper-spheres. So you have two hyper-spheres
that touch in the basepoint of the space. Hmmmm. If you remove that
point, you end up with two punctured spheres (homeomorphic to disks)
that stretch into the basepoint. So you glue the spheres together in the
basepoint that you reinsert. So you get basically a very narrow
hourglass. And this is homotopic to a sphere again... Yeah, I think I
can visualize that.</p>
<p>A homeomorphism [tex]f[/tex] that takes [tex]M_1[/tex] to
[tex]M_2[/tex] will also be a homeomorphism from [tex]M_1\setminus
A[/tex] to [tex]M_2\setminus f(A)[/tex], since precisely those bits
are removed which would mess with bijectivity, and continuity is
preserved due to the set difference having the induced topology. This
can be used together with the result (to be proven) that
[tex]\pi_m(S^n)=0[/tex] for [tex]m&lt;n[/tex] to show that [tex]\mathbb
R^m[/tex] and [tex]\mathbb R^n[/tex] are not homeomorphic for [tex]m\neq
n[/tex]. Indeed, should there be such a
homeomorphism, then that homeomorphism would with this result give a
homeomorphism [tex]\mathbb R^m\setminus 0[/tex] to [tex]\mathbb
R^n\setminus f(0)[/tex]. Both of these spaces are punctured Euclidean
spaces, and therefore homotopy equivalent to the corresponding spheres
and thus have the same homotopy groups as the spheres do. But we know
that homeomorphic spaces have equal homotopy groups, and that spheres of
different dimensionality have different homotopy groups (since
[tex]\pi_m(S^n)=0[/tex] but [tex]\pi_m(S^m)\neq 0[/tex]) and thus
the homeomorphism cannot exist.</p>
</div>
</div>
<div class="section" id="and-the-things-i-didn-t-know-before">
<h2>And the things I didn't know before</h2>
<p>A <em>diffeomorphism</em> is a homeomorphism such that the homeomorphism and
its inverse both are smooth. We need open sets in real vector spaces as
domain and codomain for this to work as defined. Diffeomorphisms come in
classes: [tex]C^r[/tex] - which just denotes how many times you can
differentiate both the homeomorphism and its inverse.</p>
<p>A slightly weaker concept is that of <em>local diffeomorphism</em> - which is a
map from an open set to an open set, as with diffeomorphisms, such that
for each point there is an open neighbourhood such that the map is a
homeomorphism of that neighbourhood onto its image.</p>
<p>Maps can be tested for local diffeomorphism status by calculating their
Jacobian - the determinant of the matrix [tex]\left(\frac{\partial
f^i}{\partial x^j}\right)[/tex]. When the matrix degenerates, that is
when the determinant is zero, at some point, then the map fails to be a
local diffeomorphism at that point.</p>
<div class="section" id="exercise-example">
<h3>Exercise / Example</h3>
<p>The map [tex]f:\mathbb R\to\mathbb R[/tex] given by
[tex]f(x)=x^3[/tex] is not a diffeomorphism. Indeed, the Jacobian is
[tex]3x^2[/tex], which is [tex]0[/tex] for [tex]x=0[/tex].</p>
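<p>The same test can be run numerically; a small sketch using central differences (nothing specific to Merkulov's notes):</p>

```python
def jacobian_1d(f, x, h=1e-6):
    """1x1 Jacobian, i.e. the derivative, by central differences."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

f = lambda x: x ** 3

# f'(x) = 3x^2 is nonzero away from the origin...
assert abs(jacobian_1d(f, 1.0) - 3.0) < 1e-6
# ...but degenerates at x = 0, which is exactly where the inverse
# x -> x^(1/3) fails to be differentiable.
assert abs(jacobian_1d(f, 0.0)) < 1e-6
```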
<div class="section" id="manifolds">
<h4>Manifolds</h4>
<p>A <em>covering</em> is a family of open sets such that the union of the sets
contain the space.</p>
<p>A covering is <em>locally finite</em> if every point has an open neighbourhood
which intersects only finitely many covering sets.</p>
<p>A space is <em>paracompact</em> if every covering has a locally finite
refinement. A space is <em>compact</em> if every covering has a finite
subcovering.</p>
<p>A space is an <em>[tex]n[/tex]-dimensional topological manifold</em> if it is
Hausdorff and every point has a neighbourhood homeomorphic to an open
subset of [tex]\mathbb R^n[/tex].</p>
</div>
</div>
<div class="section" id="examples">
<h3>Examples</h3>
<p>The circle is a 1-manifold. So are many algebraic curves. However, not
necessarily all 1-dimensional algebraic varieties - for instance the
variety defined by the equation [tex]xy=0[/tex] fails, since every
neighbourhood of [tex](0,0)[/tex] by necessity has to be a cross-shape,
which in no way is an open subset of any [tex]\mathbb R^n[/tex] and
definitely not of [tex]\mathbb R[/tex].</p>
<p>A <em>chart</em> at a point [tex]x_0[/tex] of a topological manifold
[tex]M[/tex] is a pair [tex](U,\phi)[/tex] consisting of an open
neighbourhood [tex]U[/tex] of [tex]x_0[/tex] and a homeomorphism
[tex]\phi:U\to V[/tex] to an open set [tex]V\subset\mathbb
R^n[/tex]. The actual values in the image of a point are called the
coordinates of the point. An atlas is a family of charts covering the
space.</p>
<p>An example. Take the circle, embedded in [tex]\mathbb R^2[/tex] as the
unit circle. I shall construct a few atlases on the circle and compare
them.</p>
<p>For the first one, let the atlas be defined on four open (in the induced
topology) subsets of the circle - namely [tex]x>0[/tex], [tex]x<0[/tex],
[tex]y>0[/tex] and [tex]y<0[/tex] being the defining relations for each
of the four. Each such subset is a half-circle, excluding the endpoints.
Furthermore, for the open subset defined by [tex]y>0[/tex] let a map to
[tex]\mathbb R[/tex] be given as [tex](x,y)\mapsto x[/tex].</p>
<p>First off: Is this a homeomorphism? It is a continuous map, quite
clearly. Furthermore, it is a bijection onto the open subset
[tex](-1,1)[/tex] in [tex]\mathbb R[/tex]. And the inverse, given by
the function [tex]x\mapsto (x,\sqrt{1-x^2})[/tex] is continuous as
well. Thus it is, indeed, a homeomorphism.</p>
<p>To get the other three charts, change x for y, and twiddle the sign of
the square root in the inverse to choose which half-circle you want to
get to. Everything checks out, mutatis mutandis.</p>
<p>For the second one, take the same partitions, but now use the positive
radian distance to the x-axis instead. So the half-circle given by
[tex]y>0[/tex] turns out to be mapped homeomorphically to
[tex](0,\pi)[/tex], [tex]x<0[/tex] maps to [tex](\pi/2,3\pi/2)[/tex],
[tex]y<0[/tex] maps to [tex](\pi,2\pi)[/tex] and x>0 maps to
[tex](3\pi/2,5\pi/2)[/tex].</p>
<p>For a pair of charts that overlap, every point in the intersection has
two sets of coordinates - one from each chart. Say the charts are
[tex](U_\alpha,\phi_\alpha)[/tex] and
[tex](U_\beta,\phi_\beta)[/tex]. Then one set of coordinates are
the coordinates of [tex]\phi_\alpha(x)=\{x_\alpha^i\}_{1\leq
i\leq n}[/tex] and the other set of coordinates are the coordinates of
[tex]\phi_\beta(x)=\{x_\beta^j\}_{1\leq j\leq n}[/tex]. Thus
for every pair of overlapping charts, we get a family of [tex]n[/tex]
functions [tex]f^i_{\alpha\beta}:\mathbb R^n\to\mathbb R[/tex],
each taking the point [tex]\phi_\alpha(x)[/tex] to the
[tex]i[/tex]:th coordinate in [tex]\phi_\beta(x)[/tex].</p>
<p>An atlas is said to be a <em>smooth atlas</em> if the transformation maps
[tex]f_{\alpha\beta}[/tex] for each pair of overlapping charts are
smooth.</p>
<p>For our examples, the transformation maps are as follows:</p>
<div class="line-block">
<div class="line">For the projection maps (the first example), the overlaps are in each
quadrant. I'll work through the first quadrant, leaving the rest to be
done equivalently. The point [tex](x,y)[/tex] is mapped to
[tex]x[/tex] and [tex]y[/tex] respectively in the two charts. So if we
go from [tex](0,1)[/tex] to [tex](0,1)[/tex] from the
[tex]x[/tex]-axis up and then over to the [tex]y[/tex]-axis, we would
end up with the following map:</div>
<div class="line-block">
<div class="line">[tex]x\mapsto(x,\sqrt{1-x^2})\mapsto\sqrt{1-x^2}[/tex]</div>
<div class="line">This is indeed a smooth map, and all other maps are of the same form;
so we have a smooth atlas.</div>
</div>
</div>
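<p>To make the first-quadrant transfer concrete, here is a small numeric sketch of the two projection charts and their transition map (the function names are mine, not the notes'):</p>

```python
import math

def y_chart_inv(x):
    """Inverse of the y>0 chart: coordinate x back to (x, sqrt(1-x^2))."""
    return (x, math.sqrt(1.0 - x * x))

def x_chart(p):
    """The x>0 chart: project a circle point to its y-coordinate."""
    return p[1]

def transition(x):
    """Transfer first-quadrant coordinates from the y>0 chart to the x>0 chart."""
    return x_chart(y_chart_inv(x))

# On (0, 1) this is exactly x -> sqrt(1 - x^2), smooth away from the endpoints.
assert abs(transition(0.6) - 0.8) < 1e-12
```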
<div class="line-block">
<div class="line">For the radian distance atlas, we again take this quadrant as a
comparison point, but for a different reason. In all other quadrants,
the conversion map is simply the identity, but here something
nontrivial occurs.</div>
<div class="line-block">
<div class="line">We start with the quadrant as a part of the [tex]y>0[/tex] chart and
want to convert this to a point in the [tex]x>0[/tex] chart. So we
have a point [tex](x,y)[/tex] on the circle. This point has radian
distance [tex]\varphi[/tex] from the [tex]x[/tex]-axis. So our map is
[tex](0,\pi/2)\to(2\pi,5\pi/2)[/tex] by</div>
<div class="line">[tex]\varphi\mapsto(x,y)\mapsto\varphi+2\pi[/tex]</div>
<div class="line">due to the way the [tex]x>0[/tex] chart is built.</div>
</div>
</div>
<p>Two smooth atlases are <em>equivalent</em> if their union is a smooth atlas. In
this entire treatment, smooth can be replaced with <em>analytic</em>, <em>class
[tex]C^r[/tex]</em> et.c. to yield a theory for such atlases.</p>
<div class="line-block">
<div class="line">So. Are my two atlases equivalent? The union is the atlas with charts
all defined on the four half-circles, but with different transfers. Within
each atlas, we can step back and forth smoothly between charts. So if
I, again, treat one transfer from one half-circle with the projection
chart to the same half-circle with the radial distance chart, that
transfer can be extended analogously to the rest of the charts. For
non-identical domains, we can always first step over using identical
domains and then use the atlas-internal transfer to get where we want
to go.</div>
<div class="line-block">
<div class="line">So, we take, again [tex]y>0[/tex] as our workspace. We have to
charts, one which sends it to [tex](-1,1)[/tex] and one which sends it
to [tex](0,\pi)[/tex]. So we start with the point [tex]\varphi[/tex]
in [tex](0,\pi)[/tex]. This goes to
[tex](\cos(\varphi),\sin(\varphi))[/tex], and then is sent to
[tex]\cos(\varphi)[/tex]. So that's smooth. For the other possible
order, we go from [tex](-1,1)[/tex] to [tex](0,\pi)[/tex] sending
[tex]x[/tex] to [tex]\arccos(x)[/tex]. So everything checks out fine,
and my atlases are equivalent.</div>
</div>
</div>
<p>Now. Is this an equivalence relation? Reflexivity and symmetry hold
straight off. The remaining question is transitivity: if the atlas
[tex]\alpha[/tex] is equivalent to the atlas [tex]\beta[/tex], and
[tex]\beta[/tex] with [tex]\gamma[/tex], are then [tex]\alpha[/tex]
and [tex]\gamma[/tex] equivalent? For this to hold, [tex]\alpha[/tex]
and [tex]\gamma[/tex] need to be compatible. But the transition from
[tex]\alpha[/tex] to [tex]\gamma[/tex] can be made via the transfer
functions from [tex]\beta[/tex] while ignoring [tex]\beta[/tex] as an
atlas. So since composition keeps smoothness intact, we're home free.</p>
<p>Thus, we can form equivalence classes of atlases. A manifold with such
an equivalence class is said to have a smooth (real analytic, class
[tex]C^r[/tex]) structure.</p>
<p>This concludes sections up to 1.8 of the notes. More to come later.</p>
</div>
</div>
New blog found!2006-02-10T15:08:00+01:00Michitag:blog.mikael.johanssons.org,2006-02-10:new-blog-found.html<p>I stumbled across the blog
<a class="reference external" href="http://epsilondelta.wordpress.com">Epsilon-Delta</a>, with a brilliant
article series on mathematics as key to effective programming. This in
itself merits a closer look at the blog; which in turn merits it a place
in my blogroll. Go and read it, you too!</p>
Borsuk-Ulam and West Wing2006-02-07T23:56:00+01:00Michitag:blog.mikael.johanssons.org,2006-02-07:borsuk-ulam-and-west-wing.html<p>In West Wing 4x20, CJ states that there are two antipodal points with
identical temperature on the earth, as an argument why it should be
possible to imagine that an egg could stand on its end at the spring
equinox. This particular plotline also has her most emphatically
claiming that this should not be possible at the autumn equinox. Why
this particular physics is complete idiocy will be left as an exercise
to the interested reader, and instead I will focus on the other claim.</p>
<p>This is, in fact, true. It's a corollary to one of the prettiest theorem
conglomerates I have ever seen: the Borsuk-Ulam theorem(s). Alas, I
haven't got my sources on it here at the moment, so I won't give you the
deep, in-depth survey I want to give; but I do want to give a bit of an
overview as to why the claim CJ supports her insane theory with is
actually true.</p>
<p>Temperature can be seen as a function from location to the thermometer.
For each place on the earth, there is a temperature rating. Furthermore,
this function is continuous, since there are no discontinuities - no
sudden jumps in temperature between close points.</p>
<p>This sudden jump thing merits closer explanation. It says that by
measuring closer and closer together, you can get the difference in
temperature as small as you want to. It doesn't prevent steep
temperature shifts, it only prevents insane ones. This assumption, as such,
is a valid one since temperature differences tend to flatten themselves
out - if you place a hot and a cold bit close to each other, the cold
one heats up and the hot one cools down.</p>
<p>So, we know that it's continuous. This, it turns out, is an incredibly
powerful thing to know. It brings in the entire toolbox of topology. Which, in
turn, brings us closer to the topic of the post - the Borsuk-Ulam
theorem.</p>
<p>The theorem has a million different equivalent statements. Borsuk
himself proved that any <em>antipodal</em> function [tex]f[/tex] from the
sphere to itself (meaning that [tex]f(-x)=-f(x)[/tex]) has <em>odd degree</em>.
I won't enter into what this means more precisely, but it has a few cool
corollaries:</p>
<ol class="arabic simple">
<li>Any family of [tex]n+1[/tex] closed sets covering [tex]S^n[/tex] has
a member containing an antipodal pair.</li>
<li>Any map from the sphere to the plane must send some antipodal pair to
the same point.</li>
</ol>
<p>1 has a few fun interpretations. I will feed you with those when I get
hold of my material again, or actually, you know, think of them.</p>
<p>2, on the other hand, is what we need to prove CJ's statement. Our
temperature function [tex]t[/tex] can be used to construct a function
from the sphere to the plane by sending a point [tex]P[/tex] on the
earth to the point [tex](t(P),t(P))[/tex]. This function, in turn, sends
some antipodal pair to the same point, so there are points [tex]P[/tex] and
[tex]-P[/tex] such that [tex](t(P),t(P))=(t(-P),t(-P))[/tex]. But this
also means [tex]t(P)=t(-P)[/tex], so there is some pair of antipodal
points on the earth with the same temperature.</p>
<p>Thanks to <a class="reference external" href="mailto:nerdy2@#math">nerdy2@#math</a>:ircnet for reminding me of the details of
Borsuk-Ulam, and giving the construction of [tex](t(P),t(P))[/tex].</p>
<p><em>Edit</em>: It was pointed out to me later that all this is really
unnecessary. In fact, we can use Borsuk-Ulam straight off to show that
on each great circle on earth, there is such an antipodal pair. Indeed,
a great circle is [tex]S^1[/tex], and the temperature map takes
[tex]S^1[/tex] to [tex]\mathbb R[/tex], and thus qualifies for the
setup for 2 above, since the dimensions of "sphere" and "plane" don't
really matter. Thus, on each great circle there is an antipodal pair
with equal temperature. And since there are many, many great circles on
earth, there are also many, many such points.</p>
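<p>For the great-circle version, the proof is just the intermediate value theorem in disguise, which makes it easy to sketch numerically (the temperature profile below is made up):</p>

```python
import math

def t(theta):
    """Made-up smooth temperature along a great circle, by angle."""
    return 15.0 + 5.0 * math.sin(theta) + 2.0 * math.cos(3.0 * theta)

def g(theta):
    """g(theta + pi) = -g(theta), so g must change sign on [0, pi]."""
    return t(theta) - t(theta + math.pi)

lo, hi = 0.0, math.pi
assert g(lo) * g(hi) <= 0.0        # opposite signs at the ends
for _ in range(60):                # plain bisection
    mid = 0.5 * (lo + hi)
    if g(lo) * g(mid) <= 0.0:
        hi = mid
    else:
        lo = mid

theta = 0.5 * (lo + hi)
# An antipodal pair with (numerically) equal temperature:
assert abs(t(theta) - t(theta + math.pi)) < 1e-9
```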
Introduction to Algebraic Topology and related topics (I)2006-01-01T00:20:00+01:00Michitag:blog.mikael.johanssons.org,2006-01-01:introduction-to-algebraic-topology-and-related-topics-i.html<p>In the spirit of writing some sort of introductory posts to the things
related to what I'm about to spend several years thinking and writing
about, I thought I'd try to make a (more or less) layman friendly
introduction to Homology and Homotopy.</p>
<p>It's all residing in the realm of Topology. Topology is the field of
mathematics, where those aspects of a shape not dependent on distances
are studied. Thus rigidity is not interesting, whereas connectivity is.
Narrow/thick is not interesting, but what kind of holes the surface has
is. The ultimate thing to be said in topology about two objects is that
they are <em>homeomorphic</em>, which technically means that there is an
isomorphism between the objects in the category of topological spaces;
and more comprehensibly means that there are continuous functions
between the shapes such that they are each other's inverses.</p>
<p>By a continuous function, we mean a function such that it respects the
<em>topology</em>. By a topology, we take the collection of all possible open
sets. (This makes for a generalisation of the usual "open intervals" and
their ilk, and makes things work out neatly.)</p>
<p>Now, proving things to be homeomorphic is Really, Really Difficult.
Proving things not to be homeomorphic isn't really very easy either.
Thus most mathematicians by now have given up on searching for direct
proofs, and instead study invariants. If we can formulate some sort of
associated entity in such a manner that whenever our topological spaces
are homeomorphic, our entities are equal, then we can use that to
conclude that whenever we have two spaces with different entities, they
cannot possibly be homeomorphic.</p>
<p>For the sake of exposition I will stay within the realm of shapes within
a plane for a while. Take for instance a circular disc and a circular
ring - i.e. the disc with some subdisk removed. Intuitively, these are
completely different shapes. But how would one show that they have to
be? Here enters one of our first useful invariants. We can take paths
within the space. Technically, these are functions from a closed
interval to the shape - but it's easiest to just view them as curves
traced with a pencil; or lengths of string positioned inside the shape.
For a shape, we start out by simply picking a point anywhere on the
shape, and look at curves that all begin and end in that point. We can
form a new curve from any two such curves by simply continuing along the
second once we've traced the first. Thus we get a composition of curves.</p>
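<p>The composition of curves just described can be modelled directly: represent a path as a function from [0,1] into the plane, and concatenate by tracing each path at double speed. A minimal sketch, with all names purely illustrative:</p>

```python
import math

def concat(a, b):
    """Concatenate paths a, b : [0,1] -> R^2 with a(1) == b(0):
    trace a at double speed, then b at double speed."""
    def ab(t):
        return a(2 * t) if t <= 0.5 else b(2 * t - 1)
    return ab

# A loop based at the point (1, 0), going once around the unit circle:
loop = lambda t: (math.cos(2 * math.pi * t), math.sin(2 * math.pi * t))

# Tracing the loop and then the loop again: this winds around twice.
double = concat(loop, loop)
```

<p>The reparametrization is why composition of equivalence classes, rather than of raw curves, is what forms a group: tracing at double speed only changes a curve up to the nudging described next.</p>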
<p>Next, we say that two curves are equivalent if we can nudge the lengths
of string bit by bit, staying within the space all the time, until one
lies on top of the other. If we look at all equivalent curves as one
single entity, we are still able to <em>multiply</em> curves by putting one
after the other. If we have a curve, and then continue along the same
curve, but backwards, we get a shape that can be contracted along its
length to vanish into the point at the beginning and end.</p>
<p>For those who have seen just a tiny bit of algebra, this is familiar. We
have a set of objects - the curves - such that any two objects form a
new object; there is some object that doesn't change anything
(adjoining the curve that doesn't go anywhere, but rather stays at the
point we picked, won't change the curve we adjoin it to); and finally, for
any object there is some other object that 'cancels' it out. This is a
group!</p>
<p>So we can attach a group to any topological space. This group is called
the fundamental group and written π<sub>1</sub>. Now, by some thought, it
can be shown that if the spaces are homeomorphic, then they will have
isomorphic fundamental groups, so this group is one of the first
topological invariants we find. There are several others that are easier
to find - for instance a couple of connectivity attributes. If we just
count the number of disjoint components our space decomposes into, it's
pretty obvious that if this number differs, then the spaces aren't the
same.</p>
<p>So... What is this π<sub>1</sub> for different more or less well-known
things? For the disc and the circular ring above, we can calculate it
rather easily. Within the disc, the strings can always be nudged
anywhere - there isn't anything within it that can obstruct such
movements. So the fundamental group is trivial: any string can be
retracted into just a point. On the ring, on the other hand, we can get
stuck around the hole with our string. Furthermore, if we wind one
string 2 times around the hole, and another 3 times around the hole, we
cannot nudge one into the other. For an experiment, take a coffee
cup, or some other nearby source of a ring shape with a single hole, and
take a length of string, wind it 3 times through the hole, and tie it up
to a loop. Now try to - without breaking the string or the knot - modify
it until it is only wound 2 times. It's impossible.</p>
<p>Thus, each element in the fundamental group of the circular ring can be
characterised by the number of times we wind through, and in what
direction. Any integer works, and by winding first 2 then 3 times we end up
wound 5 = 2 + 3 times, so in fact we can show this group to be isomorphic
to [tex]\mathbb Z[/tex], the group of integers under addition.</p>
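<p>The identification with [tex]\mathbb Z[/tex] sends each loop to the number of times it winds around the hole. For a loop sampled as a closed polygon, that winding number can be computed by summing signed angle increments; a sketch, assuming the loop is sampled finely enough that consecutive samples subtend less than half a turn:</p>

```python
import math

def winding_number(points, center=(0.0, 0.0)):
    """Winding number of a closed polygonal loop around `center`:
    sum the signed angle turned on each edge, divide by 2*pi.
    Assumes consecutive samples subtend less than half a turn."""
    cx, cy = center
    total = 0.0
    n = len(points)
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]
        d = (math.atan2(y1 - cy, x1 - cx)
             - math.atan2(y0 - cy, x0 - cx))
        # normalise each increment into (-pi, pi]
        while d <= -math.pi:
            d += 2 * math.pi
        while d > math.pi:
            d -= 2 * math.pi
        total += d
    return round(total / (2 * math.pi))

# A string wound twice around the hole: sample two full turns.
twice = [(math.cos(4 * math.pi * k / 100), math.sin(4 * math.pi * k / 100))
         for k in range(100)]
```

<p>Winding first 2 and then 3 times yields a loop winding 5 times, matching the addition in [tex]\mathbb Z[/tex].</p>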
<p>If we can do this for strings tied in loops, it should be possible to do
similar things with, say, sheets tied into spheres or other
higher-dimensional analogues. The field dealing with this line of
inquiry is called Homotopy - and has the fundamental problem that once
we leave the fundamental group, things get REALLY messy. Not very much
is known, even today, and research in the area is very active.</p>
<p><em>August 16, 2007: This post only generates spammy comments nowadays. I
have disabled commenting on this post.</em></p>
Some math-y jokes2006-01-01T00:00:00+01:00Michitag:blog.mikael.johanssons.org,2006-01-01:some-math-y-jokes.html<p>You may have seen some of these before...</p>
<p>[tex]\frac{1\cancel{6}}{\cancel{6}4}=\frac{1}{4}[/tex]</p>
<p>[tex]\frac{1\cancel{9}}{\cancel{9}5}=\frac{1}{5}[/tex]</p>
<p>[tex]\frac{\partial}{\partial x}\sin^2(x)=\sin(2x)[/tex] -- just
move down the 2.</p>
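<p>Both cancellation jokes are, of course, genuinely correct equalities, and a short brute-force search turns up exactly four such two-digit "anomalous cancellations" (the function name below is just illustrative). The derivative joke is also accidentally true, since [tex]\frac{d}{dx}\sin^2(x)=2\sin(x)\cos(x)=\sin(2x)[/tex].</p>

```python
import math
from fractions import Fraction

def anomalous_cancellations():
    """Two-digit fractions (ab)/(bc) that stay correct when the shared
    digit b is naively 'cancelled', i.e. ab/bc == a/c exactly."""
    found = []
    for a in range(1, 10):
        for b in range(1, 10):
            for c in range(1, 10):
                num, den = 10 * a + b, 10 * b + c
                if num < den and Fraction(num, den) == Fraction(a, c):
                    found.append((num, den))
    return found

# The derivative joke happens to be true: 2*sin(x)*cos(x) == sin(2x).
x = 0.7
assert abs(2 * math.sin(x) * math.cos(x) - math.sin(2 * x)) < 1e-12
```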
Merlots chokladtårta2005-12-31T21:31:00+01:00Michitag:blog.mikael.johanssons.org,2005-12-31:merlots-chokladtarta-2.html<div class="line-block">
<div class="line">Base:</div>
<div class="line-block">
<div class="line">100g hazelnuts</div>
<div class="line">100g Marie chocolate biscuits</div>
<div class="line">1 tbsp cocoa</div>
<div class="line">3 tsp brown sugar</div>
<div class="line">(1 tbsp rum)</div>
<div class="line">30g melted butter</div>
<div class="line">40g melted dark chocolate</div>
</div>
</div>
<div class="line-block">
<div class="line">Filling:</div>
<div class="line-block">
<div class="line">200g dark chocolate</div>
<div class="line">30g butter</div>
<div class="line">1 1/4 dl unwhipped whipping cream</div>
<div class="line">3 egg yolks</div>
<div class="line">2 ½ dl whipped whipping cream</div>
</div>
</div>
<p>To serve: fresh fruit and berries</p>
<div class="line-block">
<div class="line">Blend all the base ingredients in a food processor into a smooth dough.</div>
<div class="line-block">
<div class="line">Press into a pie tin, about 24cm in diameter, pressing the dough up along the sides.</div>
<div class="line">Chill until the dough is firm.</div>
<div class="line">Melt the chocolate, butter and cream in a saucepan over low heat and stir into
a glossy, smooth mixture.</div>
<div class="line">Take it off the heat and add the egg yolks one at a time while whisking
vigorously.</div>
<div class="line">Let it cool, and finally fold in the whipped cream with a
metal spoon.</div>
<div class="line">Pour the mixture into the base and chill until the filling has set.</div>
</div>
</div>
The role of almonds in Swedish marital trends2005-12-30T21:35:00+01:00Michitag:blog.mikael.johanssons.org,2005-12-30:the-role-of-almonds-in-swedish-marital-trends.html<div class="section" id="id1">
<h2>The role of almonds in Swedish marital trends</h2>
<div align="right"><p>Mikael Johansson, Susanne Vejdemo</p>
</div><div class="section" id="abstract">
<h3>Abstract</h3>
<div class="line-block">
<div class="line">In this paper, we investigate the Swedish folkloristic belief that
almonds in the Christmas rice porridge will lead to marriage. We offer
a falsification of this hypothesis.</div>
<div class="line-block">
<div class="line"><strong>Keywords:</strong> Christmas traditions, almonds, porridge, marriage</div>
</div>
</div>
</div>
<div class="section" id="introduction">
<h3>Introduction</h3>
<div class="line-block">
<div class="line">One old folk tradition in Sweden is to eat rice porridge on Christmas
Eve. Normally, the porridge is served with a single shelled almond
stirred into the porridge. Whoever bites into the almond is supposed
to make up a rhyme immediately, and also is believed to marry soon –
sometimes said to be before the year ends, or during the coming year.</div>
<div class="line-block">
<div class="line">In [1], an information page from Skansen, a renowned open air museum
focusing on Swedish culture and traditions, the following is said
about this belief:</div>
<div class="line">“Förr i tiden la man ofta en mandel i gröten och rörde om så att den
blev gömd i den fina gröten. Den som fick mandeln sa man skulle bli
gift under året.”</div>
<div class="line">”In earlier times, an almond was often put in the porridge and
stirred in so that it would be hidden in the nice porridge. Whoever
got the almond was said to be married during the year” (translation by
the authors)</div>
</div>
</div>
</div>
<div class="section" id="method">
<h3>Method</h3>
<p>This paper is a quantitative explanatory study in which no new data will
be presented. We strive to falsify the hypothesis that almonds eaten in
porridge at Christmas correlate with marriage in the following year. We
judge the hypothesis to be falsified if fewer than 50% of the consumed
almonds could have triggered a marriage.</p>
</div>
<div class="section" id="data">
<h3>Data</h3>
<div class="line-block">
<div class="line">[3] gives the following population data for the number of families in
Sweden:</div>
<div class="line-block">
<div class="line">Total number of households: between 3.8 and 4.8 million, ranging
through the different methods used</div>
<div class="line">Total number of marriages: between 31 598 and 56 559 during the
period 1950-2003</div>
<div class="line">The Swedish Church publishes some statistics for marriage frequencies
as well. [4] gives the following data for church-officiated marriages
in numbers and as a proportion of the total number of marriages. These
tables cohere with the data given in [3] – with error margins well under
5‰ between the two sources. Furthermore, the maximum since 1970 lies
in 1989 according to this data, with a grand total of 110 223
marriages, an outlier that does not occur in the data in [3].</div>
</div>
</div>
</div>
<div class="section" id="discussion">
<h3>Discussion</h3>
<div class="line-block">
<div class="line">There are approximately 4 million families in Sweden (see [3]). Some
of these might fall outside the cultural context, and some of these
will celebrate Christmas together (grandparents celebrating with their
grown children and their families would in the statistics be counted
as several families, etc.), which leads us to reduce the almond
estimate to 1 million. This is a very rough estimate, but we judge
that we have erred on the side of caution, especially as some families
use several almonds during Christmas Eve (personal experience).</div>
<div class="line-block">
<div class="line">If there were no other forces involved in causing marriages, this would
lead to 500 000 marriages each year. Our data gives the actual number
as less than 56 559, or less than approx. 10% of the expected density.
Even the outlier in the church data [4] doesn’t grow higher than
approx. 20% of our expected marital density.</div>
<div class="line">We thus consider the hypothesis falsified.</div>
</div>
</div>
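<p>The arithmetic above can be replayed in a few lines. A sketch; note that reading the paper's step from 1,000,000 almonds to 500,000 expected marriages as "two almond-eaters needed per marriage" is our interpretation:</p>

```python
# Replaying the paper's estimate. Assumption (ours): the step from
# 1,000,000 almonds to 500,000 expected marriages reads as two
# almond-eaters being needed per marriage.
almonds = 1_000_000
expected_marriages = almonds // 2    # 500 000 if every almond "worked"

observed_max = 56_559        # highest yearly count 1950-2003, from [3]
church_outlier = 110_223     # the 1989 outlier reported in [4]

share = observed_max / expected_marriages            # ~0.11, roughly 10%
outlier_share = church_outlier / expected_marriages  # ~0.22, roughly 20%

# The paper's criterion: falsified if under 50% of the consumed
# almonds could have triggered a marriage.
falsified = share < 0.5
```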
<div class="section" id="sources">
<h4>Sources</h4>
<div class="line-block">
<div class="line">Skansen is self-characterized by [2]: “Skansen ska verka för att öka
kunskapen om och stärka engagemanget för kultur- och naturarvet” –
”Skansen shall work to increase knowledge about and strengthen the
interest in the cultural and natural heritage” (translated by the
authors)</div>
<div class="line-block">
<div class="line">Over the years, Skansen has accumulated a wealth of Swedish cultural
information and is judged to be a credible source for statements about
widespread Swedish beliefs.</div>
<div class="line">Statistiska Centralbyrån is the governmental agency responsible for
collecting and analyzing statistical data. Their web page can be found
at <a class="reference external" href="http://scb.se">http://scb.se</a></div>
</div>
</div>
</div>
</div>
<div class="section" id="further-research">
<h3>Further research</h3>
<p>One factor we have not looked into in this analysis is the influence of
other folkloristic marital triggers. What happens if you get more than
one almond during a Christmas? What happens if you get the almond as
well as eat a slice of cake that was served standing up? This kind of
further analysis would require a deeper approach than this paper could
accommodate.</p>
</div>
<div class="section" id="bibliography">
<h3>Bibliography</h3>
<div class="line-block">
<div class="line">[1] <a class="reference external" href="http://www.skansen.se/pages/?ID=439">http://www.skansen.se/pages/?ID=439</a></div>
<div class="line-block">
<div class="line">[2] <a class="reference external" href="http://www.skansen.se/pages/?ID=381">http://www.skansen.se/pages/?ID=381</a></div>
<div class="line">[3] Statistiska Centralbyrån, Statistisk Årsbok 2005.</div>
<div class="line">[4] <a class="reference external" href="http://www.svenskakyrkan.se/statistik/pdf/kyrklighandling.pdf">http://www.svenskakyrkan.se/statistik/pdf/kyrklighandling.pdf</a></div>
</div>
</div>
</div>
</div>
Calligraphy before christmas2005-12-23T20:52:00+01:00Michitag:blog.mikael.johanssons.org,2005-12-23:calligraphy-before-christmas.html<p>I sat down and started my calligraphy again recently. A few of the
things I've done have not been saved for posterity, but a few of them
have been scanned and are available for display.</p>
<p>All images are clickable for the full version.</p>
<div class="line-block">
<div class="line">First, a sampling of the inks I had accessible at the time.</div>
<div class="line-block">
<div class="line"><a class="reference external" href="http://blog.mikael.johanssons.org/wp-content/calligraphy_20051223_1.png"><img alt="Calligraphy sample 1" src="http://blog.mikael.johanssons.org/wp-content/calligraphy_20051223_1.png" /></a></div>
</div>
</div>
<div class="line-block">
<div class="line">Then, a christmas gift poem dedicated to my fiancée's cousin</div>
<div class="line-block">
<div class="line"><a class="reference external" href="http://blog.mikael.johanssons.org/wp-content/calligraphy_20051223_2.png"><img alt="Calligraphy sample 2" src="http://blog.mikael.johanssons.org/wp-content/calligraphy_20051223_2.png" /></a></div>
</div>
</div>
<div class="line-block">
<div class="line">And finally, a christmas gift poem dedicated to her younger brother</div>
<div class="line-block">
<div class="line"><a class="reference external" href="http://blog.mikael.johanssons.org/wp-content/calligraphy_20051223_3.png"><img alt="Calligraphy sample 3" src="http://blog.mikael.johanssons.org/wp-content/calligraphy_20051223_3.png" /></a></div>
</div>
</div>
<p>These images were extracted from the pdf file produced by the scanner.
For high-resolution images, please check the <a class="reference external" href="http://blog.mikael.johanssons.org/wp-content/calligraphy_20051223.pdf">original
scan</a>.</p>
EU Parliament clubbing Big Brother light2005-12-15T12:33:00+01:00Michitag:blog.mikael.johanssons.org,2005-12-15:eu-parliament-clubbing-big-brother-light.html<p>Since roughly September, a resolution has been making its way through
the EU bureaucracy to institute mandatory storage times for, among other
things, internet traffic logs with ISPs. Throughout the discussions, the
impression has emerged that the resolution would in effect
require ISPs to log more or less everything a user does, requiring
insane disk volumes for the logs and infringing severely on personal
privacy.</p>
<p>The resolution, as it ended up, is actually less panicky than it could
have been - somewhat surprisingly. I'm reading the changes instituted by
the parliament during the first reading and acceptance of the
resolution. They include addition of, among other things, the following
text blocks</p>
<blockquote>
In particular when retaining data related to Internet e-mail and
Internet Telephony, the scope may be limited to the providers' own
services or the network providers'.</blockquote>
<p>making the ISP responsible for their own services, but not for
connection tracking outside their own services.</p>
<blockquote>
[...]attacks against information systems provides that the
intentional illegal access to information systems, including to data
retained therein, shall be made punishable as a criminal offence.</blockquote>
<p>punishing computer intrusion has been in the loop for quite some time.
Nothing really extraordinary here, unless the scope of "intentional
illegal access to information systems" suddenly widens.</p>
<blockquote>
Considering that the obligations on providers of electronic
communications services should be proportionate, the Directive
requires that they only retain such data which are generated or
processed in the process of supplying their communications services;
to the extent that such data is not generated or processed by those
providers, there can be no obligation to retain it.</blockquote>
<p>This means that if the ISP doesn't log, it has no obligation to retain
logs it doesn't have. Only the logs that the ISP makes anyway are
under a storage obligation of 6-24 months, and a judicial request is
necessary for mandatory disclosure.</p>
<p>Furthermore, the resolution recalls that a ruling by the
European Court of Human Rights</p>
<blockquote>
requires that interference by public authorities with privacy rights
must respond to requirements of necessity and proportionality and
must therefore serve specified, explicit and legitimate purposes and
be exercised in a manner which is adequate, relevant and not
excessive in relation to the purpose of the interference.</blockquote>
<p>Thus, disclosure of logs may only be forced when this is an adequate,
relevant and not excessive measure in comparison to whatever is being
investigated.</p>
<p>The motivation of the directive was rewritten to replace the reference
"serious criminal offences, such as terrorism or organized crime" with
"serious criminal offences, as defined by each Member State in its
national law".</p>
<p>All in all, the text seems, upon first cursory reading, to be less
dangerous than it could have ended up. Most of the edits performed by
the EP in plenum go toward more respect for individual privacy, and
slightly away from the Big Brother scenario.</p>
<p>However, and this is always a major however, the leniency of this
resolution, and of the implementations coming in the member states
during the coming 18 months, is extremely dependent on whether my
reading will be retained, or a harsher reading will be introduced. The
terrorism language has been taken out of the resolution, guarding
against at least some of the overreactions we'd expect, but still the
signal sent is one that may be abused. The way the anti-piracy
lobby screams bloody murder makes me wonder whether they would not try
to pronounce use of DC or eMule a "serious criminal offence" and
start bullying ISPs with this as backup as well.</p>
<p>We need a counterlobby. A sensible one.</p>
New directions!2005-12-06T18:02:00+01:00Michitag:blog.mikael.johanssons.org,2005-12-06:new-directions.html<p>The byline is probably the most basic motivation for the blog. I want a
place to write where I can expound upon themes I don't write in my
LiveJournal - where I write articles rather than notes, where I publish
creations I happened to make (I've started calligraphy - expect to see
scans coming) and which I can customize to a higher degree. (Anybody
tried to write mathematics in LJ? With the LaTeX plugins for Wordpress,
blogging maths goes like a dream!)</p>
<p>I hope I will also actually gain a reader or two. Probably should get a
technorati tagging plugin running to get this off its feet too - but
that'll have to wait a bit.</p>
Thettes Kolakakor2005-05-04T15:54:00+02:00Michitag:blog.mikael.johanssons.org,2005-05-04:thettes-kolakakor.html<div class="line-block">
<div class="line">300g butter</div>
<div class="line-block">
<div class="line">3 dl sugar</div>
<div class="line">3 tbsp vanilla sugar</div>
<div class="line">3 tbsp golden syrup</div>
<div class="line">3 tsp baking powder</div>
<div class="line">7 dl plain flour</div>
</div>
</div>
<p>Mix, roll into small balls, place on a baking tray, flatten, and bake.</p>