The why and the what of homological algebra

I seem to have become the go-to guy in this corner of the blogosphere for homological algebra.

Our beloved Dr. Mathochist just gave me the task of taking care of any readers prematurely interested in it while telling us all just a tad too little for satisfaction about Khovanov homology.

And I received a letter from the Haskellite crowd - more specifically from alpheccar, who keeps reading my writing about homological algebra, but doesn't know where to begin with it, or why.

I have already written a few times about homological algebra, algebraic topology and what it is I do, at various levels of difficulty, but I guess - especially with the carnival dry-out I've been having - that it never hurts to write more about it, and even to try to get the non-converts to understand what's so great about it.

So here goes.

Alpheccar writes that to his understanding, the idea is to build topological spaces out of algebraic gadgets, and then do topology on them. This is a part of the story, and certainly historically very important, but it is far from all of it.

Motivations

The revolution in homological algebra pretty much started with Eilenberg and Mac Lane, who wrote up the constructions necessary for the very topological versions of homological algebra - but without ever involving the actual topological spaces.

The point is that the way you do algebraic topology is that you tend to set up a functor Top → R-ChMod that assigns a chain complex of R-modules to each (nice enough) topological space, and then you add functors R-ChMod → R-Mod that extract information from these. Typical examples are cellular chain complexes with coefficients somewhere nice for the first functor, and then homology or cohomology for the second functor - depending on which viewpoint is the most obvious.
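To make this concrete: a chain complex is just a sequence of modules and maps composing to zero, and homology is what the second functor computes (nothing here beyond the usual textbook definitions):

[tex]\cdots \to C_{n+1} \xrightarrow{d_{n+1}} C_n \xrightarrow{d_n} C_{n-1} \to \cdots, \qquad d_n \circ d_{n+1} = 0[/tex]

[tex]H_n(C) = \ker d_n / \operatorname{im} d_{n+1}[/tex]

The first functor builds such a complex C out of the space; the second only ever looks at C itself.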

The revolution was that we simply throw out that first functor.

In order to study (co)homology, we don't really need to care that there was a topological space somewhere to begin with. We only really need a nice enough category of chain complexes (if it's abelian, that's fine - we then get the long exact sequences in homology and other niftiness easily; if it's not, triangulated will do...), and we study certain types of functors from these to module categories.

Homological algebra as a tool for algebraic topology

Since, in the viewpoint introduced above, homological algebra is a part of the process used in algebraic topology, it turns out to be really worthwhile to sit down and just prove a lot of neat results in homological algebra - with the background that at some later point, these might be useful once you sit down with the topology. I grasped this particular point early - I had started my MSc thesis work in homological algebra before I took my first real topology course, and during that course, the less point-set topology and the more algebraic topology we did, the easier everything got. The fundamental results we needed to grasp to do algebraic topology with any amount of seriousness were basically just applications of all the cornerstone results in homological algebra, and thus perfectly obvious to my clique of arrogant undergrads.

This particular piece goes far. Why don't we need to worry about whether we're doing homology or cohomology? Answer: since Ext and Tor are dual in certain specific ways, which ends up meaning that although internal algebraic structures might be finicky, the module structure is very neat, and in k-Mod = Vect_k, we end up with no worries anywhere.
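One way to make this precise over a field - the easiest special case of the universal coefficient theorem - is that for a chain complex C of k-vector spaces,

[tex]H^n(\operatorname{Hom}_k(C, k)) \cong \operatorname{Hom}_k(H_n(C), k)[/tex]

so cohomology is just the linear dual of homology, and in finite dimensions the two even have the same dimensions.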

Testing grounds

The viewpoint of homological algebra as a tool for algebraic topology goes pretty deep. When I ask my advisor what to put in texts where I motivate why our field is important, the standard answer he gives me includes the following:

Group cohomology is important, since it is a field where topological methods can be tested reasonably safely: we have the group-theoretic attack vectors in addition to the purely topological ones.

On the other hand, group cohomology also turns out to be important, since we get important information for the study of groups out of the homological algebra side of things.

Low order Ext

The area where this is most notable is representation theory. This field comes in several flavours: group representations, where we study kG-Mod for some (sometimes finite) group G; Lie algebra representations, where we study g-Mod for some Lie algebra g; quiver representations, where we study kQ-Mod for the path algebra kQ of some (finite) quiver Q - and so on. One question that tends to crop up here - and with a high degree of importance for the non-homological algebraists around me - is what happens if we know only parts of our group. Can we say something about the entire group based on that?

It turns out that we can. There are very neat correspondences between the lower order Ext groups over kG and the behaviour of G itself. I'm going to stick to group representations here, since that's the area I know best - however, this is something that pops up analogously all over the place.

Extensions

Suppose you have some R-module K that you know embeds, in some specific way, into some larger R-module M. And suppose you find the quotient L=M/K in some manner. What, then, could M be? One obvious answer is [tex]M=K\oplus L[/tex], but is that the whole story? This ends up depending on Ext^1_R(L,K), with each element of this particular Ext group indexing precisely one such extension.
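Spelled out, an extension of L by K is a short exact sequence - this is the completely standard statement, not anything particular to this post:

[tex]0 \to K \to M \to L \to 0[/tex]

Equivalence classes of these sequences are in bijection with the elements of [tex]\operatorname{Ext}^1_R(L,K)[/tex], and the zero element corresponds exactly to the split extension [tex]M = K \oplus L[/tex].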

This is at the core of Maschke's theorem, by the way, which says that if the characteristic of the field k doesn't divide the group order |G|, then by a specific construction, the only extension possible for any kG-modules is the split one - the one where we just take the direct sum.
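The "specific construction" is just averaging over the group. A sketch, under the stated assumption that char k does not divide |G| (so that |G| is invertible in k): given a surjection of kG-modules [tex]\pi\colon M \to L[/tex] and any k-linear section [tex]s\colon L \to M[/tex], set

[tex]\tilde{s}(x) = \frac{1}{|G|} \sum_{g \in G} g\, s(g^{-1}x).[/tex]

This [tex]\tilde{s}[/tex] is a kG-linear section of [tex]\pi[/tex], so every short exact sequence of kG-modules splits, and the Ext^1 groups all vanish.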

This all leads to a wealth of useful information and ideas in representation theory. For instance, there is a way to describe modules proposed by Dave Benson and some co-authors, where you draw diagrams with each vertex being a simple module, occupying that spot in a composition series, and the edges being taken from the relevant Ext^1.

Invariants and coinvariants

Suppose you have a group G acting on a vector space A. This can be taken extremely physically - quantum mechanics is all about this kind of situation, or so I've heard. Then it might be interesting to figure out the invariant subspace: {a | ga = a for all g in G}. This is an Ext^0. Or we might want a basis for the complement: representatives for every way that things can move. This is the coinvariant vector space, defined as A/(ga - a), and this is just a Tor_0.
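In homological terms, with k denoting the trivial one-dimensional kG-module (these are just the standard identifications):

[tex]A^G = \{a \in A \mid ga = a \text{ for all } g \in G\} \cong \operatorname{Hom}_{kG}(k, A) = \operatorname{Ext}^0_{kG}(k, A)[/tex]

[tex]A_G = A/\langle ga - a\rangle \cong k \otimes_{kG} A = \operatorname{Tor}_0^{kG}(k, A)[/tex]

The higher Ext and Tor groups against the trivial module are then exactly group cohomology and group homology.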

Simples, projectives and the stable module category

Simple modules are nice. They don't have any nontrivial submodules. In the best of all worlds - which is when Maschke holds - the simple modules are precisely the indecomposable modules. However, when Maschke doesn't hold, we can have non-trivial Ext^1, and thus we can build larger modules out of simples by a kind of gluing: they aren't just a nice direct sum of simples, but something ickier.

Thus, unless Maschke holds, there will be weird things happening in the module category.

These weird things, though, are controllable. To be specific, we can consider the smallest possible projective modules: the indecomposable projectives. These will end up being building blocks, and for nice enough worlds, these will also end up corresponding closely to the simples - in the sense that we can allocate a simple to each indecomposable projective in a bijective manner.

So ... what is this projective I keep throwing around? Take a finitely generated free module. This is a direct sum of a finite number of copies of the ring R. This will have direct summands. By picking apart all summands into further direct summands, at some point we hit bottom: we cannot pick anything apart any longer. This is, by the theorem of Krull-Schmidt, a well-defined state of being. We can permute things, but in essence, a module is just its decomposition into indecomposables.

So, anything that is a direct summand of a free module is a projective. We can lose projectivity by taking quotients - so if we add relations, we may well get lost. But as long as we just look for direct summands, we're pretty much home free. Now, the indecomposable projectives have to be summands of the ring R itself, so they end up actually being (left) ideals in the ring. And each of them corresponds intimately to a simple module.
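For a finite-dimensional algebra such as kG with G finite, this correspondence can be written down explicitly - a standard fact, stated here just for orientation:

[tex]kG \cong P_1^{n_1} \oplus \cdots \oplus P_r^{n_r}, \qquad S_i = P_i/\operatorname{rad} P_i[/tex]

where the [tex]P_i[/tex] are the indecomposable projective summands of the regular module and the [tex]S_i[/tex] run over the simple modules, each showing up exactly once this way.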

One trick that's very beloved among the people who worry about these things is to get rid of anything projective, and look at the stable module category. In this, we just quotient away anything projective - morphism sets are taken modulo morphisms that factor through a projective... This way, we only have the "essential", or as it is known to the experts of the field, "difficult" information left. Then, for n ≥ 1, Ext^n(M,N) = Hom(Ω^n M, N) with the Hom taken in the stable category, where Ω^n M is the nth syzygy - see below for more on this.

So, homological algebra lets us understand the stable module category, which in turn lets us understand the parts that are essential to the module category structure.

Resolutions

I just promised you I'd tell you about syzygies. First off, some personal information - because readers always love that!

If you find me on IRC, on EFNet or on Freenode, you'll find me under the nick Syzygy-. The - is there because there is someone who's been using Syzygy for years and years on EFNet and because I'm not deliberately trying to be a bastard if I can help it. The rest of the nick is there to a certain extent because I like the way I write it in longhand.

And to a certain extent because it is an epitome of why homological algebra is interesting in my eyes.

Suppose we are interested in a finitely presented module, which we might be for many reasons, including being interested in algebraic geometry and in solving systems of polynomial equations. We might then just figure out what relations hold within a set of generators, which gives us a generating set and a set of relations.

These relations, though, are far from guaranteed to be the whole story. It's probable that there are non-trivial relations between the relations. What do we do? We figure out what these are. They span the first syzygy module of the module we started with, denoted by ΩM. But this is unlikely to be free either, so we can keep on going.

This way, we get a sequence of modules, all of which are free - since we just choose a generating set in each step - with the maps between them encoding all the extra relations. But this is nothing other than a free resolution of the starting module. And here comes the candy that hooked me on this discipline: studying modules via their resolutions is the same thing as studying what chain complexes are, deep down, which in turn is the same thing as studying homological algebra.
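Written out, the process above produces exactly this (standard notation, with the same Ω as before):

[tex]\cdots \to F_2 \to F_1 \to F_0 \to M \to 0[/tex]

with every [tex]F_i[/tex] free, [tex]\Omega M = \ker(F_0 \to M)[/tex], and [tex]\Omega^{n+1} M = \ker(F_n \to F_{n-1})[/tex]: each syzygy records the relations among the generators chosen at the previous step.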

Want to figure out what a module map means for the family of syzygies? What you really want is a chain map in the chain complex category. But some of these maps - or even portions of maps - will not carry relevant information. So we factor those away (working up to chain homotopy), and we get a slightly weirder category. But here, equality doesn't quite mean what it should, so we add in more equalities: the maps inducing isomorphisms on homology - the quasi-isomorphisms - get formally inverted. And suddenly, we live in a derived category - and in here, the Hom sets give the Ext groups, and the (derived) tensor products give the Tor groups.
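For modules over a ring R, the payoff can be stated in one line each - this is the standard dictionary, with [n] denoting the degree shift in the derived category:

[tex]\operatorname{Ext}^n_R(M,N) \cong \operatorname{Hom}_{D(R)}(M, N[n])[/tex]

[tex]\operatorname{Tor}_n^R(M,N) \cong H_n(M \otimes^{\mathbf{L}}_R N)[/tex]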

Number theory, geometry, and computation!

To continue this tour de force, consider the theorems in vector calculus relating various triple and double integrals - Green's, Gauss's and Stokes' theorems. (Note - I never dealt with this. I rode on technicalities to root out calculus from my curriculum so it would fit more algebra....) These theorems, in the end, boil down to the statement that [tex]\partial^2=0[/tex], which is at the core of what homological algebra is all about.

If we formalize this particular recognition a bit, and tug at the corners, we end up with de Rham cohomology, which deals with what you can do with differential forms on manifolds (layman speak: things you can integrate - the f(x)dx after the integral sign is a typical differential form). This is one of the many, many places where cohomology rather than homology ends up being the "right" way to view things, just because you started out as a geometer instead of ... well ... a topologist or an algebraist.
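On [tex]\mathbb{R}^3[/tex], for instance, the de Rham complex is just the familiar operators of vector calculus strung together (a standard identification, spelled out here purely as an illustration, with the arrows acting on smooth functions and vector fields):

[tex]C^\infty(\mathbb{R}^3) \xrightarrow{\operatorname{grad}} \{\text{vector fields}\} \xrightarrow{\operatorname{curl}} \{\text{vector fields}\} \xrightarrow{\operatorname{div}} C^\infty(\mathbb{R}^3)[/tex]

Here [tex]d^2 = 0[/tex] becomes the two identities [tex]\operatorname{curl}\circ\operatorname{grad} = 0[/tex] and [tex]\operatorname{div}\circ\operatorname{curl} = 0[/tex], and the cohomology of this complex measures which curl-free fields fail to be gradients and which divergence-free fields fail to be curls.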

The same kind of thing happens in algebraic geometry as well. You start out happily with your varieties, you conclude that as soon as things get interesting, the nice and pretty concept of coordinate rings doesn't hold up, and you're forced to go to coordinate sheaves. And then you try to figure out what you can do with sheaves of functions on a variety - and before you know it, you've reconstructed sheaf cohomology. This, by the way - a quick look at Wikipedia told me - lets you define Euler characteristics for varieties in a way consistent with the classical uses of the notion.
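The definition in question is just an alternating sum of dimensions - here [tex]\mathcal{F}[/tex] is a coherent sheaf on a proper variety X over k, so that these dimensions are finite (I'm taking this at face value from the standard references, being no geometer myself):

[tex]\chi(X, \mathcal{F}) = \sum_i (-1)^i \dim_k H^i(X, \mathcal{F})[/tex]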

I am no geometer, and I'm not the person to tell you about the intricacies of these things. If you understand them, though, I'd love to figure them out at some point.

The discussion of Khovanov homology is a somewhat (though not very) similar story. Again, I have no real idea, and am treading on thin ice here.

So, alpheccar. Is this what you asked for? Please tell me what more you want covered, and I'll write up some more! This was fun writing!
