Representations of categories
The basic tenet of representation theory is that we take some algebraic
structure - group representation theory takes a group, algebra
representation theory most often a quiver - and we look at ways to view
the elements of the structure as endomorphisms of some vector space. The
attentive reader remembers my last
post
on the subject, where [tex]\mathbb R^2[/tex] was given a group action
by the rotations and reflections of a polygon surrounding the origin.
There is a way, suggested to me by Miles Gould in my last post, to unify
all these various ways of looking at things into one: groups - and
quivers - and most other things we care to look at are in fact very
small categories.
For groups, it's a category with one object, and one morphism/arrow for
each group element, and such that the arrow you get by composing two
arrows is the one corresponding to a suitable product in the group. A
quiver gives rise to a category mirroring the way the quiver looks -
with an object for each vertex, and an arrow for each path along the
edges.
Given a category C, we can represent it by forming a functor
F: C -> Vect_k - which is to say, we choose a vector space over the
field k for each object in the category, and a linear map for each
arrow, compatibly with composition.
For the case of group representations, the group as a category has just
the one object, and so we choose one single vector space. Then the group
has a whole bunch of arrows - and these all need to correspond to
(necessarily invertible) linear maps on this one vector space.
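To make the functor picture concrete, here is a minimal sketch in Python with numpy. The cyclic group C4 of quarter-turn rotations is my own made-up example (the post's polygon symmetries work the same way): the single object goes to R^2, and each arrow to a rotation matrix.

```python
import numpy as np

# A minimal sketch: the cyclic group C4 as a one-object category.
# The functor sends the single object to R^2 and the arrow
# "rotate k quarter-turns" to the corresponding rotation matrix.
def rho(k):
    theta = (k % 4) * np.pi / 2
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

# Functoriality: composing arrows in the group matches composing
# the assigned linear maps.
assert np.allclose(rho(1) @ rho(3), rho(0))   # g . g^3 = e
assert np.allclose(rho(2) @ rho(1), rho(3))   # g^2 . g = g^3
```

The assertions check exactly the functor condition: the matrix assigned to a composite arrow is the product of the matrices assigned to its factors.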
So, in the end, all representation theories are in some way the same -
and indeed, there is a bunch of features that carry over across the
various theories.
The structure of modules
As we saw above, all representations are more or less the same thing.
Just which thing this is becomes apparent once we introduce some more
algebraic terminology.
Suppose A is some algebraic structure over a field k, which is to
say that A is a vector space over k, equipped with some set of
operations. Typical instances are associative algebras, Lie algebras,
A∞- or L∞-structures, etc. Then we say that a module M over the
algebra A is a vector space over k, with the operations of the
algebra A adjusted to work as operations on the module as well.
This, however, is stated at a level of generality that kills off the
statement, so I'm going to specialise in order to have something
concrete to talk about.
So, a module over an associative algebra A is a vector space M,
together with a multiplication [tex]A\otimes M\to M[/tex] that is
associative and distributive in comfortable ways.
But this is nothing other than saying that we can apply the elements of
the associative algebra to the elements of the module in order to get
new elements of the module. Thus, being a module over an algebra means
that the algebra transforms the module, and so a module is a
representation of the algebra.
This means that in order to understand group representations, we want to
understand modules over the group algebra - instead of considering
embeddings of the group as linear maps within a vector space, we
consider linear combinations of group elements as objects acting on a
vector space, forming a module.
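As a small illustration of this shift in viewpoint - again with the quarter-turn group C4, and with coefficients chosen purely for illustration - an element of the group algebra acts as the corresponding linear combination of matrices:

```python
import numpy as np

# Sketch: the group algebra k[C4] acting on R^2.  R is the matrix of
# the generator (a quarter-turn); an algebra element is a formal sum
# sum_k coeffs[k] * g^k, and it acts by the same combination of
# powers of R.  The coefficients below are made-up choices.
R = np.array([[0, -1], [1, 0]])

def act(coeffs, v):
    """Apply the group-algebra element sum_k coeffs[k] * g^k to v."""
    return sum(c * (np.linalg.matrix_power(R, k) @ v)
               for k, c in enumerate(coeffs))

v = np.array([1.0, 0.0])
# (2e + 3g) . v = 2v + 3 Rv = (2, 3)
assert np.allclose(act([2, 3], v), [2.0, 3.0])
```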
Building blocks of modules
So, our grand question thus is: if we know that a certain vector space
is a module over something or other, what can we do? What can we say
about the way this works?
First of all, we could try to figure out what kind of building blocks
such a module might have. One obvious first step is given by the
Krull-Schmidt theorem. This states that if A is a finite-dimensional
associative algebra, and M is a finite-dimensional A-module, then the
decomposition of M into a direct sum of submodules that cannot be
further decomposed that way is unique, up to isomorphism and reordering
of the summands.
So, any module is a direct sum of indecomposable modules. The module
action of the algebra is thus given by block matrices with (possibly)
nonzero blocks on the diagonal, corresponding to the indecomposable
summands, and zeroes everywhere else. The proof of this theorem follows
by induction, reasoning with dimensions of vector spaces and juggling a
few linear maps back and forth until everything fits.
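The block-matrix picture can be sketched as follows; the two summand actions here are made-up examples, not anything from a specific theorem:

```python
import numpy as np

# Sketch of the block-matrix picture: the action on a direct sum
# M1 (+) M2 is block-diagonal, one block per indecomposable summand.
# A1 and A2 are made-up actions on a 2-dim and a 1-dim summand.
A1 = np.array([[0.0, -1.0], [1.0, 0.0]])
A2 = np.array([[2.0]])

n1, n2 = A1.shape[0], A2.shape[0]
A = np.zeros((n1 + n2, n1 + n2))
A[:n1, :n1] = A1       # block for the first summand
A[n1:, n1:] = A2       # block for the second summand

# Each summand is invariant: a vector supported on the first block
# never leaks into the second.
v = np.array([1.0, 2.0, 0.0])
assert np.allclose((A @ v)[n1:], 0.0)
```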
As a corollary, we also can cancel modules in direct sum expressions.
That is, if [tex]M\oplus U\cong M\oplus V[/tex], then [tex]U\cong
V[/tex].
Under good circumstances, any such building block - any indecomposable
module - would also be simple, which is to say that it has no proper
nonzero subspace invariant under the action. If that is the case, then
the fun ends right about here: all we need to care about is the
structure of the simple modules, and then everything is done.
For the modular case, though - such as group representations over a
field of characteristic p, where p divides the group order - we're not
that lucky. We end up with a situation where simples and indecomposables
are
different. We can still use Krull-Schmidt, and reduce down to
indecomposable modules, but the indecomposable modules may very well be
quite complex in terms of simple modules. So we'd need to study both.
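A standard tiny example of this phenomenon (my own choice of illustration) is the group Z/2 acting on a 2-dimensional space over F_2:

```python
import numpy as np

# A small example: Z/2 acting on a 2-dim space over F_2 via the
# matrix g below.  All arithmetic is done mod 2.
g = np.array([[1, 1],
              [0, 1]])

# g really has order 2 in characteristic 2, so this is a representation:
assert np.array_equal((g @ g) % 2, np.eye(2, dtype=int))

# The line spanned by e1 is invariant - so the module is not simple:
e1 = np.array([1, 0])
assert np.array_equal((g @ e1) % 2, e1)

# But g is not the identity, so span(e1) has no invariant complement
# and the module does not decompose - indecomposable but not simple.
assert not np.array_equal(g % 2, np.eye(2, dtype=int))
```

Over a field of characteristic other than 2, this same matrix would be diagonalisable and the module would split; the gluing is a strictly characteristic-p phenomenon.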
We'll start, however, with the part that's reasonably easy to study.
Simple modules
A simple module is a module without nontrivial submodules - that is, a
vector space with some action of the algebra that has no proper nonzero
invariant subspaces.
If we figure out what these look like, we're a good step forward on our
path to understanding what modules can look like. In the ordinary case,
we're in some sense done - since there the simples are exactly the
indecomposables, and any other module is a direct sum of such. For the
modular case, though, there's still work to be done.
But let us start with the very basics. Suppose A is a finite-dimensional
algebra with unit over a good enough field of characteristic p. A module
M is said to be simple if the only submodules of M are M and 0. It is
said to be semisimple if it is a direct sum of simple modules. An
algebra is simple or semisimple if it is this as a module over itself.
Now, submodules of semisimple modules are just direct sums of some of
the simple modules that make up the supermodule - intersecting a
submodule with a simple summand gives a submodule of that summand, which
must be either zero or the whole summand, since the summand is simple.
In semisimple modules, every submodule is a direct summand. This means
that simple modules cannot glue together within semisimple modules - all
building blocks are distinct and have well defined boundaries.
There are two important tools we use to analyze algebras. One is the
radical, and the other is the socle. Both of these are given by a
bunch of equivalent descriptions:
The radical rad(A) of the algebra A is
- The smallest submodule I in A such that A/I is semisimple
- The intersection of all the maximal submodules of A
- The largest nilpotent ideal of A
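A hypothetical worked example, to anchor these descriptions: in the algebra of upper-triangular 2x2 matrices, the radical is the ideal of strictly upper-triangular matrices.

```python
import numpy as np

# In the algebra of upper-triangular 2x2 matrices, the radical is the
# ideal of strictly upper-triangular matrices.  It is nilpotent, and
# the quotient by it - the diagonal part - behaves like k x k, a
# product of fields, hence semisimple.
N = np.array([[0.0, 1.0],
              [0.0, 0.0]])   # a strictly upper-triangular element

# Nilpotency: N squares to zero.
assert np.allclose(N @ N, np.zeros((2, 2)))

# Killing the radical leaves only the diagonal part.
T = np.array([[2.0, 5.0],
              [0.0, 3.0]])
assert np.allclose(np.diag(np.diag(T)), [[2.0, 0.0], [0.0, 3.0]])
```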
The radical of a module M is rad(A)M.