Many interesting groups have a very geometrical definition: transformations that preserve certain symmetries are one of the historical origins of group theory.

A classical example is the dihedral group: the group of symmetries of a regular *n*-sided polygon. Thus, for a triangle, we can label the corners

*a,b,c*, reading clockwise, and enumerate the possible transformations by the positions the corners end up in. Thus we get the elements:

e: abc -> abc

r: abc -> bca

r^2: abc -> cab

s_a: abc -> acb

s_b: abc -> cba

s_c: abc -> bac

Here r is rotation by a third of a full turn, and s_x is the reflection fixing the corner x.

So now, all of a sudden, instead of just the abstract notion of a group, we have a bunch of transformations of a specific vector space: each symmetry of the triangle is realized by a linear transformation of the plane the triangle sits in. This is neat, interesting and well worth studying. It gets even better, since it turns out that we can tell a lot about the group itself by studying vector spaces that it acts on.
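To make the triangle example concrete, here is a small sketch (in Python, with names chosen for illustration) that encodes the six symmetries as permutations of the corners and checks that they really do form a group under composition:

```python
from itertools import permutations

# Each symmetry of the labeled triangle is a permutation of the corners.
# We encode a transformation as a dict: corner x is sent to t[x].
corners = ('a', 'b', 'c')
group = [dict(zip(corners, p)) for p in permutations(corners)]

def compose(f, g):
    """Apply g first, then f."""
    return {x: f[g[x]] for x in corners}

identity = {x: x for x in corners}

# Closure: composing any two symmetries gives another symmetry in the set.
for f in group:
    for g in group:
        assert compose(f, g) in group

# Inverses: every symmetry can be undone by another one.
for f in group:
    assert any(compose(f, g) == identity for g in group)

print(len(group))  # 6 elements, matching the list above
```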

## Representations of groups and rings

The example we just saw is one of the most obvious examples of a group representation. Two kinds of representations are very commonly used: one of them realizes the group as a geometric entity, a set of transformations of a vector space, and the other, the permutation representation, realizes the group as a collection of permutations.

Now, an ordinary vector space over a field can be viewed as a bunch of vectors together with a set of geometric transformations of them, namely "only" scaling: multiplication by a scalar stretches the space. Viewed this way, modules over an algebra over a field are an easy extension: instead of allowing only stretching, we add other transformations, and require that these transformations cooperate in a way described by the ring we represent. Thus, we have some set of transformations of the space that correspond to the ring elements, and require that multiplying ring elements corresponds to composing transformations.

With the group ring described above, we thus get what we expect for group ring modules: they carry transformations corresponding to the elements of the group ring, and composing transformations corresponds to multiplying elements of the group ring.
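Spelled out in symbols (a sketch of the standard definition): a representation of a group G on a vector space V over a field k is a map [tex]\rho: G \to GL(V)[/tex] satisfying [tex]\rho(gh) = \rho(g)\rho(h)[/tex] for all g, h in G. Extending linearly gives an algebra homomorphism [tex]kG \to \mathrm{End}_k(V)[/tex] sending [tex]\sum_g \lambda_g g[/tex] to [tex]\sum_g \lambda_g \rho(g)[/tex] - which is exactly the kG-module structure on V.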

The benefit of this level of abstraction is that all of a sudden, we can start building these geometric objects from things other than groups: finitely presented algebras, quiver algebras, categories, partial orders - to mention just a few of the things we can study representations of. For each of these, representation theory is the study of the structure of modules over the structure in question - i.e. of the ways to manipulate a space with elements from the structure.

But, for now, back to the group case. Given the example above, it's not entirely clear whether all group elements must act by distinct transformations in a representation - so let me immediately state the answer: we only require group elements to be assigned linear transformations in a way that is coherent with the group structure. Thus any group has the trivial representation, where every group element is mapped to the identity map of the vector space.
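As a sketch of how little is required, here the trivial representation of the six-element triangle group is checked to satisfy the homomorphism condition, even though all six elements collapse to one and the same map (names are illustrative):

```python
from itertools import permutations

# The group: all permutations of three symbols, as in the triangle example.
elems = list(permutations(range(3)))

def mult(p, q):
    """Group multiplication: apply q first, then p."""
    return tuple(p[q[i]] for i in range(3))

# The trivial representation sends EVERY element to the identity map of a
# one-dimensional space -- here simply the scalar 1.
def trivial(p):
    return 1

# The homomorphism condition rho(p*q) = rho(p) rho(q) holds, even though
# all six distinct group elements act by the same transformation.
for p in elems:
    for q in elems:
        assert trivial(mult(p, q)) == trivial(p) * trivial(q)

print("the trivial representation is a valid representation")
```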

## Ordinary versus modular representations

**Maschke's theorem**:

*If θ is a representation of a group G on a vector space V over a field k - i.e. a map that takes each group element to a linear map V->V - and the characteristic of k does not divide |G|, then any invariant subspace U in V has a complement W such that [tex]V=U\oplus W[/tex] and the transformations from G map each of the two components into itself.*

This means, furthermore, that every matrix representing a group element can be written in a form with blocks strung along the diagonal and zeroes everywhere else.

This has a couple of alternative formulations. One of the formulations that I like the most is that the higher Ext-groups always vanish if the characteristic of the field doesn't divide the group order.
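As an illustration of Maschke's theorem at work, here is a sketch for the triangle group acting on k^3 by permuting coordinates (the particular complement basis is my own choice, made for convenience): the line spanned by (1,1,1) is invariant, the plane of vectors with coordinate sum zero is a complement, and in a basis adapted to this split every group element really does become block diagonal:

```python
from fractions import Fraction as F
from itertools import permutations

# Permutation matrices: the triangle group (all permutations of three
# letters) acting on k^3 by permuting coordinates.
def perm_matrix(p):
    return [[F(1) if p[j] == i else F(0) for j in range(3)] for i in range(3)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Basis adapted to the decomposition: the invariant line spanned by
# (1,1,1), plus two vectors spanning the invariant plane of vectors whose
# coordinates sum to zero.  The columns are mutually orthogonal, so the
# inverse of P is just the transpose with rows rescaled.
cols = [(1, 1, 1), (1, -1, 0), (1, 1, -2)]
P = [[F(cols[j][i]) for j in range(3)] for i in range(3)]
norms = [3, 2, 6]  # squared lengths of the columns
Pinv = [[F(cols[i][j], norms[i]) for j in range(3)] for i in range(3)]

# In the adapted basis every group element becomes block diagonal:
# a 1x1 block (the trivial summand) and a 2x2 block.
for p in permutations(range(3)):
    M = matmul(Pinv, matmul(perm_matrix(p), P))
    assert M[0][1] == M[0][2] == M[1][0] == M[2][0] == 0
    assert M[0][0] == 1  # the 1x1 block is the trivial representation

print("all six matrices are block diagonal in the adapted basis")
```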

It can happen that we cannot find any proper invariant subspaces
under the group action - in this case we have what is called a *simple
module*. By Maschke's theorem, this is the same thing as being
*indecomposable* - which means that the module cannot be written as a
direct sum of smaller modules, or equivalently that the matrix
representation has only one block, filling out the entire matrix. Thus
the simple modules end up being the building blocks of all modules - and
Maschke's theorem tells us that when we build modules out of them, the
building blocks remain cleanly separated.

Because we have Maschke's theorem in the ordinary case, things turn out
to be reasonably easy, and we end up studying mainly the traces of the
matrices that the group elements map to. The trace turns out to be
independent of the choice of basis for the space we're looking at, and a
very important invariant in nice representation situations - called the
*character*.
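For the permutation representation of the triangle group, the character is easy to compute by hand: the trace of a permutation matrix is the number of ones on its diagonal, i.e. the number of fixed points of the permutation. A quick illustrative check:

```python
from itertools import permutations

# Character of the permutation representation of the triangle group on k^3:
# the trace of a permutation matrix counts the fixed points of the
# permutation, and the trace is unchanged by any change of basis.
def character(p):
    return sum(1 for i in range(3) if p[i] == i)

values = sorted(character(p) for p in permutations(range(3)))
print(values)  # [0, 0, 1, 1, 1, 3]: rotations -> 0, reflections -> 1, identity -> 3
```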

Suppose now that the condition in Maschke's theorem doesn't hold. That is, we have a field k of characteristic p - so that in that field 0=1+1+...+1 (p times). Over this field, we set up a vector space, and we let a finite group whose order is divisible by p act on it - i.e. we have transformations corresponding to the group elements.

We can no longer trust that the simple modules glue together neatly - it is entirely possible that they end up sticking together more tightly than expected. All is not lost, however. First off, the ways in which the simple modules can stick together are parametrized by the Ext-groups - or in topological terms, by the cohomology of the group.
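A minimal sketch of Maschke's theorem failing, in the smallest interesting case (the particular matrix is my choice of illustration): the cyclic group of order 3 acting on a two-dimensional space over a field of characteristic 3. The action has an invariant line, but no invariant complement to it:

```python
p = 3  # characteristic p field, and a cyclic group of order p

# A generator g of the cyclic group C_3 acts on (F_3)^2 via the matrix
# [[1, 1], [0, 1]]; all arithmetic is mod 3.
def act(v):
    return ((v[0] + v[1]) % p, v[1] % p)

vectors = [(x, y) for x in range(p) for y in range(p)]

# g really has order 3: applying it three times is the identity.
for v in vectors:
    w = v
    for _ in range(p):
        w = act(w)
    assert w == v

# The line spanned by (1, 0) is invariant under the action...
assert act((1, 0)) == (1, 0)

# ...but no complementary invariant line exists: every nonzero vector that
# g maps to a scalar multiple of itself already lies on that same line.
fixed_lines = [v for v in vectors if v != (0, 0) and
               act(v) in [((k * v[0]) % p, (k * v[1]) % p) for k in range(p)]]
assert all(v[1] == 0 for v in fixed_lines)
print("the invariant line has no invariant complement over F_3")
```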

In the coming posts I will explore modular representation theory on this blog - as I learn the details myself. If you follow along, I'll tell you about decompositions of algebras, about Brauer characters - a way to adapt the character theory that ordinary representation theory is all about to the modular case - about blocks, and most probably also about Donovan's conjecture.