# Coordinatization with hom complexes

These are notes from a talk given at the Stanford applied topology seminar by Gunnar Carlsson on 9 Oct 2009. The main function of this blog post is to give me an easily accessible record of the ideas in that talk.

## Coordinatization

First off, a few words on what we mean by coordinatization: as in algebraic geometry, we say that a coordinate function is some map [tex]X\to\mathbb R[/tex], or possibly [tex]X\to\mathbb C[/tex], with all the niceness properties we'd expect in the context we're working in.

A particularly good example is Principal Component Analysis, which yields a linear automorphism of the ambient space chosen so that the spread of the data points is maximized along the leading coordinates.
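As a concrete illustration (not from the talk), PCA as a coordinate change can be sketched in a few lines of numpy; the data, sizes, and seed here are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: points spread mostly along one direction in the plane.
data = rng.normal(size=(200, 2)) @ np.array([[3.0, 0.0], [0.0, 0.5]])

centered = data - data.mean(axis=0)
# SVD of the centered data: the rows of Vt are the principal directions.
_, singular_values, Vt = np.linalg.svd(centered, full_matrices=False)

# New coordinates: the data expressed in the principal-direction basis.
coords = centered @ Vt.T
# The first coordinate now carries the largest variance.
print(coords.var(axis=0))
```

The change of basis is orthogonal, so distances between data points are preserved; only the coordinates change.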

## Topological coordinatization

In topology there are ways to compute spaces of maps directly
from available homological information. The key phrase here is the
*Eilenberg-Moore spectral sequence*, by which we compute
[tex]H^*(\Omega X)[/tex] from [tex]H^*(X)[/tex]. We note that
[tex]\Omega X = Top_*(S^1,X)[/tex] with the *compact-open* topology.

Variations of this spectral sequence allow us to compute [tex]H^*(Map(X,Y))[/tex]. A homotopy class of maps thus corresponds to a cohomology element of [tex]H^*(Map(X,Y))[/tex].

## Monad approximation

In the limit over the symmetric products [tex]Sp^n(X)[/tex], we get the union [tex]Sp^\infty(X)[/tex], the infinite symmetric product.

Suppose we are really looking for maps [tex]Z\to Sp^\infty(X,*) \supseteq X = Sp^1(X)[/tex].

Recall: chain maps [tex]f,g[/tex] are chain homotopic if there is a family [tex]h_i: C_i\to D_{i+1}[/tex] such that [tex]f_i-g_i = \partial_D h_i + h_{i-1}\partial_C[/tex].
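This identity is easy to verify on a toy example. Below is a numpy sketch (the interval complex, the maps, and the homotopy are all chosen purely for illustration): [tex]f[/tex] is the identity, [tex]g[/tex] collapses the interval to a vertex, and [tex]h[/tex] is an explicit homotopy between them.

```python
import numpy as np

# Chain complex of an interval: vertices a, b and one edge e with de = b - a.
# Bases: degree 0 = [a, b], degree 1 = [e].
d1 = np.array([[-1], [1]])                   # boundary map C_1 -> C_0

# f = identity; g collapses everything to the vertex a.
f0, f1 = np.eye(2, dtype=int), np.array([[1]])
g0, g1 = np.array([[1, 1], [0, 0]]), np.array([[0]])

# Candidate chain homotopy h_0 : C_0 -> C_1 with h_0(a) = 0, h_0(b) = e.
h0 = np.array([[0, 1]])

# Check f - g = dh + hd in each degree (h_1 = 0 since C_2 = 0, h_{-1} = 0).
assert np.array_equal(f0 - g0, d1 @ h0)      # degree 0
assert np.array_equal(f1 - g1, h0 @ d1)      # degree 1
```

The assertions pass: [tex]f[/tex] and [tex]g[/tex] are chain homotopic, reflecting the fact that the interval is contractible.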

How, though, do we get from [tex][Z,Sp^\infty(X)][/tex] to [tex][Z,X][/tex]?

## Cosimplicial spaces

In the dual setting of cosimplicial spaces, there is a *total space* or *realization*, given by a subspace of [tex]X^0\times(X^1)^{\Delta[1]}\times(X^2)^{\Delta[2]}\times(X^3)^{\Delta[3]}\times\dots[/tex].

The mapping space [tex]Top_*(Z,X)[/tex] maps to [tex]Top_*(Z,Sp^\infty X)[/tex]; on the chain level, elements of [tex]H_*(Hom(C_*Z,C_*X))[/tex] correspond to chain homotopy classes of chain maps. The differentials of the spectral sequence give us conditions for chain maps to come from actual maps [tex]Z\to X[/tex] on the topological level.

Suppose now that [tex]X[/tex] is a simplicial complex, and [tex]H_*X[/tex] is known, say isomorphic to [tex]H_*S^1[/tex].

We pick [tex]S^1[/tex] as a model space, and work in [tex]Hom(C_*X,C_*S^1)[/tex]. From the spectral sequence above we extract conditions that single out chain maps that might come from topological maps.

Suppose we had some chain map [tex]C_*X\overset{\varphi}{\to}C_*S^1[/tex] that sends [tex]\sigma\mapsto\sum_{i=0}^n\eta_i[/tex].

We look for a chain homotopic map [tex]\hat\varphi[/tex] such that [tex]\hat\varphi(\sigma)[/tex] is a single simplex (this will usually be WAY too optimistic) or at least [tex]\hat\varphi(\sigma) = \sum_{i=0}^n\eta_i[/tex] with [tex]n[/tex] small and the [tex]\eta_i[/tex] close together in graph distance.
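As an illustration of this kind of search (a toy sketch, not the method of the talk): over [tex]\mathbb Z/2[/tex], take both [tex]X[/tex] and the model [tex]S^1[/tex] to be the boundary of a triangle, and brute-force over chain homotopies [tex]h_0[/tex] to minimize the support of the degree-1 part of the chain map. All choices below (the complex, the smearing homotopy, the enumeration) are illustrative.

```python
import itertools
import numpy as np

# Model: X and S^1 are both the boundary of a triangle, chains over Z/2.
# Edge basis [e01, e12, e02]; d(e_ij) = v_i + v_j (mod 2).
d1 = np.array([[1, 0, 1],
               [1, 1, 0],
               [0, 1, 1]])

phi1 = np.eye(3, dtype=int)          # degree-1 part of a chain map (identity)
# Smear it out by a homotopy so it stays chain homotopic to the identity
# but has larger support.
smear = np.array([[1, 1, 0], [0, 0, 0], [0, 0, 1]])
phi1 = (phi1 + smear @ d1) % 2

def support(m):
    return int(m.sum())              # number of nonzero entries over Z/2

# Brute-force search over all homotopies h0 : C_0(X) -> C_1(S^1),
# replacing phi1 by phi1 + h0 @ d1 and keeping the sparsest result.
best = phi1
for bits in itertools.product([0, 1], repeat=9):
    h0 = np.array(bits).reshape(3, 3)
    cand = (phi1 + h0 @ d1) % 2
    if support(cand) < support(best):
        best = cand

print(support(phi1), support(best))  # prints: 5 3
```

Already in this tiny example the search space has [tex]2^9[/tex] homotopies; the combinatorial blow-up for realistic complexes is exactly the trouble discussed next.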

Yi Ding: there are likely many simplicial maps [tex]X\to S^1[/tex] inducing any given [tex]H_*X\to H_*S^1[/tex]. This makes optimizing the quality of maps (as measured by smallness of preimages of given simplices) troublesome, since the search space is large and we are likely to get stuck in local minima.

The topological approach produces very many maps, but we want just ONE - albeit with quality guarantees. The big question is how to get there:

- Shrink the space? Homotopy collapses to reduce the number of generators?
- Ask for more side conditions? If so, which?
- Minimize the size of preimages and kernels, smoothing the resulting map?
- Run with one condition until we reach a minimum, then change conditions to escape that particular sink, and repeat until everything looks stable?

Thus we need to search for strategies to improve convergence to actually good maps.

The questions we pose are similar to:

Jeff Erickson, et al.: given [tex]X[/tex] and [tex]x\in H_n(X)[/tex], how do we find a good representative [tex]\xi\in x[/tex]?
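In a toy setting, "find a good representative" can be made concrete as minimizing the support of a cycle within its homology class. Here is a brute-force numpy sketch over [tex]\mathbb Z/2[/tex]; the complex (a square with one filled triangle) and all names are illustrative:

```python
import itertools
import numpy as np

# A square 0-1-2-3 with diagonal 02 and the triangle 012 filled in (Z/2).
# Edge basis: [01, 12, 23, 30, 02]; one 2-cell: [012].
d2 = np.array([[1], [1], [0], [0], [1]])   # d[012] = 01 + 12 + 02

cycle = np.array([1, 1, 1, 1, 0])          # the square loop, support 4

# Every representative of the same class is cycle + d2(z) for a 2-chain z.
best = cycle
for bits in itertools.product([0, 1], repeat=d2.shape[1]):
    cand = (cycle + d2 @ np.array(bits)) % 2
    if cand.sum() < best.sum():
        best = cand

print(best)   # the shorter representative 23 + 30 + 02
```

The enumeration over 2-chains is exponential in the number of 2-cells, which is why this question calls for real algorithmic ideas rather than brute force.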

Finally: many problems in data analysis call for analyzing contractible spaces. So if we pick out a notion of boundary, doing all of this relative to that boundary would give us increased power to apply the resulting methods in data analysis.