A question about Homotopy (Michael Harris's recent book)


In the recent book "Mathematics without Apologies: Portrait of a Problematic Vocation" by Michael Harris there is a passage I want to call your attention to, specifically on pages 211-212. Could anyone here elaborate on the problem of the different steps and the time interval he refers to in the first paragraph of page 212? His formulation is clear enough to me; I would just like to know more about the history of the problem, the practical and theoretical solutions proposed in the field, and so on.

Here you have a brief summary:

In Figure 7.5 on page 211, a homotopy from the empty-set sign to the infinity sign is presented. He then discusses which losses of information are allowed, using the example of a soft pretzel, and makes a brief reference to Voevodsky.

The crucial part, though, is the following paragraph, which I quote:

"But working rigorously with homotopies presents new problems. To make it a mathematical theory, we have to imagine each homotopy as a stretching/shrinking procedure that takes place in a fixed time interval, say one second. A problem is already apparent in Figure 7.5: each of the three steps is supposed to last one second, but the whole procedure is also a homotopy, and, therefore, wants to last one second. The traditional solution is to speed up the intermediate steps to make the total come out to one second. But there are (infinitely) many ways to do this, and the set of all such ways is itself a topological space: how do we choose the right one? The traditional answer is that there is no right answer: for the riddles topology was originally invented to solve, it doesn't matter which way you choose."

He then moves on in the next paragraph to describe how, from the 1960s on, topologists concerned with keeping track of all the intermediate choices sought (small-f) foundations for higher category theory. He also mentions Jacob Lurie's development of $\infty$-categories, and so on.

Thanks in advance.

Best answer:

The basic things we're looking at are paths, that is maps from the interval $I = [0,1]$ to some space $X$. If $X$ and $Y$ are nice enough spaces (I don't want to get bogged down in technicalities), then a homotopy between two maps $f, g : X \to Y$ is the same thing as a path $h : I \to Y^X$ such that $h(0) = f$ and $h(1) = g$, so this whole discussion applies in particular to homotopies.

When you've got two paths $\alpha, \beta : I \to X$ such that the end of $\alpha$ coincides with the beginning of $\beta$ (IOW, $\alpha(1) = \beta(0)$), then you can concatenate the paths $\alpha$ and $\beta$, and obtain a new path $\beta \alpha$ (written like function composition). The traditional way of doing this is to define: $$(\beta\alpha)(t) = \begin{cases} \alpha(2t) & 0 \le t \le \frac{1}{2} \\ \beta(2t-1) & \frac{1}{2} \le t \le 1 \end{cases}$$ Since $\alpha(1) = \beta(0)$, this is well defined and continuous at $t = 1/2$. Basically, you go through the path $\alpha$ at double speed, then through the path $\beta$ at double speed.
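As a quick illustration of the double-speed formula (a minimal sketch in Python; the function names and the choice of $X = \mathbb{R}^2$ are mine, not from the text), a path can be modeled as a function $[0,1] \to \mathbb{R}^2$ returning points as tuples:

```python
# A minimal sketch: a path is a function [0, 1] -> R^2, points as tuples.

def concat(alpha, beta):
    """Concatenation beta.alpha: run alpha at double speed, then beta."""
    def gamma(t):
        if t <= 0.5:
            return alpha(2 * t)
        return beta(2 * t - 1)
    return gamma

# Two straight segments in the plane, with alpha(1) == beta(0) == (1, 0).
alpha = lambda t: (t, 0.0)     # from (0, 0) to (1, 0)
beta = lambda t: (1.0, t)      # from (1, 0) to (1, 1)
path = concat(alpha, beta)     # traces the corner (0,0) -> (1,0) -> (1,1)
```

At $t = 1/2$ the concatenated path sits exactly at the common point $\alpha(1) = \beta(0)$, which is why the case split is continuous there.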


From now on, for simplicity, I will consider loops. Fix some base point $x_0 \in X$, and define $$\Omega X = \{ \gamma : I \to X \mid \gamma(0) = \gamma(1) = x_0 \},$$ in other words paths that start and end at $x_0$. This avoids the discussion about "$\alpha(1) = \beta(0)$", because you can concatenate all loops (they all start and end at the same point), and it's enough to have a general idea of the theory.


But there is a problem now. Suppose you have three loops $\alpha, \beta, \gamma \in \Omega X$. You want to concatenate them. You have basically two ways of doing that:

  • first concatenate $\alpha$ and $\beta$, then concatenate the result with $\gamma$, to get $\gamma(\beta\alpha)$;
  • or first concatenate $\beta$ and $\gamma$, and then concatenate $\alpha$ with the result to obtain $(\gamma\beta)\alpha$.

Then, unless all three loops are constant (assuming your space is Hausdorff), $$\gamma(\beta\alpha) \neq (\gamma\beta)\alpha \quad(!)$$ Concatenation is not, in general, associative. The two loops $\gamma(\beta\alpha)$ and $(\gamma\beta)\alpha$ are homotopic, but not equal. There is also no "unit loop" $e$ such that concatenating it with another loop gives back the original loop (i.e. $e \gamma = \gamma$ or $\gamma e = \gamma$).
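You can see the failure of associativity numerically (a sketch; the "tent" loops in $X = \mathbb{R}$ based at $x_0 = 0$ are my own choice for illustration): the two parenthesizations trace the same image but at different speeds, so they disagree pointwise.

```python
# Loops in X = R based at x0 = 0; a "tent" loop goes 0 -> 1/2 -> 0.
tent = lambda t: min(t, 1 - t)

def concat(f, g):
    """The loop g.f: run f at double speed, then g."""
    def h(t):
        if t <= 0.5:
            return f(2 * t)
        return g(2 * t - 1)
    return h

alpha = tent
beta = lambda t: 2 * tent(t)
gamma = lambda t: 3 * tent(t)

u = concat(concat(alpha, beta), gamma)   # gamma(beta alpha)
v = concat(alpha, concat(beta, gamma))   # (gamma beta) alpha

# At t = 1/8: u is already halfway through alpha, v is only a quarter in.
```

Both loops start and end at $0$, but $u(1/8) = \alpha(1/2) = 1/2$ while $v(1/8) = \alpha(1/4) = 1/4$.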

So how do we solve this? One possible solution is Moore loops. Instead of requiring all paths to take "one second" ($I = [0,1]$) to complete, you consider more general paths of the form $\alpha : [0,T] \to X$ where $T \ge 0$ (s.t. $\alpha(0) = \alpha(T) = x_0$). Then given $\alpha : [0,T] \to X$ and $\beta : [0,T'] \to X$ that start and end at $x_0$, you can obtain a new path $\beta\alpha : [0,T+T'] \to X$ defined by $$(\beta\alpha)(t) = \begin{cases} \alpha(t) & 0 \le t \le T \\ \beta(t-T) & T \le t \le T+T' \end{cases}$$ Then you can verify that this defines an associative law, and the constant loop $\mathrm{cst}_{x_0} : [0,0] \to X$ is a unit.
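In code (a sketch; the pair encoding `(T, f)` and the tent loops are my own choices), Moore concatenation is strictly associative and strictly unital, because no rescaling ever happens:

```python
# A Moore loop based at x0 = 0 is modeled as a pair (T, f),
# with f : [0, T] -> R and f(0) == f(T) == 0.

def moore_concat(a, b):
    """Concatenate without rescaling: the result has duration T + T'."""
    Ta, fa = a
    Tb, fb = b
    def f(t):
        return fa(t) if t < Ta else fb(t - Ta)
    return (Ta + Tb, f)

def tent(T):
    """A tent-shaped loop of duration T: 0 -> T/2 -> 0."""
    return (T, lambda t: min(t, T - t))

a, b, c = tent(1.0), tent(2.0), tent(0.5)

left = moore_concat(moore_concat(a, b), c)    # c (b a)
right = moore_concat(a, moore_concat(b, c))   # (c b) a

cst = (0.0, lambda t: 0.0)   # the unit: the constant loop of duration zero
```

Both parenthesizations have duration $3.5$ and agree at every time $t$, exactly — not just up to homotopy; and concatenating with `cst` changes nothing.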

Moore loops are good enough for some purposes, but they're unsatisfying for others: two loops can have different domains, and the concatenation of two loops has yet another domain. We're really interested in understanding $\Omega X$, so what can we do?

The answer lies in the little intervals operad $\mathtt{D}_1$. I refer you to your other question, where I've written an explanation of what it is; I will use the same notation here. When $n=1$, $\mathtt{D}_1$ is called the little intervals operad because $D^1 = [-1,1] \cong I$ is just an interval. In what follows I will identify $D^1$ with $I$; it makes no difference.

A fundamental feature of $\Omega X$ is that it is an algebra over the little intervals operad. What does this mean? Take an operation of arity $r$: $$c = (c_1, \dots, c_r) \in \mathtt{D}_1(r).$$ Recall that this means each $c_i : I \to I$ is an affine embedding ($c_i(x) = t_i + \lambda_i x$, $0 < \lambda_i < 1$) and $c_i((0,1)) \cap c_j((0,1)) = \varnothing$ for $i \neq j$. Take also loops $\alpha_1, \dots, \alpha_r \in \Omega X$. Then you can define $c(\alpha_1, \dots, \alpha_r) \in \Omega X$ by $$c(\alpha_1, \dots, \alpha_r) : t \mapsto \begin{cases} \alpha_i(u) & t = c_i(u) \text{ for some } i, u \\ x_0 & \text{otherwise.} \end{cases}$$

What does this look like? An element, say, $c \in \mathtt{D}_1(3)$ looks like this:

[Figure: the interval $I$ with three disjoint subintervals, labeled "1" (red), "2" (blue), and "3" (green), separated by black gaps.]

If you have three loops $\alpha, \beta, \gamma \in \Omega X$, then $c(\alpha,\beta,\gamma)$ is the loop that's equal to $\alpha$ (sped up to match the length) on the red part "1", $\beta$ on the blue part "2", and $\gamma$ on the green part "3". On the rest (the black part), it's equal to $x_0$. Since the endpoints of $\alpha$, $\beta$, and $\gamma$ are all equal to $x_0$, this defines a continuous loop $c(\alpha, \beta, \gamma) \in \Omega X$. One can then check that this defines an algebra over $\mathtt{D}_1$.
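Here is a direct translation of the action into code (a sketch; the encoding of an element of $\mathtt{D}_1(r)$ as a list of pairs $(t_i, \lambda_i)$, i.e. $c_i(x) = t_i + \lambda_i x$, is my own):

```python
# An element of D_1(r) is encoded as a list of (t_i, lambda_i) pairs,
# one per subinterval; the subintervals have disjoint interiors.

def act(c, loops, x0=0.0):
    """Run loops[i], rescaled, on the i-th subinterval; stay at x0 elsewhere."""
    def result(t):
        for (ti, li), alpha in zip(c, loops):
            if ti <= t <= ti + li:
                return alpha((t - ti) / li)
        return x0
    return result

# Tent loops in X = R based at x0 = 0, as an example.
tent = lambda t: min(t, 1 - t)
alpha, beta, gamma = tent, lambda t: 2 * tent(t), lambda t: 3 * tent(t)

# Three disjoint subintervals of I, as in the figure: [0,1/4], [3/8,5/8], [3/4,1].
c = [(0.0, 0.25), (0.375, 0.25), (0.75, 0.25)]
loop = act(c, [alpha, beta, gamma])
```

On the gaps between the subintervals the loop just sits at $x_0$, which is what makes the piecewise definition continuous.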

So what does this have to do with associativity? Consider the element $m \in \mathtt{D}_1(2)$ given by embedding the first interval as the first half $[0,1/2]$ (so $c_1(t) = t/2$) and the second interval as the second half $[1/2,1]$ ($c_2(t) = (t+1)/2$). Then for $\alpha,\beta \in \Omega X$, $m(\alpha,\beta) = \beta \alpha$ is the concatenation of the two loops.

You have two different operations $u = m(m, \operatorname{id})$ and $v = m(\operatorname{id}, m)$ in $\mathtt{D}_1(3)$, with $u(\alpha,\beta,\gamma) = \gamma(\beta\alpha)$ and $v(\alpha,\beta,\gamma) = (\gamma\beta)\alpha$. But now the structure of $\mathtt{D}_1$-algebra on $\Omega X$ provides a homotopy between $u(\alpha,\beta,\gamma)$ and $v(\alpha,\beta,\gamma)$! It is in fact induced by a path in $\mathtt{D}_1(3)$, given by rescaling the intervals. And more generally, if you have more loops, any parenthesization will be homotopic to any other (so, say, $((\alpha_1 \alpha_2) \alpha_3) \alpha_4 \sim (\alpha_1 \alpha_2) (\alpha_3 \alpha_4)$ through a homotopy that comes from the $\mathtt{D}_1$-algebra structure).
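That rescaling path can be written down explicitly (a sketch; I encode an element of $\mathtt{D}_1(3)$ as a list of (offset, length) pairs, and linearly interpolate the two configurations, which stays inside $\mathtt{D}_1(3)$ since the three intervals tile $[0,1]$ throughout):

```python
def act(c, loops, x0=0.0):
    """Run loops[i], rescaled, on the i-th subinterval; stay at x0 elsewhere."""
    def result(t):
        for (ti, li), alpha in zip(c, loops):
            if ti <= t <= ti + li:
                return alpha((t - ti) / li)
        return x0
    return result

# u = m(m, id) uses intervals [0,1/4], [1/4,1/2], [1/2,1];
# v = m(id, m) uses intervals [0,1/2], [1/2,3/4], [3/4,1].
u_cfg = [(0.0, 0.25), (0.25, 0.25), (0.5, 0.5)]
v_cfg = [(0.0, 0.5), (0.5, 0.25), (0.75, 0.25)]

def interpolate(s):
    """The point at time s on a straight-line path in D_1(3), u_cfg to v_cfg."""
    return [((1 - s) * t0 + s * t1, (1 - s) * l0 + s * l1)
            for (t0, l0), (t1, l1) in zip(u_cfg, v_cfg)]

tent = lambda t: min(t, 1 - t)
loops = [tent, lambda t: 2 * tent(t), lambda t: 3 * tent(t)]

def H(s, t):
    """The homotopy: at homotopy time s, the rescaled concatenation at t."""
    return act(interpolate(s), loops)(t)
```

At $s = 0$ this is $\gamma(\beta\alpha)$ and at $s = 1$ it is $(\gamma\beta)\alpha$; at every intermediate $s$ it is still a loop based at $x_0$, so the basepoint stays fixed throughout the homotopy.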

Such a structure is called a strongly homotopy associative algebra, or $A_\infty$-algebra, because you don't just know that $a(bc) \sim (ab)c$: you have a specific homotopy between the two (for all $a$, $b$, $c$), and these homotopies are all compatible with one another (meaning, for example, that if you have two homotopies $((ab)c)d \leadsto a(b(cd))$, then they are equal, similar to the coherence axioms for a monoidal category), and so on in higher arity. The associahedra of Stasheff are combinatorial models for how these homotopies are compatible.

The recognition principle of Boardman–Vogt and May also tells you that, under technical conditions, if a space $Y$ can be endowed with the structure of a $\mathtt{D}_1$-algebra, then there is another space $X$ such that $Y \sim \Omega X$. So in that sense, the little intervals operad exactly captures what it means to be a loop space.

This is the beginning of a very long story that's still being developed today. For example, an algebra over the little disks operad $\mathtt{D}_2$ is (by the recognition principle) essentially the same thing as a two-fold loop space $\Omega^2 X = \Omega(\Omega X)$. It's basically a strongly homotopy associative algebra with "one level" of commutativity, meaning there's a homotopy $ab \leadsto ba$ (but it is not strongly homotopy commutative). Equivalently, it's the same thing as two strongly homotopy associative structures on the same space that are compatible with each other in the sense of the Eckmann–Hilton argument. There are also the applications to $\infty$-categories that you mention in your question, where the associativity axiom of an ordinary category is relaxed and instead only holds up to (coherent) homotopy.