Boundary of boundary of singular cube is zero (Spivak)


At the bottom of page 99 of M. Spivak's Calculus on Manifolds he arrives at the formula

$$\partial (\partial c)=\sum_{i=1}^n \sum_{\alpha=0,1} \sum_{j=1}^{n-1} \sum_{\beta=0,1} (-1)^{i+\alpha+j+\beta} (c_{(i,\alpha)})_{(j,\beta)} $$

Here $c$ is a singular $n$-cube in $A \subseteq \mathbb{R}^n$, that is, a continuous function $[0,1]^n \to A$, and $$c_{(i,\alpha)} = c \circ I^n_{(i,\alpha)},$$ where $I^n_{(i,\alpha)} : [0,1]^{n-1} \to [0,1]^n$ is the map that inserts $\alpha \in \{0,1\}$ as the $i$th coordinate: $$I^n_{(i,\alpha)}(x^1, \dots, x^{n-1}) = (x^1, \dots, x^{i-1}, \alpha, x^i, \dots, x^{n-1}).$$
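As a sanity check (this is just a sketch, not code from Spivak), the face map $c_{(i,\alpha)}$ can be transcribed directly: it plugs the constant $\alpha$ into the $i$th slot, turning an $n$-cube into an $(n-1)$-cube.

```python
def face(c, i, alpha):
    """Return c_(i,alpha): the (n-1)-cube obtained by fixing the i-th
    coordinate of c at alpha (i is 1-indexed, alpha is 0 or 1)."""
    def c_face(*t):  # t has one fewer coordinate than c expects
        return c(*t[:i - 1], alpha, *t[i - 1:])
    return c_face

# Example: the identity 2-cube c(x, y) = (x, y) on the unit square.
c = lambda x, y: (x, y)
bottom = face(c, 2, 0)   # fixes the 2nd coordinate at 0
print(bottom(0.5))       # -> (0.5, 0)
```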

The author then claims:

In this sum, for $i \le j$, the terms $(c_{(i,\alpha)})_{(j,\beta)}$ and $(c_{(j+1,\beta)})_{(i,\alpha)}$ occur with opposite signs. Therefore all terms cancel out in pairs and $\partial (\partial c) = 0$.

I can't see how to pair up the terms exactly. If $i = n$, for example, the term $(c_{(n,\alpha)})_{(j,\beta)}$ seems to be paired with $(c_{(j+1,\beta)})_{(n,\alpha)}$. But doesn't the index of the second face, here $n$, take values only in $\{1, 2, \dots, n-1\}$?

Can you please help me understand this?

Thank you!

Accepted answer:

Let's try a small value of $n$ first to see how it works. The first nontrivial case is $n=2$, so take $c : [0,1]^2 \to A$ a singular $2$-cube. Its boundary is $$\partial c = \sum_{i=1}^2 \sum_{\alpha \in \{0,1\}} (-1)^{i+\alpha} c_{(i,\alpha)} = -c_{(1,0)} + c_{(1,1)} + c_{(2,0)} - c_{(2,1)}.$$

If $u : [0,1] \to A$ is a singular $1$-cube, its boundary is: $$\partial u = -u_{(1,0)} + u_{(1,1)} = -u(0) + u(1)$$

And so we get that: $$\begin{align} \partial^2(c) & = - \bigl(-(c_{(1,0)})_{(1,0)} + (c_{(1,0)})_{(1,1)}\bigr) + \bigl(-(c_{(1,1)})_{(1,0)} + (c_{(1,1)})_{(1,1)}\bigr) \\ & \mathrel{\hphantom{=}} + \bigl(-(c_{(2,0)})_{(1,0)} + (c_{(2,0)})_{(1,1)}\bigr) - \bigl(-(c_{(2,1)})_{(1,0)} + (c_{(2,1)})_{(1,1)}\bigr) \end{align}$$ Now how can we pair this?

  • The terms of the form $(c_{(2,0)})_{(1,i)}$ are those that look like $c(i,0)$, and so will be paired with the terms of the form $(c_{(1,i)})_{(1,0)}$. But the sign of $(c_{(2,0)})_{(1,i)}$ is $(-1)^{2+0+1+i} = -(-1)^i$, while the sign of $(c_{(1,i)})_{(1,0)}$ is $(-1)^{1+i+1+0} = (-1)^i$, so they are opposite and cancel.
  • Similarly, the terms of the form $(c_{(2,1)})_{(1,i)}$ are paired with those of the form $(c_{(1,i)})_{(1,1)}$, and the signs cancel again.
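The $n=2$ cancellation above can also be checked mechanically. The sketch below (my own, not from the book) expands $\partial(\partial c)$ into its eight terms $(c_{(i,\alpha)})_{(j,\beta)}$ with signs $(-1)^{i+\alpha+j+\beta}$, evaluates each resulting $0$-cube at a concrete corner, and verifies that the net coefficient of every corner is zero.

```python
from collections import defaultdict

def face(c, i, alpha):
    """c_(i,alpha): fix the i-th coordinate of c at alpha (1-indexed)."""
    def c_face(*t):
        return c(*t[:i - 1], alpha, *t[i - 1:])
    return c_face

c = lambda x, y: (x, y)      # a concrete 2-cube; the identity works fine

totals = defaultdict(int)    # corner point -> net signed coefficient
for i in (1, 2):
    for alpha in (0, 1):
        for j in (1,):       # j ranges over 1..n-1 = {1} when n = 2
            for beta in (0, 1):
                sign = (-1) ** (i + alpha + j + beta)
                # a double face of a 2-cube is a 0-cube: call it with no args
                corner = face(face(c, i, alpha), j, beta)()
                totals[corner] += sign

assert all(v == 0 for v in totals.values())
print("all 4 corners cancel:", dict(totals))
```

Each of the four corners of the square is hit by exactly two of the eight terms, once with each sign, which is precisely the pairing in the two bullets above.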

So hopefully we now see how it works. What happened above is that in $(c_{(i,\alpha)})_{(j,\beta)}$, if $i > j$, then we first "fix" the $i$th coordinate in $c(t_1, \dots, t_n)$ and then the $j$th; but if $i \le j$, when we fix the $i$th coordinate to get $c_{(i,\alpha)}$, the new "$j$th" coordinate in $(c_{(i,\alpha)})_{(j,\beta)}$ is really the $(j+1)$st coordinate of $c$! Let's try to formalize that.

We have the set of indices $$J_n = \{(i,\alpha,j,\beta) \mid 1 \le i \le n,\ \alpha \in \{0,1\},\ 1 \le j \le n-1,\ \beta \in \{0,1\} \},$$ and $\partial^2(c) = \sum_{(i,\alpha,j,\beta) \in J_n} (-1)^{i+\alpha+j+\beta} (c_{(i,\alpha)})_{(j,\beta)}$. We split $J_n$ into two parts: the set $J_n'$ of indices $(i,\alpha,j,\beta)$ such that $i \le j$, and the set $J_n''$ of those such that $i > j$. Then there is a bijection between them, given by: $$\begin{align} \theta : J_n' & \to J_n'' \\ (i,\alpha,j,\beta) & \mapsto (j+1,\beta,i,\alpha) \end{align}$$ Note that $\theta$ really does land in $J_n''$, and this resolves the worry about index ranges: since $i \le j \le n-1$, the new first index $j+1$ lies in $\{2, \dots, n\}$, the new second index $i$ lies in $\{1, \dots, n-1\}$, and $j+1 > i$.

Of course, the bijection isn't chosen at random. If $(i,\alpha,j,\beta) \in J_n'$, i.e. $i \le j$, then: $$(c_{(i,\alpha)})_{(j,\beta)}(t_1, \dots, t_{n-2}) = c(t_1, \dots, \underbrace{\alpha}_{i\text{th position}}, \dots, \underbrace{\beta}_{(j+1)\text{st position}}, \dots, t_{n-2}),$$ and you can check that this is exactly $(c_{(j+1,\beta)})_{(i,\alpha)}$. (At this point I should advise you to check what I've written, if you're not comfortable with all this. Try to see how it works for $n=2$ above, and maybe try $n=3$ if you're courageous. No amount of explanation can replace writing it out for yourself.)

But the signs in front of each are different because of the $+1$ in $j+1$: the term indexed by $(i,\alpha,j,\beta)$ carries the sign $(-1)^{i+\alpha+j+\beta}$, while the term indexed by $\theta(i,\alpha,j,\beta) = (j+1,\beta,i,\alpha)$ carries $(-1)^{(j+1)+\beta+i+\alpha} = -(-1)^{i+\alpha+j+\beta}$. So when you pair $(i,\alpha,j,\beta)$ with $\theta(i,\alpha,j,\beta)$ they cancel each other, and in the end the sum is zero.
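The whole argument can be verified formally for small $n$ without evaluating any cube: a sketch (my own encoding, under Spivak's insertion convention) represents each double face $(c_{(i,\alpha)})_{(j,\beta)}$ by its canonical form, namely which coordinates of the original cube end up fixed and at which values, then sums the signs over all of $J_n$.

```python
from collections import defaultdict

def double_face_key(i, alpha, j, beta):
    """Canonical form of (c_(i,alpha))_(j,beta): the set of original
    coordinates that end up fixed, with their values.  When j >= i, the
    face's j-th slot is really coordinate j+1 of the original cube."""
    if j < i:
        fixed = {j: beta, i: alpha}
    else:
        fixed = {i: alpha, j + 1: beta}
    return frozenset(fixed.items())

for n in range(2, 6):
    totals = defaultdict(int)
    for i in range(1, n + 1):
        for alpha in (0, 1):
            for j in range(1, n):
                for beta in (0, 1):
                    sign = (-1) ** (i + alpha + j + beta)
                    totals[double_face_key(i, alpha, j, beta)] += sign
    # every canonical double face appears once with each sign
    assert all(v == 0 for v in totals.values()), n

print("d(d(c)) = 0 verified formally for n = 2..5")
```

The two branches of `double_face_key` are exactly the two halves $J_n''$ and $J_n'$ of the index set, and the assertion passing reflects that $\theta$ pairs the terms off with opposite signs.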


PS: If you're interested in this, this is essentially the combination of the proof that the singular cubes of a space form a (semi-)simplicial abelian group (I didn't mention degeneracies), and that the differential of the Moore complex of a simplicial abelian group has square zero.