The definition of a chain complex in algebraic topology requires that $d_n\circ d_{n+1}=0$, but why is that? It feels somewhat arbitrary, I have not found it motivated, and it is not obvious to me that this is an interesting constraint. Why not require instead (for example): $d_n\circ d_{n+1}\circ d_{n+2}=0$? Why is $d_n\circ d_{n+1}=0$ more interesting? Also, what do we get if we drop this requirement entirely? Is a chain of e.g. groups, related by some homomorphisms without any restriction, an interesting construction? I am looking for some intuition/justification behind this requirement (and I'm beginning AT).
Chain complex and its homomorphisms
Asked by Bumbble Comm. There are 2 answers below.
The motivation for the boundary operator starts with the topological concept of boundary: if $M$ is a $k$-dimensional manifold with boundary, then $\partial M$ is a $(k-1)$-dimensional manifold without boundary; in other words, $\partial \partial M = \emptyset$.
The original chain complexes were constructed from simplicial complexes. From a topological perspective, a $k$-simplex $\Delta$ is an example of a $k$-manifold with boundary; in particular it is a $k$-ball. Its topological boundary $\partial \Delta$ is a union of $(k-1)$-simplices, and it is a $(k-1)$-sphere. Now the idea is to come up with an algebraic formula for $\partial\Delta$, as a linear combination of the $(k-1)$-simplices lying on that sphere, and to craft the formula so that $\partial\partial\Delta = 0$. The formula for $\partial\Delta$ is the result of that crafting. One can also give a specific geometric motivation for the formula itself.
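To see the crafting in action, here is the standard computation for a $2$-simplex $[v_0,v_1,v_2]$, using the usual alternating-sign boundary formula (stated here for illustration):

```latex
\partial [v_0,v_1,v_2] = [v_1,v_2] - [v_0,v_2] + [v_0,v_1]
\qquad\text{so}\qquad
\partial\partial [v_0,v_1,v_2]
= \bigl([v_2]-[v_1]\bigr) - \bigl([v_2]-[v_0]\bigr) + \bigl([v_1]-[v_0]\bigr) = 0.
```

Each vertex appears twice with opposite signs, which is exactly what the alternating signs are crafted to achieve.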
Added: In the comments the OP also asks: "Why is that requirement worthwhile in a context that is not apparently geometric?"
The original context considered by Poincaré was not a particularly geometric context. Instead it was, as we now word it, a topological context.
Pondering the use of the $\partial$ operator in the context of manifolds, and of the equation $\partial\partial M = \emptyset$ in that context, one might now wonder further (as presumably Poincaré did) whether deeper mathematical information could be extracted by some kind of algebraic abstraction of that operator and that equation. What would be the motivation for wondering that? Why would the brain leap to that kind of oddball generalization? I wish I could ask Poincaré! But, that's how mathematics progresses. Wild, oddball generalizations, however improvident they may seem to be, somehow often work in mathematics, if they are supported by deep experience. In retrospect, we see that Poincaré was absolutely right about this particular wild generalization.
But besides personal speculations, let's look at the mathematical setting. Looking at a simplicial complex $X$, which is made of simplices $\Delta$ on each of which the topological equation $\partial\partial\Delta = \emptyset$ is true, one might wonder whether there is a purely algebraic analogue of that equation, and one might further wonder whether it is possible to extract deeper topological information about $X$ by using algebra in that fashion.
And the point is simply: It works! There is a purely algebraic analogue which does allow one to extract deeper topological information.
This is what one learns in algebraic topology, if you haven't got to it already.
When one constructs a chain complex out of the simplices of $X$, with boundary maps $\partial_n : C_n(X) \to C_{n-1}(X)$ defined by that oddball formula (which is really just the right-hand rule in disguise), one can then prove the algebraic equation $\partial_{n}\partial_{n+1}=0$.
Another addition, to address the funny equation $\partial_{n} \circ \partial_{n+1} \circ \partial_{n+2}=0$: the utility of the equation $\partial_{n}\partial_{n+1}=0$ (as opposed to that funny equation) becomes clear when you apply it to prove the inclusion $$\text{image}(\partial_{n+1} : C_{n+1}(X) \to C_n(X)) \subset \text{kernel}(\partial_n : C_n(X) \to C_{n-1}(X))$$ This inclusion then allows you to define the homology groups of the chain complex, which are the quotient groups $$H_n(X) = \frac{\text{kernel}(\partial_n : C_n(X) \to C_{n-1}(X))}{\text{image}(\partial_{n+1} : C_{n+1}(X) \to C_n(X))}$$ These groups $H_n(X)$ are topological invariants of $X$, meaning that two simplicial complexes which are homeomorphic have isomorphic homology groups. Not only that, these invariants are computable (using nothing more than a smattering of linear algebra and number theory), and therefore very useful as tests for the failure of two spaces to be homeomorphic.
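As a small sanity check on the computability claim, here is an illustrative calculation (my own toy example, not part of the original answer): the Betti numbers over $\mathbb{Q}$ of a hollow triangle, i.e. a simplicial circle, computed from matrix ranks with numpy.

```python
import numpy as np

# Hollow triangle (a simplicial circle): vertices v0, v1, v2 and
# edges [v0,v1], [v0,v2], [v1,v2]; there are no 2-simplices, so d2 = 0.
# Boundary map d1 : C_1 -> C_0, one column per edge, using d[v_i, v_j] = v_j - v_i.
d1 = np.array([
    [-1, -1,  0],   # coefficient of v0 in each edge boundary
    [ 1,  0, -1],   # coefficient of v1
    [ 0,  1,  1],   # coefficient of v2
])

rank_d1 = np.linalg.matrix_rank(d1)

# Over Q: dim H_0 = dim C_0 - rank(d1), since d0 = 0 and ker(d0) = C_0.
#         dim H_1 = dim ker(d1) - rank(d2) = (3 - rank(d1)) - 0.
b0 = 3 - rank_d1
b1 = (3 - rank_d1) - 0
print(b0, b1)  # 1 1: one connected component, one 1-dimensional hole
```

The answer $b_0 = b_1 = 1$ is exactly what one expects for a circle, and the entire computation is two matrix ranks.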
This might be easiest to think about if you take the groups in the complex to be vector spaces.
So: I have a vector space $V$, and I have, let's say, seven vectors in this vector space. I want to know how far they are from being linearly independent, which means I want to know how many different ways there are for some linear combination of them to equal zero.
So I take a seven-dimensional vector space $W$, choose a basis, map the seven basis vectors onto the given seven vectors in $V$, and look at the kernel. The kernel shows me all the different ways in which some linear combination of my vectors adds to zero. It's therefore sort of a measure of how far my original vectors are from being linearly independent. I'll therefore be very happy to find a spanning set for that kernel, containing, say, five vectors.
But those five vectors might be redundant! Perhaps only two or three or four of them are linearly independent. The spanning set appears to be showing me five different linear dependence relations among my seven original vectors, but maybe only three of those are "genuinely different", while the others just follow trivially from the first three (i.e. they are linear combinations of the first three).
So to investigate this, I take another vector space $X$, this time five dimensional, I choose a basis, and I map the five basis vectors onto the five generators for the kernel of my map $W\rightarrow V$. I look at the kernel of my new map, I find generators, I wonder how redundant they are, so I map a new vector space onto that kernel, compute the kernel of this new map, and on and on we go.
Each time I add a map, it serves my purposes to map onto the kernel of the last map I defined. This automatically ensures that the composition of any two successive maps is zero.
So sequences of maps, where any two successive maps compose to zero, arise naturally in this setting.
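The construction described above can be sketched in a few lines (a toy illustration with made-up vectors, using sympy for exact kernels; the particular matrix $A$ is my own invented example):

```python
from sympy import Matrix

# Seven deliberately redundant vectors in V = Q^3, stored as the columns of A.
# A is then the map W = Q^7 -> V sending the i-th basis vector of W
# to the i-th chosen vector in V.
A = Matrix([
    [1, 0, 0, 1, 1, 0, 2],
    [0, 1, 0, 1, 0, 1, 1],
    [0, 0, 1, 0, 1, 1, 1],
])

# A basis for ker(A): each basis vector records one linear dependence
# relation among the seven chosen vectors.
ker_A = A.nullspace()

# The next map X -> W is built to hit that kernel: take the matrix B
# whose columns are the kernel basis vectors.
B = Matrix.hstack(*ker_A)

# By construction, the image of B lies inside ker(A), so the
# composition of the two successive maps is the zero map.
print((A * B).is_zero_matrix)  # True
```

One could now iterate: compute the kernel of $B$, map a new space onto it, and so on, exactly as in the story above; at every stage the composition of successive maps is zero by construction.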
I have a feeling you might have to sit down with a pencil and paper and play with this awhile before it becomes completely clear --- but if you do, I think it will.
Edited to add in response to a comment: Let's go back to the boundary picture, though the underlying intuition is the same:
1) I know that the boundary of a boundary is zero. I'm wondering what else, other than a boundary, might have a boundary of zero.
2) So I take my boundary map $V\rightarrow W$. Its kernel contains everything with boundary zero. Some of those things are the boundaries I already know about, so I'd like to ignore them by modding them out.
3) Therefore I take a boundary map $X\rightarrow V$, which hits exactly the boundaries that I want to mod out. Homology at $V$ will now measure "everything with boundary zero that is not itself a boundary".
Now the point is this: I build my map $X\rightarrow V$ with the explicit purpose of trying to hit all the "uninteresting" parts of the kernel so I can eventually kill them off. If I'm trying to hit the uninteresting parts of the kernel, then I am, a fortiori, hitting the kernel. So the map $X\rightarrow V$ lands in the kernel of $V\rightarrow W$ and therefore the composition is zero.
This is an idea that goes beyond geometry and geometric boundaries. Often you want to understand the kernel of a map; you already understand part of the kernel, so that part's uninteresting. You build another map specifically designed to hit the uninteresting part and keep track of it for you. If you build a map designed to hit the uninteresting part of the kernel, it will in particular hit the kernel, giving you a composition of zero.