I am currently doing a project on a proof of the Borsuk--Ulam theorem. The proof I'm trying to understand is in Matoušek's book "Using the Borsuk–Ulam Theorem". I'm trying to understand the following.
There is a space $H$ of affine maps $h: \mathbb{R}^n \rightarrow \mathbb{R}^n$. Each map is fully determined by the images of $n$ vectors (called vertices). For the sake of this exercise we call a map $\textbf{generic}$ if these $n$ vertices are mapped to $n$ linearly independent vectors in $\mathbb{R}^n$.
We need to show that for any $f \in H$, there is a generic map arbitrarily close to $f$ (with respect to the sup-norm). The idea is that we can view the space $H$ as the real space $\mathbb{R}^{n^2}$, where the images of the $n$ vertices are stacked into a vector in $\mathbb{R}^{n^2}$. For instance, a map $g \in H$ can be written as
$$\begin{bmatrix} g_1(v_1)\\ g_2(v_1)\\ \vdots\\ g_n(v_1)\\ \vdots\\ g_n(v_n) \end{bmatrix}.$$
If a map is $\textbf{not}$ generic, the matrix
$$A = \begin{pmatrix} g_1(v_1) & g_1(v_2) & \cdots & g_1(v_n)\\ \vdots & & & \vdots\\ g_n(v_1) & g_n(v_2) & \cdots & g_n(v_n) \end{pmatrix}$$
has determinant $\det(A) = 0$.
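As a numerical sketch of this identification (the function name `is_generic` and the tolerance are my own, not from the book): a map is generic exactly when the determinant of the matrix of vertex images is nonzero, and a tiny perturbation of the images can restore genericity.

```python
import numpy as np

def is_generic(images, tol=1e-12):
    """images: n x n array whose columns are the vertex images g(v_1), ..., g(v_n).

    The map is generic iff these columns are linearly independent,
    i.e. iff det(A) != 0 (here: bounded away from 0 by a tolerance).
    """
    A = np.asarray(images, dtype=float)
    return abs(np.linalg.det(A)) > tol

# A map sending both vertices to parallel vectors is not generic:
A_bad = np.array([[1.0, 1.0],
                  [2.0, 2.0]])

# Perturbing one image slightly makes the columns independent again:
A_good = A_bad + np.array([[0.0, 1e-6],
                           [0.0, 0.0]])
```

This also previews question iii): the perturbation needed to leave the zero set of $\det$ can be made arbitrarily small.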
Here's where it gets tricky and I'm not sure how to continue. Notice that the maps which are not generic are contained in a variety, as they are the zero set $Z(\det A)$ of the polynomial $\det(A)$. In the book, he uses that, according to Sard's theorem, the non-generic maps in $H$ have measure 0. I have a couple of questions about that:
i) How do I think of this polynomial? Is it a polynomial in $n^2$ variables with no coefficients?
ii) How can we get from Sard's theorem to varieties, as Sard's theorem is essentially about critical points?
iii) Assuming we know ii), and thus that the set of non-generic maps has measure zero in $\mathbb{R}^{n^2}$, how can we relate this to the fact that for every map $f \in H$ there is a generic map arbitrarily close to $f$? I have a basic understanding of measure theory, so I understand that this means the subspace of $H$ is "small", but how can I show this rigorously?
Please let me know if you need more context. Alternatively, the book is easy to find as a PDF online; the proof I'm talking about is in Chapter 2.
Here is my proof of exercise 1(c) of the chapter you're referring to, using elementary means and the notation given in the book.
Let $\Sigma$ be the vector space of affine maps $h: \sigma \rightarrow \mathbb{R}^n$, given by $h(x)=Ax+b$ with $A\in \mathbb{R}^{n\times(n+1)}$, $b\in \mathbb{R}^{n}$ and $x \in \mathbb{R}^{n+1}$. Without loss of generality we choose $\sigma$ as the simplex whose vertices are the standard basis vectors of $\mathbb{R}^{n+1}$ together with the origin. Using the notation $e_0:=0$ we get $V(\sigma)=\{e_0,\dots,e_{n+1}\}$. Every $h\in \Sigma$ is uniquely determined by its values at the vertices of $\sigma$, so we can identify each $h\in \Sigma$ with a vector in $\mathbb{R}^{n(n+2)}$. Let $A_i \in \mathbb{R}^n$ denote the columns of $A$; then every $h\in\Sigma$ is given by
\begin{equation*} (h(e_0),\dots,h(e_{n+1})) = (b,A_1+b,\dots,A_{n+1}+b) \in \mathbb{R}^{n(n+2)}. \end{equation*}

Now if $h$ is nongeneric, we can choose $n$ vectors out of $(h(e_0),\dots,h(e_{n+1}))$ such that the resulting $n\times n$ matrix $M$ is singular. Because the singular $n\times n$ matrices are nowhere dense in $\mathbb{R}^{n^2}$, the same follows for the set of nongeneric $h\in \Sigma$.

Indeed, let $h\in \Sigma$ be nongeneric. Then $h^{-1}(0)$ intersects a face of $\sigma$ of dimension smaller than $n$, so we can find $n$ or fewer vertices of $\sigma$ such that a zero of $h$ lies in their convex hull. Concretely, there is a permutation $p: \{0,1,\dots,n+1\} \to \{0,1,\dots,n+1\}$ and an $\alpha \in \mathbb{R}^n$ with $0\le \alpha_i \le 1$ and $\sum_{i=1}^n \alpha_i=1$ such that
\begin{align*} h\Big(\sum_{i=1}^n \alpha_i e_{p(i)}\Big) = 0 \iff b+\sum_{i=1}^n \alpha_i A_{p(i)} = 0. \end{align*}
We choose $M = (M_1,\dots,M_{n})$ with $M_i = h(e_{p(i)}) = A_{p(i)} + b$. Using elementary column operations we change the first column into $\tilde{M}_1 = \sum_{i=1}^n \alpha_i M_i$ and obtain
\begin{equation*} \tilde{M}_1 = \sum_{i=1}^n \alpha_i M_i = \Big(\sum_{i=1}^n \alpha_i\Big) b + \sum_{i=1}^n \alpha_i A_{p(i)} = b + \sum_{i=1}^n \alpha_i A_{p(i)} = 0. \end{equation*}
In other words, the columns of $M$ satisfy a nontrivial linear relation, so $\det(M)=0$, which concludes the proof.
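The computation at the heart of the proof can be checked numerically. The sketch below (variable names are mine, not the book's) constructs a nongeneric $h$ by choosing $\alpha$ in the simplex and forcing $b + \sum_i \alpha_i A_{p(i)} = 0$, then verifies that the matrix $M$ with columns $M_i = A_{p(i)} + b$ is indeed singular:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# Choose alpha in the open simplex and random columns A_{p(1)}, ..., A_{p(n)};
# then set b := -sum_i alpha_i A_{p(i)}, so that h vanishes at sum_i alpha_i e_{p(i)}.
alpha = rng.random(n)
alpha /= alpha.sum()
A_cols = rng.standard_normal((n, n))
b = -A_cols @ alpha                    # forces b + sum_i alpha_i A_{p(i)} = 0

# M_i = h(e_{p(i)}) = A_{p(i)} + b  (b added to every column)
M = A_cols + b[:, None]

# sum_i alpha_i M_i = (sum_i alpha_i) b + sum_i alpha_i A_{p(i)} = b + (-b) = 0,
# so the columns of M are linearly dependent and det(M) = 0 (up to roundoff).
dependent_combination = M @ alpha
```

Here `M @ alpha` computes exactly the column combination $\tilde{M}_1$ from the proof, and it vanishes, so `np.linalg.det(M)` is zero to machine precision.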
To answer your question iii): the nowhere density means that if you view $h\in\Sigma$ as a point in $\mathbb{R}^{n(n+2)}$ as above, then every neighborhood of that point contains plenty of points representing generic maps in $\Sigma$. So by changing the defining data of $h$ (namely the matrix $A$ and the vector $b$) by an arbitrarily small amount, you can find a generic map.