Generalising some notation to any dimension


I would like your help to generalise the following piece of notation (written for dimension $3$) to any dimension, in the simplest possible way.

Step 1: Consider the $3$-dimensional random vector $\epsilon\equiv (\epsilon_0, \epsilon_1, \epsilon_2)$ with support equal to the Euclidean space $\mathbb{R}^3$.

Step 2: Consider the set $\mathcal{A}$ of all possible unordered pairs of elements from the set $\{\epsilon_0,\epsilon_1, \epsilon_2\}$, i.e., $$ \mathcal{A}\equiv \Big(\{\epsilon_1,\epsilon_0\}, \{\epsilon_2, \epsilon_0\}, \{\epsilon_1, \epsilon_2\} \Big) $$

Take the difference between the two components of each element in $\mathcal{A}$ and store them in a vector $\Delta \epsilon$, i.e.,

$$ \Delta \epsilon \equiv (\epsilon_1-\epsilon_0, \epsilon_2-\epsilon_0, \epsilon_1-\epsilon_2) $$

Step 3: Write down the support of $\Delta \epsilon$, i.e., $$ \mathcal{S}\equiv \{(a,b,c)\in \mathbb{R}^3 \text{ s.t. } c = a-b\} $$
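To make the three steps concrete, here is a minimal sketch (not from the original post; the variable names are mine) that carries out the construction for $n=3$ and checks the constraint $c = a - b$ defining $\mathcal{S}$:

```python
# A minimal sketch of Steps 1-3 for dimension 3.
# Step 1: a concrete realisation of the random vector eps = (eps_0, eps_1, eps_2).
eps = (0.5, -1.2, 2.0)

# Step 2: fix a representation of A and take the pairwise differences.
a = eps[1] - eps[0]   # eps_1 - eps_0
b = eps[2] - eps[0]   # eps_2 - eps_0
c = eps[1] - eps[2]   # eps_1 - eps_2
delta_eps = (a, b, c)

# Step 3: every point (a, b, c) of the support S satisfies c = a - b.
assert abs(c - (a - b)) < 1e-12
print(delta_eps)
```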


The notation that I'm struggling to generalise to any dimension is the one in Step 2, which in turn is crucial for Step 3. Indeed, there are many ways to represent $\mathcal{A}$: we could set $$ \mathcal{A}\equiv \Big(\{\epsilon_1,\epsilon_0\}, \{\epsilon_2, \epsilon_0\}, \{\epsilon_1, \epsilon_2\} \Big) $$ as above, but also $$ \mathcal{A}\equiv \Big(\{\epsilon_0,\epsilon_1\}, \{\epsilon_2, \epsilon_0\}, \{\epsilon_2, \epsilon_1\} \Big) $$ or $$ \mathcal{A}\equiv \Big(\{\epsilon_2, \epsilon_0\},\{\epsilon_2, \epsilon_1\}, \{\epsilon_1,\epsilon_0\} \Big) $$ and many more. Different representations of $\mathcal{A}$ lead to different definitions of $\Delta \epsilon$ and, in turn, to different definitions of $\mathcal{S}$. Any representation of $\mathcal{A}$ is fine with me, but I want the notation to convey that once the reader has fixed a certain representation of $\mathcal{A}$, the definitions of $\Delta \epsilon$ and $\mathcal{S}$ follow unambiguously.
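One common way to pin down a single representation is to order the index pairs lexicographically. As a sketch (the function name `delta` is mine, not from the post), using Python's standard library:

```python
from itertools import combinations

def delta(eps):
    """Fix the representation of A as the lexicographically ordered
    index pairs (i, j) with i < j, and return the corresponding
    vector of differences eps_j - eps_i, one per pair.
    """
    n = len(eps)
    pairs = list(combinations(range(n), 2))   # e.g. n=3: [(0,1), (0,2), (1,2)]
    return [eps[j] - eps[i] for (i, j) in pairs]

# For n = 3 this yields (eps_1 - eps_0, eps_2 - eps_0, eps_2 - eps_1),
# one of the admissible representations listed above.
print(delta([1.0, 4.0, 9.0]))  # [3.0, 8.0, 5.0]
```

Once this (or any other) ordering convention is fixed, $\Delta\epsilon$ and $\mathcal{S}$ are determined with no further choices.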

Best answer:

For every positive integer $n$ define the $(n-1)\times n$ matrix $D_n$ as $$D_n:=\begin{pmatrix} 1&-1&\hphantom{-}0&\cdots&\hphantom{-}0&\hphantom{-}0\\ 1&\hphantom{-}0&-1&\cdots&\hphantom{-}0&\hphantom{-}0\\ \vdots&\hphantom{-}\vdots&\hphantom{-}\vdots&\ddots&\hphantom{-}\vdots&\hphantom{-}\vdots\\ 1&\hphantom{-}0&\hphantom{-}0&\cdots&-1&\hphantom{-}0\\ 1&\hphantom{-}0&\hphantom{-}0&\cdots&\hphantom{-}0&-1 \end{pmatrix}.$$ Then for any vector $\epsilon=(e_1,\ldots,e_n)\in\Bbb{R}^n$ you have $$D_n\epsilon=(e_1-e_2,e_1-e_3,\ldots,e_1-e_n).$$

Now you want $\Delta_{\epsilon}$ to be, up to signs and ordering, the concatenation of the vectors $$D_n(e_1,\ldots,e_n),\quad D_{n-1}(e_2,\ldots,e_n),\quad D_{n-2}(e_3,\ldots,e_n),\qquad\ldots,\qquad D_2(e_{n-1},e_n).$$ Note that the vectors of the form $(e_{m+1},\ldots,e_n)$ can be written as a product $(0_{(n-m)\times m}|I_{n-m})\epsilon$, where $0_{(n-m)\times m}$ denotes the $(n-m)\times m$ matrix with all zeros, and $I_{n-m}$ the square identity matrix of size $n-m$. This makes the matrix $E_{m,n}:=(0_{(n-m)\times m}|I_{n-m})$ an $(n-m)\times n$ matrix, and so we can write $\Delta_{\epsilon}$ as a product of $\epsilon$ with the block matrix $\Delta$ defined as $$\Delta:=\begin{pmatrix} D_n\\ D_{n-1}E_{1,n}\\ D_{n-2}E_{2,n}\\ \vdots\\ D_2E_{n-2,n} \end{pmatrix} \qquad\text{ so that}\qquad \Delta_{\epsilon}=\Delta\epsilon.$$

The support of $\Delta_{\epsilon}$ is then the image of this block matrix. Note that the matrix $\Delta$ above has $n$ columns and $\binom{n}{2}$ rows, so the ambient dimension of the support (and hence the number of equations needed to define $\mathcal{S}$) grows quadratically as $n$ grows. The fact that the vector $(1,1,\ldots,1)$ is in the kernel of $\Delta$ shows that the codimension is even larger than $\binom{n}{2}-n$; it turns out that the dimension of $\mathcal{S}$ equals $n-1$, and so you will need $\binom{n}{2}-(n-1)=\binom{n-1}{2}$ equations to define $\mathcal{S}$.
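The block construction above can be checked numerically. The following sketch (assuming numpy; the helper names `D`, `E`, `Delta` mirror the notation of the answer but are otherwise mine) builds $\Delta$ for a given $n$ and verifies its shape, the kernel vector $(1,\ldots,1)$, and the rank $n-1$:

```python
import numpy as np
from math import comb

def D(n):
    """(n-1) x n matrix sending (e_1, ..., e_n) to (e_1-e_2, ..., e_1-e_n)."""
    M = np.zeros((n - 1, n))
    M[:, 0] = 1.0
    M[np.arange(n - 1), np.arange(1, n)] = -1.0
    return M

def E(m, n):
    """(n-m) x n matrix (0 | I) selecting the last n-m coordinates."""
    return np.hstack([np.zeros((n - m, m)), np.eye(n - m)])

def Delta(n):
    """Block matrix stacking D_n, D_{n-1} E_{1,n}, ..., D_2 E_{n-2,n}."""
    blocks = [D(n)] + [D(n - m) @ E(m, n) for m in range(1, n - 1)]
    return np.vstack(blocks)

n = 5
M = Delta(n)
assert M.shape == (comb(n, 2), n)         # binom(n, 2) rows, n columns
assert np.allclose(M @ np.ones(n), 0.0)   # (1, ..., 1) lies in the kernel
assert np.linalg.matrix_rank(M) == n - 1  # so dim(S) = n - 1
```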


To make this a bit more tangible (if only for myself!), I'll illustrate the case $n=4$. Then \begin{eqnarray*} D_4&=\begin{pmatrix} 1&-1&\hphantom{-}0&\hphantom{-}0\\ 1&\hphantom{-}0&-1&\hphantom{-}0\\ 1&\hphantom{-}0&\hphantom{-}0&-1 \end{pmatrix}, \qquad &\ \ D_3=\begin{pmatrix} 1&-1&\hphantom{-}0\\ 1&\hphantom{-}0&-1 \end{pmatrix}, \qquad &\ \ D_2=\begin{pmatrix} 1&-1 \end{pmatrix}\\ &\ &E_{1,4}=\begin{pmatrix} 0&1&0&0\\ 0&0&1&0\\ 0&0&0&1 \end{pmatrix},\qquad &E_{2,4}=\begin{pmatrix} 0&0&1&0\\ 0&0&0&1\\ \end{pmatrix}. \end{eqnarray*} Then the block matrix is given by $$\Delta:=\begin{pmatrix} D_4\\ D_3E_{1,4}\\ D_2E_{2,4} \end{pmatrix} =\begin{pmatrix} 1&-1&\hphantom{-}0&\hphantom{-}0\\ 1&\hphantom{-}0&-1&\hphantom{-}0\\ 1&\hphantom{-}0&\hphantom{-}0&-1\\ 0&\hphantom{-}1&-1&\hphantom{-}0\\ 0&\hphantom{-}1&\hphantom{-}0&-1\\ 0&\hphantom{-}0&\hphantom{-}1&-1 \end{pmatrix}.$$ Then an arbitrary vector $\epsilon=(e_1,e_2,e_3,e_4)\in\Bbb{R}^4$ is mapped to $$\Delta\epsilon=(e_1-e_2,e_1-e_3,e_1-e_4,e_2-e_3,e_2-e_4,e_3-e_4).$$ The image of $\Delta$, which is the same as the support $\mathcal{S}$, is then the subspace of $\Bbb{R}^6$ defined by $$x_1-x_2+x_4=0,\qquad x_1-x_3+x_5=0,\qquad x_2-x_3+x_6=0.$$
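The $n=4$ example can be verified numerically as well. A quick sketch (assuming numpy; the test vector is an arbitrary choice of mine) hard-codes the $6\times 4$ matrix above and checks both the image vector and the three defining equations of $\mathcal{S}$:

```python
import numpy as np

# The 6 x 4 block matrix Delta for n = 4, copied from the example above.
Delta4 = np.array([
    [1, -1,  0,  0],
    [1,  0, -1,  0],
    [1,  0,  0, -1],
    [0,  1, -1,  0],
    [0,  1,  0, -1],
    [0,  0,  1, -1],
], dtype=float)

e = np.array([2.0, 3.0, 5.0, 7.0])   # arbitrary test vector
x = Delta4 @ e
# (e1-e2, e1-e3, e1-e4, e2-e3, e2-e4, e3-e4)
assert np.allclose(x, [-1, -3, -5, -2, -4, -2])

# The image is cut out by binom(3, 2) = 3 linear equations:
assert np.isclose(x[0] - x[1] + x[3], 0.0)   # x1 - x2 + x4 = 0
assert np.isclose(x[0] - x[2] + x[4], 0.0)   # x1 - x3 + x5 = 0
assert np.isclose(x[1] - x[2] + x[5], 0.0)   # x2 - x3 + x6 = 0
```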