Can you get a $\mathbb{C}$-basis of $\mathbb{C}^{n}$ from an $\mathbb{R}$-basis by picking one vector out of each of $n$ pairs?


Let $T = \{v_1, \dots, v_{2n}\} \subseteq \mathbb{C}^n$ be an $\mathbb{R}$-linearly independent set of vectors. Now consider the $2^n$ subsets $S \subseteq T$ of size $n$ which contain exactly one of $v_1$ and $v_{n+1}$, exactly one of $v_2$ and $v_{n+2}$, and so on.
Question: Is one of these subsets necessarily a $\mathbb{C}$-basis?

This is equivalent to asking whether there exist $t_1, \dots, t_n \in \mathbb{Z}$ such that the vectors $v_1 + t_1 v_{n+1}, \: v_2 + t_2 v_{n+2}, \: \dots, \: v_n + t_n v_{2n}$ are $\mathbb{C}$-linearly independent.

Proof. Suppose such $t_i$ exist. Then $(v_1 + t_1 v_{n+1}) \wedge \dots \wedge (v_n + t_n v_{2n})$, viewed as a polynomial in $\underline{t} = (t_1, \dots, t_n)$, does not vanish identically, so one of its coefficients must be nonzero. But each coefficient is the wedge product of one of the sets $S$ above.

Conversely, suppose one of the subsets $S$ is $\mathbb{C}$-linearly independent, say the one obtained by replacing $v_j$ with $v_{n+j}$ for each $j$ in some $A \subseteq \{1, \dots, n\}$. Set $t_j = 0$ for $j \notin A$ and pick $t_j$ very large for $j \in A$: after rescaling, $v_j + t_j v_{n+j} = t_j \left( v_{n+j} + t_j^{-1} v_j \right)$ spans the same line as a small perturbation of $v_{n+j}$, and since linear independence is an open condition, the vectors $v_j + t_j v_{n+j}$ are linearly independent for all sufficiently large $t_j$. $\square$
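
For concreteness, here is a quick numerical sanity check of the statement (independent of the proof above): it brute-forces the $2^n$ admissible subsets and tests $\mathbb{C}$-independence with a determinant. This is only an illustrative sketch; numpy and the random test data are my own choices, and random vectors are generic, so typically every choice already works — the degenerate configurations are where the question has actual content.

```python
import numpy as np
from itertools import product

def is_C_independent(vectors, tol=1e-9):
    """n vectors in C^n are C-linearly independent iff their determinant is nonzero."""
    return abs(np.linalg.det(np.column_stack(vectors))) > tol

def find_basis_choice(v, n):
    """Try all 2^n subsets taking exactly one of v[i], v[n+i] from each pair."""
    for choice in product([0, 1], repeat=n):
        subset = [v[i + n * c] for i, c in enumerate(choice)]
        if is_C_independent(subset):
            return choice  # c = 0 keeps v_i, c = 1 takes v_{n+i}
    return None  # the answers below show this never happens

n = 4
rng = np.random.default_rng(0)
v = list(rng.standard_normal((2 * n, n)) + 1j * rng.standard_normal((2 * n, n)))
print(find_basis_choice(v, n))  # some choice is found
```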

Best answer:

$\def\Span{\text{span}}\def\CC{\mathbb{C}}\def\RR{\mathbb{R}}$The answer is yes! This uses a nontrivial result of Edmonds, called the Matroid Intersection Theorem. Since I don't want to define a matroid, I'll just give the version of the theorem for realizable matroids, which can be stated in terms of configurations in a vector space.

To this end, let $V$ and $W$ be two vector spaces and let $v_1$, ..., $v_m$ and $w_1$, ..., $w_m$ be lists of vectors in $V$ and $W$, both indexed by the set $[m] := \{ 1,2,\ldots, m \}$. The Matroid Intersection Problem is to find $I \subseteq [m]$ as large as possible such that $\{ v_i \}_{i \in I}$ and $\{ w_i \}_{i \in I}$ are both linearly independent.

We can obtain an upper bound for the Matroid Intersection Problem as follows: Let $[m] = A \sqcup B$ be any partition of $[m]$ into two complementary sets. Then, for any $I$ such that $\{ v_i \}_{i \in I}$ and $\{ w_i \}_{i \in I}$ are both linearly independent, we have $$\#(I) = \#(I \cap A) + \#(I \cap B) = \dim \Span_{i \in I \cap A}(v_i) + \dim \Span_{i \in I \cap B}(w_i)$$ $$\leq \dim \Span_{a \in A}(v_a) + \dim \Span_{b \in B}(w_b).$$

The Matroid Intersection Theorem is that, for any $v_1$, ..., $v_m$ and $w_1$, ..., $w_m$, we can find a subset $I \subseteq [m]$ and a partition $[m] = A \sqcup B$ such that $\{ v_i \}_{i \in I}$ and $\{ w_i \}_{i \in I}$ are both linearly independent and $$\#(I) = \dim \Span_{a \in A}(v_a) + \dim \Span_{b \in B}(w_b).$$ In particular, this $I$ will be optimal for the Matroid Intersection Problem.
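
If you want to experiment with the theorem, here is a brute-force numerical check of the min–max equality on a small random configuration. This is a sketch under my own conventions (numpy, real vectors, tolerance-based rank); it enumerates all subsets $I$ and all partitions $A \sqcup B$, so it is only feasible for small $m$.

```python
import numpy as np
from itertools import combinations

def rank(vectors, tol=1e-9):
    # Dimension of the span of a (possibly empty) list of vectors.
    return 0 if not vectors else int(np.linalg.matrix_rank(np.array(vectors), tol=tol))

def max_common_independent(v, w, m):
    # Largest #(I) with {v_i : i in I} and {w_i : i in I} both independent.
    return max(len(I)
               for r in range(m + 1) for I in combinations(range(m), r)
               if rank([v[i] for i in I]) == len(I) == rank([w[i] for i in I]))

def min_partition_bound(v, w, m):
    # Min over partitions [m] = A ⊔ B of dim span(v_A) + dim span(w_B).
    return min(rank([v[i] for i in range(m) if mask >> i & 1])
               + rank([w[i] for i in range(m) if not mask >> i & 1])
               for mask in range(2 ** m))

m, d = 5, 3
rng = np.random.default_rng(1)
v = list(rng.standard_normal((m, d)))
w = list(rng.standard_normal((m, d)))
assert max_common_independent(v, w, m) == min_partition_bound(v, w, m)  # = 3 here
```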


Now, let's address the original problem. Let $v_1$, $v_2$, ..., $v_{2n}$ be the original vectors in $\CC^n$. Let $w_1$, $w_2$, ..., $w_{2n}$ be the vectors $e_1$, $e_2$, ..., $e_n$, $\sqrt{-1} e_1$, $\sqrt{-1} e_2$, ..., $\sqrt{-1} e_n$, also in $\CC^n$. (Here $e_i$ is the $i$-th standard basis vector.)

We want to show that we can always find $I$ with $\#(I)=n$ such that $\{ v_i \}_{i \in I}$ and $\{ w_i \}_{i \in I}$ are both linearly independent.

Suppose that we can't. By the Matroid Intersection Theorem, there is then some partition $[2n] = A \sqcup B$ such that $$\dim \Span^{\CC}_{a \in A}(v_a) + \dim \Span^{\CC}_{b \in B}(w_b) < n.$$

Since the $v_a$ are linearly independent over $\RR$, we have $$\dim \Span^{\CC}_{a \in A}(v_a) \geq \tfrac{1}{2} \dim \Span^{\RR}_{a \in A}(v_a) = \tfrac{\#(A)}{2}.$$ Similarly, $$\dim \Span^{\CC}_{b \in B}(w_b) \geq \tfrac{\#(B)}{2}.$$ So $$\dim \Span^{\CC}_{a \in A}(v_a) + \dim \Span^{\CC}_{b \in B}(w_b) \geq \tfrac{\#(A)}{2} + \tfrac{\#(B)}{2} = \tfrac{2n}{2} = n.$$ This contradiction proves the result. $\square$
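
The key inequality $\dim \Span^{\CC} \geq \tfrac{1}{2} \dim \Span^{\RR}$ used above can be checked numerically by viewing a vector in $\CC^n$ as the vector in $\RR^{2n}$ made of its real and imaginary parts. A small sketch (numpy assumed; the vectors below are the $w_i$ from this answer, for which the inequality is an equality):

```python
import numpy as np

def rank_C(vectors):
    # Dimension of the C-span of vectors in C^n.
    return np.linalg.matrix_rank(np.array(vectors))

def rank_R(vectors):
    # Dimension of the R-span: realify C^n as R^{2n}.
    M = np.array(vectors)
    return np.linalg.matrix_rank(np.hstack([M.real, M.imag]))

n = 3
e = np.eye(n, dtype=complex)
w = [e[i] for i in range(n)] + [1j * e[i] for i in range(n)]  # e_1..e_n, i e_1..i e_n

print(rank_R(w), rank_C(w))  # 6 3: the C-dimension is exactly half the R-dimension
for k in range(1, 2 * n + 1):  # dim_C span >= (1/2) dim_R span on every prefix
    assert rank_C(w[:k]) >= rank_R(w[:k]) / 2
```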

Second answer:

We may prove an equivalent proposition.

Proposition. Let $1\le m\le n$. Suppose that $$ A=\left[\begin{array}{cc|cc|cc|cc|cc} v_1&w_1 & v_2&w_2 & \cdots&\cdots & v_{m-1}&w_{m-1} & v_m&w_m \end{array}\right]\in\mathbb C^{n\times2m} $$ has $\mathbb R$-linearly independent columns. Then it is always possible to select some $u_i\in\{v_i,w_i\}$ for each $i$, such that the $m$ selected vectors are $\mathbb C$-linearly independent.

To prove this, we need a technical lemma.

Lemma. Let $L\ge l\ge1$, let $s_l, s_{l+1},\ldots, s_L$ be integers with $s_i>i$ for every $i$, and let $\rho$ be any integer. Then we may pick $r_i\in\{s_i,i\}$ for each $i\in\{l,\ldots,L\}$ such that $r_l,\ldots,r_L,\rho$ are distinct.

Proof of the lemma. We use mathematical induction on $L-l$. The base case $L-l=0$ is trivial. In the inductive step, suppose $L-l\ge1$. There are two possibilities:

  1. $\rho\ne l$. Let $r_l=l$. By induction hypothesis, for each $i\in\{l+1,\,\ldots,L\}$, we may pick $r_i\in\{s_i,i\}$ such that $r_{l+1},\ldots,r_L,\rho$ are distinct. Hence $r_l\,(=l),\,r_{l+1},\ldots,r_L,\rho$ are also distinct.
  2. $\rho=l$. Let $\rho'=r_l=s_l$. By induction hypothesis, for each $i\in\{l+1,\,\ldots,L\}$, we may pick $r_i\in\{s_i,i\}$ such that $r_{l+1},\ldots,r_L,\rho'$ are distinct. Hence $r_l\,(=\rho'=s_l),\,r_{l+1},\ldots,r_L,\rho\,(=l)$ are also distinct.
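
The induction in this proof translates directly into a short recursive procedure. Below is a sketch in Python (the dictionary encoding of the $s_i$ and the example values are my own choices):

```python
def pick(s, l, L, rho):
    """Return {i: r_i} with r_i in {s[i], i} for i = l..L such that
    r_l, ..., r_L, rho are pairwise distinct.  Requires s[i] > i for all i."""
    if l > L:                      # empty range: nothing to pick
        return {}
    if rho != l:                   # case 1: take r_l = l and keep the same rho
        r = pick(s, l + 1, L, rho)
        r[l] = l
    else:                          # case 2: take r_l = s_l and recurse with rho' = s_l
        r = pick(s, l + 1, L, s[l])
        r[l] = s[l]
    return r

# Example with l = 2, L = 5 and rho = 2, which forces r_2 = s_2:
s = {2: 5, 3: 6, 4: 6, 5: 7}
r = pick(s, 2, 5, rho=2)                 # {2: 5, 3: 3, 4: 4, 5: 7}
assert len(set(r.values()) | {2}) == 5   # r_2, ..., r_5, rho are distinct
```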

Proof of the proposition. We use mathematical induction on $m$, the number of pairs of vectors. The base case $m=1$ is trivial. In the inductive case where $m\ge2$, we may assume, by the induction hypothesis and by swapping some $v_i$s with their counterparts $w_i$ if necessary, that $v_1,v_2,\ldots,v_{m-1}$ are already $\mathbb C$-linearly independent. Therefore, by a change of basis, we may assume that $v_i=e_i$ for all $i\le m-1$. Thus $$ A=\left[\begin{array}{cc|cc|cc|cc|cc} e_1&w_1 & e_2&w_2 & \cdots&\cdots & e_{m-1}&w_{m-1} & x&y \end{array}\right] $$ where $x=v_m$ and $y=w_m$. We may now select the $m$ vectors we want using the following recursive algorithm, which iterates from $k=m-1$ back to $k=1$.

  • Initialisation. Set $k=m-1$. Denote by $x(m:n)$ the subvector $(x^m,x^{m+1},\ldots,x^n)^T$ (where $x^i$ means the $i$-th entry of $x$). If $x(m:n)\ne0$, then $x$ has a nonzero entry below the supports of $e_1,\ldots,e_{m-1}$, so $\{e_1,e_2,\ldots,e_{m-1},x\}$ is $\mathbb C$-linearly independent and we stop. Otherwise, apply the same test to $y(m:n)$. If both subvectors are zero, go to step 1.
  • Step 1. Now $x((k+1):n)=y((k+1):n)=0$. Find an index $i\le k$ such that $w_i((k+1):n)\ne0$. Such an $i$ must exist: otherwise, for $\mathcal I=\{1,2,\ldots,k\}$ and $\mathcal J=\{1,\,2,\,\ldots,\,2k,\,2m-1,\,2m\}$, the submatrix $A(\{1,2,\ldots,n\}-\mathcal I,\,\mathcal J)$ would be zero and $$ \begin{aligned} 2k+2 &=|\mathcal J|\\ &=\operatorname{rank}_{\mathbb R}A(\,:\,,\mathcal J)\\ &=\operatorname{rank}_{\mathbb R}A(\mathcal I,\mathcal J)\\ &\le2\operatorname{rank}_{\mathbb C}A(\mathcal I,\mathcal J)\\ &\le2|\mathcal I|\\ &=2k, \end{aligned} $$ which is a contradiction. By relabelling the first $k$ pairs of columns and the first $k$ rows of $A$ if necessary, we may assume that $w_k((k+1):n)\ne0$. Let $s_k$ be the row index of the lowest nonzero entry of $w_k$; hence $s_k>k$.
  • Step 2. If $x^k=y^k=0$, set $k\leftarrow k-1$ and go to step 1. Otherwise, by swapping $x$ and $y$ if necessary, we may assume that $x^k\ne0$. Then the row index of the lowest nonzero entry of $x$ is $\rho=k$. From step 1 of this and the previous iterations, we have $s_i>i$ for each $i\in\{k,\ldots,m-1\}$. Therefore, by our lemma, we may pick $r_i\in\{s_i,i\}$ such that $r_k,\ldots,r_{m-1},\rho$ are distinct. Now for every $i\le k-1$, set $u_i=e_i$. For $k\le i\le m-1$, set $u_i=e_i$ if $r_i=i$, or $u_i=w_i$ otherwise. Also set $u_m=x$. Hence the integers $1,2,\ldots,k-1,r_k,\ldots,r_{m-1},\rho$ are distinct and they are the row indices of the lowest nonzero entries of $u_1,u_2,\ldots,u_m$ respectively. It follows that $\{u_1,u_2,\ldots,u_m\}$ is $\mathbb C$-linearly independent (see the sketch after the closing remark below) and we stop.

Note that the algorithm will always stop (and produce a desired selection of $m$ vectors) before reaching step 1 for iteration $k=1$. (If $m=2$, this means the algorithm will always stop at the initialisation phase.) Otherwise we would have $x(2:n)=y(2:n)=0$, meaning that $x$ and $y$ are (complex) scalar multiples of $e_1$. Yet this is impossible because $\{e_1,x,y\}$ is $\mathbb R$-linearly independent.
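
The final step of the algorithm rests on a standard fact: vectors whose lowest (bottom-most) nonzero entries occupy pairwise distinct rows are $\mathbb C$-linearly independent, since in any vanishing linear combination the deepest occupied row forces one coefficient to be zero, and one can repeat. A small numerical illustration (numpy assumed; the example vectors are my own):

```python
import numpy as np

def lowest_nonzero_row(u, tol=1e-12):
    """0-based row index of the bottom-most nonzero entry of u."""
    nz = np.nonzero(np.abs(u) > tol)[0]
    return int(nz[-1]) if nz.size else None

# A "staircase" family: lowest nonzero entries in rows 1, 2 and 3 respectively.
u1 = np.array([1, 2, 0, 0], dtype=complex)
u2 = np.array([3, 0, 1j, 0], dtype=complex)
u3 = np.array([0, 5, 1, 2], dtype=complex)

assert len({lowest_nonzero_row(u) for u in (u1, u2, u3)}) == 3  # distinct rows
U = np.column_stack([u1, u2, u3])
assert np.linalg.matrix_rank(U) == 3  # hence C-linearly independent
```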