The theorem below is from the book *Linear Algebra for the Young Mathematician* by Steven H. Weintraub, Chap. $1$, pages $16$–$17$.
I am unable to understand some points in it, stated at the end.
Theorem $1.4.5$ (Basic counting theorem). A homogeneous system of $m$ linear
equations in $n$ unknowns with $n \gt m$ always has a nontrivial solution.
Proof.
It is convenient to introduce the following language. We will say that a
linear equation $c_1 x_1 +c_2 x_2 +···+c_n x_n = 0$ involves $x_i$ if the coefficient
$c_i$ of $x_i$ in this equation is nonzero.
We prove the theorem by induction on the number of equations $m$.
We begin with the case $m = 1$. So suppose we have a single homogeneous
equation in $n \gt 1$ unknowns
$a_{11} x_1 + a_{12} x_2 + ··· + a_{1n} x_n = 0.$
There are two possibilities: either this equation involves $x_1$ or it doesn’t.
If it doesn’t involve $x_1$, it has the nontrivial solution
$x_1 = 1, x_2 = x_3 = \cdots = x_n = 0.$

If it does involve $x_1$, it has the nontrivial solution
$x_1 = -\frac{a_{12}}{a_{11}}, x_2 = 1, x_3 = \cdots = x_n = 0.$
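(Not from the book, just a numerical sanity check: the base-case construction above can be coded up directly. The coefficient list `a` below is made up for illustration.)

```python
# Base case m = 1: a single homogeneous equation a[0]*x_1 + ... + a[n-1]*x_n = 0.
def base_case_solution(a):
    """Return a nontrivial solution of sum(a[i] * x[i]) == 0, where n = len(a) > 1."""
    n = len(a)
    if a[0] == 0:  # the equation does not involve x_1
        return [1] + [0] * (n - 1)
    # the equation involves x_1: set x_2 = 1 and solve for x_1
    return [-a[1] / a[0], 1] + [0] * (n - 2)

# Made-up coefficients (an assumption for illustration):
a = [2.0, 6.0, -1.0]
x = base_case_solution(a)
print(x)                                      # [-3.0, 1, 0]
print(sum(ai * xi for ai, xi in zip(a, x)))   # 0.0, so the equation is satisfied
```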
Now suppose the theorem is true for all homogeneous systems of $m$ equations
in $n$ unknowns with $n \gt m$, and consider a homogeneous system of $m+1$ equations
in $n$ unknowns with $n \gt m + 1$.
For simplicity of notation, set $\overline{m} = m + 1$. Thus
we have a system
$${(*)}\begin{cases}
a_{11} x_1 + \cdots + a_{1n} x_n = 0\\
\quad\vdots\\
a_{\overline{m}1} x_1 + \cdots + a_{\overline{m}n} x_n = 0.
\end{cases}
$$
There are two possibilities: either no equation involves $x_1$ or at least one equation does.
If no equation involves $x_1$, this system has the nontrivial solution
$x_1 = 1, x_2 = · · · = x_n = 0$.
Suppose at least one equation does involve $x_1$. Choose any such equation,
say equation $i$. Now for each $j\ne i$, add $-\frac{a_{j1}}{a_{i1}}$ times equation $i$ to
equation $j$. If equation $j$ does not involve $x_1$, this is doing nothing, as then
$a_{j1} = 0$ so $-\frac{a_{j1}}{a_{i1}} = 0$. But if equation $j$ does involve $x_1$, this is
certainly doing something.
To be precise, in the resulting equation
$a^{′}_{j1}x_1 + a^{′}_{j2}x_2 + · · · + a^{′}_{jn}x_n = 0$
we have
$$a^{′}_{j1} = a_{j1} + (-\frac{a_{j1}}{a_{i1}})a_{i1} = 0$$
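(A small numerical illustration of this step, not from the book; the two equations below are made-up coefficient rows.)

```python
def eliminate_x1(eq_j, eq_i):
    """Add -(a_j1 / a_i1) times equation i to equation j,
    which zeroes out the x_1 coefficient of equation j."""
    factor = -eq_j[0] / eq_i[0]  # requires that equation i involves x_1 (eq_i[0] != 0)
    return [cj + factor * ci for cj, ci in zip(eq_j, eq_i)]

eq_i = [2.0, 1.0, 4.0]   # equation i, which involves x_1
eq_j = [6.0, 5.0, 2.0]   # equation j
print(eliminate_x1(eq_j, eq_i))  # [0.0, 2.0, -10.0]: x_1 coefficient is now 0
```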
Thus in the resulting system, which we call $(∗^{′})$, none of the equations involves $x_1$, except for equation $i$.
In other words, system $(∗^{′})$ is a system of the form
$${(*^{′})}\begin{cases}
a^{′}_{12} x_2 + \cdots + a^{′}_{1n} x_n = 0\\
a^{′}_{22} x_2 + \cdots + a^{′}_{2n} x_n = 0\\
\quad\vdots\\
a_{i1} x_1 + a_{i2} x_2 + \cdots + a_{in} x_n = 0\\
\quad\vdots\\
a^{′}_{\overline{m}2} x_2 + \cdots + a^{′}_{\overline{m}n} x_n = 0.
\end{cases}
$$
But now notice that, omitting equation $i$, we have a homogeneous system of
$\overline{m} - 1$ equations in the unknowns $x_2, . . . , x_n$, i.e., a homogeneous system of $m$
equations in the $n − 1$ unknowns $x_2, . . . , x_n$. Since $n \gt m + 1$, $n − 1 \gt m$, so by the
inductive hypothesis this system has a nontrivial solution
$$x_2 = s_2, \ldots, x_n = s_n.$$
Then we see that the system $(∗^{′})$ has a nontrivial solution
$x_1 = s_1 = \left(-\frac{1}{a_{i1}}\right)(a_{i2}s_2 + a_{i3}s_3 + \cdots + a_{in}s_n), \quad x_2 = s_2, \ldots, x_n = s_n.$
(This is nontrivial since not all of $s_2, . . . , s_n$ are zero, so, whether or not $s_1 = 0$,
not all of $s_1, . . . , s_n$ are zero.)
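(Again as a sanity check, not from the book: the back-substitution for $s_1$ can be computed and verified directly. The data below is made up.)

```python
def back_substitute(eq_i, s_rest):
    """Given equation i (which involves x_1, so eq_i[0] != 0) and a solution
    s_2, ..., s_n of the reduced system, compute
    s_1 = -(1/a_i1) * (a_i2*s_2 + ... + a_in*s_n)."""
    return -sum(c * s for c, s in zip(eq_i[1:], s_rest)) / eq_i[0]

# Made-up data for illustration:
eq_i = [2.0, 3.0, -1.0]   # a_i1 x_1 + a_i2 x_2 + a_i3 x_3 = 0
s_rest = [1.0, 5.0]       # s_2, s_3 from the inductive hypothesis
s1 = back_substitute(eq_i, s_rest)
print(s1)  # 1.0
# the full solution (s1, 1.0, 5.0) satisfies equation i:
print(eq_i[0] * s1 + eq_i[1] * 1.0 + eq_i[2] * 5.0)  # 0.0
```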
But we saw in Lemma 1.4.1 that adding a multiple of one equation to another
does not change the solutions. We obtained $(∗^{′})$ from $(∗)$ by a sequence of these
steps. Thus $(∗)$ and $(∗^{′})$ have the same solutions. In particular, our nontrivial
solution to $(∗^{′})$ is a nontrivial solution to $(∗)$.
Thus $(∗)$ has a nontrivial solution, completing the inductive step.
Hence, by induction, we conclude the theorem is true for every positive integer $m$.
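(The inductive argument is constructive, so it translates directly into a recursive algorithm. The sketch below is my own, not from the book; it combines the base case, the elimination step, and the back-substitution from the proof.)

```python
def nontrivial_solution(A):
    """Find a nontrivial solution of the homogeneous system A x = 0,
    where A is a list of m coefficient rows of length n with n > m.
    Mirrors the induction in the proof of Theorem 1.4.5."""
    m, n = len(A), len(A[0])
    assert n > m
    # Find an equation that involves x_1.
    i = next((k for k, row in enumerate(A) if row[0] != 0), None)
    if i is None:
        # No equation involves x_1: take x_1 = 1 and the rest 0.
        return [1.0] + [0.0] * (n - 1)
    if m == 1:
        # Base case: x_1 = -a_12/a_11, x_2 = 1, the rest 0.
        row = A[0]
        return [-row[1] / row[0], 1.0] + [0.0] * (n - 2)
    # Eliminate x_1 from every other equation, then drop equation i and the
    # x_1 column, leaving m - 1 equations in the n - 1 unknowns x_2, ..., x_n.
    reduced = []
    for j, row in enumerate(A):
        if j == i:
            continue
        f = -row[0] / A[i][0]
        reduced.append([c + f * ci for c, ci in zip(row[1:], A[i][1:])])
    s_rest = nontrivial_solution(reduced)   # inductive hypothesis
    # Back-substitute into equation i to get s_1.
    s1 = -sum(c * s for c, s in zip(A[i][1:], s_rest)) / A[i][0]
    return [s1] + s_rest

# Made-up 2x3 example: 2 equations, 3 unknowns.
A = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0]]
x = nontrivial_solution(A)
print(x)
print([sum(a * xi for a, xi in zip(row, x)) for row in A])  # residuals: [0.0, 0.0]
```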
Doubts:
My doubts are based on the single homogeneous equation in $n \gt 1$ unknowns $x_i$, $i \in \lbrace 1, \ldots, n\rbrace$:
$$\begin{bmatrix} c_1&c_2&\ldots&c_n \end{bmatrix}\begin{bmatrix} x_1\\ x_2\\ \vdots\\ x_n \end{bmatrix}=0$$
I am confused about which representation to choose, as the one above, with the $c_i$ in a row vector and the $x_i$ in a column vector, does not seem good enough.
Also, since only a single equation is involved, the proof above seems to leave limited scope for back-substitution.
A)
1. If it doesn’t involve $x_1$ it has the nontrivial solution
$\ \ \ x_1 = 1, x_2 = x_3 = ··· = x_n = 0.$
As there is only one equation involved, only one of the unknowns $x_i$ can be solved for.
I am unable to understand the logic: if the equation does not involve $x_1$, why does the given solution set $x_i = 0$ for $i \in \lbrace 2, \ldots, n\rbrace$ and $x_1 = 1$?
That is, why must $x_1 \ne 0$, if $x_1$ is not involved?
B)
2. If it does involve $x_1$ it has the nontrivial solution
$x_1 = −\frac{a_{12}}{a_{11}} , x_2 = 1, x_3 = ··· = x_n = 0.$
I am unable to understand how only $x_1$ and $x_2$ are used to find the answer by back-substitution in the row-echelon form of the system. Is it not possible that, instead of $x_2$, one of the other unknowns $x_3, \ldots, x_n$ takes the nonzero value?
Your whole goal is to construct a nontrivial solution, and different people can construct different nontrivial solutions.
Your book decided to split into two cases according to whether the equation involves $x_1$, i.e., whether $c_1 = 0$ or $c_1 \ne 0$. If you prefer, you can instead split into cases on $x_2$ rather than $x_1$.
$$\begin{bmatrix} 0 &c_2&\ldots&c_n \end{bmatrix}\begin{bmatrix} x_1\\ x_2\\ \vdots\\ x_n \end{bmatrix}=0$$
If we let $x_2=\ldots=x_n=0$, we have
$$\begin{bmatrix} 0 &c_2&\ldots&c_n \end{bmatrix}\begin{bmatrix} x_1\\ 0\\ \vdots\\ 0 \end{bmatrix}=0$$ which is true regardless of which $x_1$ we pick since $0\cdot x_1 = 0$.
We intentionally pick $x_1 \ne 0$ so that we get our nontrivial solution.
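(To make this concrete, here is a quick check with made-up coefficients: when $c_1 = 0$, the solution $(x_1, 0, \ldots, 0)$ works for *every* choice of $x_1$, so picking $x_1 = 1$ gives a nontrivial one.)

```python
# If c_1 = 0, then (x_1, 0, ..., 0) solves c . x = 0 for any x_1 at all.
# Coefficients are made up for illustration.
c = [0.0, 7.0, -2.0]
for x1 in [1.0, 3.5, -100.0]:
    x = [x1, 0.0, 0.0]
    print(sum(ci * xi for ci, xi in zip(c, x)))  # 0.0 each time
```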