so I'm trying to understand this proof, I follow up to
"To obtain a solution x"
Beyond this statement I am having difficulty following the logic, for instance where is j defined? Thanks for any help!
OK, so I've come to a conclusion.
Let $x_i$ take any value when the $i$th column is a zero column, and then solve for the variables corresponding to the nonzero columns.
If $j$ indexes a column with a leading $1$, then (with the free variables set to $0$) clearly $x_j = 0$; it is the presence of columns without leading $1$s that allows $x$ to be non-zero. Thus $x$ is forced to be zero if and only if there is a leading $1$ in every column.
It's not a clearly-worded proof, but the essence is this: given a row-reduced matrix with a zero column augmented, we can freely and independently choose values for the variables corresponding to columns without leading $1$s, and the rest of the variables are then determined. Try this wording instead:
Consider a system of linear equations, corresponding to an $n \times m$ row-reduced matrix with a $0$ column augmented. Suppose $x_i$ is the $i$th variable of the system, corresponding to column $i$ of the matrix, for $i = 1, \ldots, m$. Let $k$ be such that the $k$th column of the matrix is the leftmost column without a leading $1$.
We define a non-zero solution as follows: let $x_i = 0$ for $i > k$, and let $x_k = 1$. Note that the first $k - 1$ rows of the matrix have leading $1$s in the first, second, third, ..., $(k - 1)$th columns, as the $k$th is the leftmost column without a leading $1$. The $(k - 1)$th row must therefore correspond to an equation of the form $$x_{k - 1} + \alpha_k x_k + \alpha_{k + 1} x_{k + 1} + \ldots + \alpha_m x_m = 0.$$ But $x_{k+1} = x_{k + 2} = \ldots = x_m = 0$ according to our construction, and $x_k = 1$, so we set $x_{k - 1} = -\alpha_k$.
Then, the $(k - 2)$th row is of the form $$x_{k - 2} + \beta_{k - 1} x_{k - 1} + \beta_k x_k + \beta_{k + 1} x_{k + 1} + \ldots + \beta_m x_m = 0,$$ and substituting $x_{k-1} = -\alpha_k$, $x_k = 1$, and $x_i = 0$ for $i > k$, we set $x_{k-2} = \beta_{k-1} \alpha_k - \beta_k$.
The actual value is unimportant; the important thing is that we can always set a value for every $x_i$ with $1 \le i < k$, in such a way that every equation corresponding to a row above the $k$th row is satisfied.
The remaining rows (the $k$th and below) are easy: their leading $1$s, if any, lie to the right of column $k$, so their equations involve only variables $x_i$ with $i > k$, and they are satisfied by $x_i = 0$.
The point is, we have a solution where $x_k = 1$, which is a distinct solution from the trivial solution. In other words, the vectors $v_1, \ldots, v_m$ are linearly dependent, as there are non-trivial linear combinations of these vectors that produce the $0$ vector.
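If it helps to see the construction run, here is a minimal sketch in Python with NumPy. The matrix `A` is a hypothetical $3 \times 4$ row-reduced example (not from the proof), columns are $0$-indexed rather than $1$-indexed, and the back-substitution mirrors the steps above: set $x_k = 1$, zero out the later variables, then solve each pivot row from the bottom up.

```python
import numpy as np

# A hypothetical row-reduced matrix; column index 2 is the
# leftmost column without a leading 1.
A = np.array([
    [1.0, 0.0, 2.0, 0.0],
    [0.0, 1.0, 3.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])
m = A.shape[1]

# Columns containing a leading 1 (the pivot of each non-zero row).
pivot_cols = [np.nonzero(row)[0][0] for row in A if row.any()]

# k = leftmost column without a leading 1 (0-indexed here).
k = next(j for j in range(m) if j not in pivot_cols)

# Construct the non-trivial solution: x_k = 1, x_i = 0 for i > k,
# then back-substitute for the pivot variables to the left of k.
x = np.zeros(m)
x[k] = 1.0
for r, p in reversed(list(enumerate(pivot_cols))):
    if p < k:
        # Row r reads x_p + (sum of alpha_j x_j) = 0, and x[p] is
        # still 0 here, so -A[r] @ x is exactly the required value.
        x[p] = -A[r] @ x

print(x)      # a non-zero vector, e.g. [-2. -3.  1.  0.] for this A
print(A @ x)  # the zero vector: A x = 0
```

Running this prints a vector with a $1$ in position $k$ satisfying $Ax = 0$, i.e. a non-trivial dependence among the columns, exactly as the proof promises.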