A set of $n$ vectors $A_1,\dots, A_n$ in $n$-space is independent iff $d(A_1,\dots, A_n) \ne 0$.


I found a proof of this theorem in the book Multivariate Calculus, Vol. 2 by T. M. Apostol, but in that proof there is one assertion I can't understand. I paste the proof here, with that line in bold:

Theorem 3.6 (P-83). A set of $n$ vectors $A_1,\dots, A_n$ in $n$-space is independent if and only if $d(A_1,\dots, A_n) \ne 0$.

Proof. (Only one direction.) Assume that $A_1,\dots, A_n$ are independent. Let $V_n$ denote the linear space of $n$-tuples of scalars. Since $A_1,\dots, A_n$ are $n$ independent elements in an $n$-dimensional space, they form a basis for $V_n$. Therefore there is a linear transformation $T:V_n \to V_n$ which maps these $n$ vectors onto the unit coordinate vectors, $$T(A_k)=I_k ~ \text{for}~ k=1,\dots,n.$$ Therefore there is an $n\times n$ matrix $B$ such that $$\mathbf{A_k B=I_k} ~ \text{for} ~k=1,\dots, n.$$ …and the proof continues.

But I cannot see why such a matrix $B$ exists satisfying that condition.

Please help me clarify the existence of such a $B$. Thank you.


There are 2 answers below.

Best answer:

I've had a look at this chapter of Apostol's book, and I have to agree that, unless I'm missing something, this assertion is poorly justified on the basis of the theory developed to that point.

Apostol states the correspondence between linear mappings and matrices but, as far as I can tell, fails to spell out that the effect of the linear mapping on coordinates is given by matrix multiplication. This fact should have been noted explicitly in the later section that defines matrix multiplication.

There are two essential points to understand to justify this part of the proof of Theorem 3.6.

First, in the statement of Theorem 2.13, if we let $X = (x_1, \dots, x_n)$ and $Y = (y_1, \dots, y_m)$ be the coordinate representations, as column vectors, of $x$ and $y$ in the given bases, then equation $(2.13)$ can be written in the matrix form $Y = CX$, where $C = (t_{ik})$ is the matrix of $T$ relative to the given bases.

If we instead consider $X$ and $Y$ as row vectors, then we must write $Y = X C^t$. Here, $C^t$ denotes the transpose of $C$, a concept defined later on page 91. Since Apostol doesn't develop the properties of transposes, it's best to check the equivalence of this matrix equality with equation $(2.13)$ directly.
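This equivalence of the column and row conventions can be checked directly with numbers. A minimal Python sketch (the matrix $C$ and vector $X$ below are made-up values for illustration, not from the book):

```python
# Made-up example: a 2x2 matrix C and a vector X
C = ((1.0, 2.0),
     (3.0, 4.0))
X = (5.0, 6.0)

# Column-vector form Y = C X:  Y_i = sum_j C[i][j] * X[j]
Y_col = tuple(sum(C[i][j] * X[j] for j in range(2)) for i in range(2))

# Row-vector form Y = X C^t:  Y_j = sum_i X[i] * (C^t)[i][j] = sum_i X[i] * C[j][i]
Y_row = tuple(sum(X[i] * C[j][i] for i in range(2)) for j in range(2))

print(Y_col)  # (17.0, 39.0)
print(Y_row)  # (17.0, 39.0) -- same result, so the two forms agree
```

The same computation works for any size $n$; only the range bounds change.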

Second, according to Theorem 2.13, there is a matrix $C$ associated with the linear mapping $T$ when the standard basis $(I_1,\dots,I_n)$ is selected in each of the two copies of $V_n$.

Then by Theorem 2.13, and remembering that $A_k$ and $I_k$ are considered row vectors in this chapter, we have $I_k = A_k C^t$ for each index $k$. This assertion relies on the fact that in the basis $(I_1,\dots,I_n)$, the coordinates of the vector $A_k$ are in fact the elements of $A_k$.

So we can take $B = C^t$.
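To see this concretely: if $A$ is the matrix whose rows are $A_1,\dots,A_n$, then $A_k B$ is the $k$-th row of $AB$, so $B$ is simply $A^{-1}$, which exists because the rows are independent. A minimal numeric sketch in Python, with a made-up $2\times 2$ example (not from Apostol's text):

```python
# Made-up independent row vectors
A1 = (2.0, 1.0)
A2 = (1.0, 1.0)

# Inverse of the 2x2 matrix with rows A1, A2, by the adjugate formula
a, b = A1
c, d = A2
det = a * d - b * c  # nonzero since A1, A2 are independent
B = ((d / det, -b / det),
     (-c / det, a / det))

def row_times_matrix(row, M):
    """Multiply a row vector by a matrix: (row * M)_j = sum_i row_i * M_ij."""
    return tuple(sum(row[i] * M[i][j] for i in range(2)) for j in range(2))

# Each A_k is mapped to the unit coordinate vector I_k, as in the proof
print(row_times_matrix(A1, B))  # (1.0, 0.0)
print(row_times_matrix(A2, B))  # (0.0, 1.0)
```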

Another answer:

If you look at Section 2.19 in your book, there is a theorem on inverses of square matrices. When you read its proof, the reasoning is somewhat circular; however, the idea actually follows from the observation below.

Assume $A$ is symmetric, so it has a spectral decomposition $$ A = U \Lambda U^{T} $$ with $U$ orthogonal and $\Lambda$ diagonal. Then $$ \det(A) = \det(U \Lambda U^{T}) = \det(U)\det(\Lambda)\det(U^{T}). $$ Since $U$ is orthogonal, $\det(U)\det(U^{T}) = \det(UU^{T}) = \det(I) = 1$, so $$\det(A) = \det(\Lambda) = \prod_{i=1}^{n} \lambda_{i}. $$

If the rows of the matrix are linearly dependent, then one of the eigenvalues is zero, and hence the product of the eigenvalues, which is the determinant, is zero.
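A quick numeric illustration (made-up symmetric matrix with dependent rows; eigenvalues computed in pure Python from the characteristic polynomial of a $2\times 2$ matrix):

```python
import math

# Made-up symmetric 2x2 matrix whose rows are linearly dependent
# (second row equals the first), so one eigenvalue must be zero.
A = ((1.0, 1.0),
     (1.0, 1.0))

a, b = A[0]
c, d = A[1]
det = a * d - b * c

# Eigenvalues from the characteristic polynomial
# lambda^2 - (a + d) * lambda + det = 0
trace = a + d
disc = math.sqrt(trace * trace - 4.0 * det)
lam1 = (trace + disc) / 2.0
lam2 = (trace - disc) / 2.0

print(lam1, lam2)        # 2.0 0.0  -- one eigenvalue is zero
print(lam1 * lam2, det)  # 0.0 0.0  -- product of eigenvalues equals det
```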
