Linear independence of a set of vectors is basis-independent


Please correct my thinking if anything does not make sense to you.

A vector in $\mathbb{R}^n$ is nothing but the assemblage of its coordinates with respect to some basis, in the form of an $n \times 1$ matrix.

Statement: If a set of vectors (written in coordinates w.r.t. some basis) is linearly independent, then it is also linearly independent w.r.t. any other basis.

Proof: The coordinates of a vector w.r.t. some basis $B_1$ can be transformed into the coordinates of the same vector w.r.t. another basis $B_2$ by the change-of-basis transformation $T$ that takes $B_2$ to $B_1$, and this transformation has full rank.

Thus $M_1 = T M_2$, where $M_1$ is the matrix whose columns are the coordinates of the vectors w.r.t. $B_1$, $M_2$ is the corresponding matrix w.r.t. $B_2$, and $T$ is the change-of-basis matrix. Since $T$ is invertible, we conclude $\operatorname{rank} M_1 = \operatorname{rank} M_2$.
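This rank argument can be checked numerically. Below is a minimal sketch using made-up coordinate data and a random invertible matrix standing in for the change-of-basis transformation $T$; it verifies that multiplying by a full-rank matrix leaves the rank unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

# Coordinates of three linearly independent vectors w.r.t. basis B2,
# arranged as columns of M2 (hypothetical example data).
M2 = np.array([[1.0, 0.0, 2.0],
               [0.0, 1.0, 1.0],
               [1.0, 1.0, 0.0],
               [0.0, 2.0, 1.0]])

# A random matrix is invertible with probability 1, so it models a
# full-rank change-of-basis matrix T.
T = rng.standard_normal((4, 4))
assert np.linalg.matrix_rank(T) == 4

M1 = T @ M2  # coordinates of the same vectors w.r.t. B1

# Multiplying by an invertible matrix preserves rank.
print(np.linalg.matrix_rank(M2), np.linalg.matrix_rank(M1))  # prints: 3 3
```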

Does my proof make sense? Any suggestions would be appreciated.

2 Answers

Answer 1

You needn't restrict your question to $\mathbb{R}^n$; this works in every finite-dimensional vector space $V$. As said in the comments, every $n$-dimensional vector space $V$ is isomorphic to $\mathbb{R}^n$ by the coordinate mapping.

It might also make things clearer, both for yourself and for others, to formulate your claim as "linear independence of a set of vectors is basis-independent".

Your method seems sound to me if you can also prove that $\text{rank}(TM_2)=\text{rank}(M_2)$.

Another nice way to prove that linear (in)dependence is basis-independent is by using the determinant. Remember that the determinant of an $n\times n$-matrix is non-zero iff its rows (and columns) are linearly independent in $\mathbb{R}^n$. Also, remember the product rule for determinants and ask yourself what the determinant of $T$ will be (or rather - won't be).
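A minimal numeric sketch of this determinant approach (with hypothetical matrices, for the case of $n$ vectors in $\mathbb{R}^n$) might look like:

```python
import numpy as np

# Three vectors in R^3, written as the columns of M (hypothetical data).
M = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

# An invertible change-of-basis matrix T, so det(T) != 0.
T = np.array([[2.0, 0.0, 1.0],
              [1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

# Product rule: det(T M) = det(T) * det(M). Since det(T) != 0,
# det(T M) is nonzero exactly when det(M) is.
print(np.isclose(np.linalg.det(T @ M),
                 np.linalg.det(T) * np.linalg.det(M)))  # prints: True
```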

Another way is the following:

Let $\{v_1,\dots,v_n \}$ be a linearly independent set of vectors in vector space $V$. Let $T:V\to V$ be a basis transformation. Then the question is if this implies $$\sum_{i=1}^n\lambda_i T(v_i)=0 \iff \forall i\leq n: \lambda_i=0.$$

Since $T$ is linear we have

$$\sum_{i=1}^n\lambda_i T(v_i)=T\Big(\sum_{i=1}^n\lambda_i v_i\Big)=0,$$ so $\sum_{i=1}^n\lambda_i v_i\in \text{ker}(T)$. Now, you should figure out what $\text{ker}(T)$ is and how this implies that $\forall i \leq n:\lambda_i=0$. This then proves that $\{T(v_1),\dots,T(v_n)\}$ is also linearly independent.
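As a numeric sanity check of this kernel argument (using hypothetical data), one can verify that applying an invertible $T$ to a linearly independent set leaves the set linearly independent, i.e. the only solution of $\sum_i \lambda_i T(v_i)=0$ is the trivial one:

```python
import numpy as np

# A linearly independent set {v1, v2} in R^3, as columns of V (hypothetical).
V = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

# An invertible basis transformation T; invertibility means ker(T) = {0}.
T = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 0.0],
              [3.0, 0.0, 1.0]])
assert np.linalg.matrix_rank(T) == 3  # full rank, so trivial kernel

TV = T @ V  # columns are T(v1), T(v2)

# Rank equals the number of vectors, so lambda_1 = lambda_2 = 0 is the
# only solution of lambda_1*T(v1) + lambda_2*T(v2) = 0.
print(np.linalg.matrix_rank(TV))  # prints: 2
```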

You might also want to change the title of your question to something less vague. I would have said this in a comment but I don't have enough reputation.

Answer 2

I believe you're looking at this from the wrong point of view.

A vector is independent from bases. It's an abstract object belonging to some vector space. As soon as you fix a basis, you can consider the coordinates of this vector with respect to the basis. If the vector space has dimension $n$, then the coordinate vector belongs to $\mathbb{R}^n$.

Your question can be reformulated as follows.

Suppose we have $\{v_1,v_2,\dots,v_k\}$, a set of vectors in the $n$-dimensional vector space $V$. If $\mathscr{B}$ is a basis of $V$, then we can consider the set of coordinate vectors $\{C_{\mathscr{B}}(v_1),C_{\mathscr{B}}(v_2),\dots,C_{\mathscr{B}}(v_k)\}$, which is a set of vectors in $\mathbb{R}^n$.

Then $\{v_1,v_2,\dots,v_k\}$ is linearly independent (in $V$) if and only if $\{C_{\mathscr{B}}(v_1),C_{\mathscr{B}}(v_2),\dots,C_{\mathscr{B}}(v_k)\}$ is linearly independent in $\mathbb{R}^n$.

The proof is simple: the map $C_{\mathscr{B}}\colon V\to\mathbb{R}^n$ that associates to each vector $v\in V$ its coordinate vector is linear and bijective.

If $\{v_1,v_2,\dots,v_k\}$ is linearly independent and $$ \alpha_1C_{\mathscr{B}}(v_1)+\alpha_2C_{\mathscr{B}}(v_2)+\dots+\alpha_kC_{\mathscr{B}}(v_k)=0 $$ then, by linearity, $$ C_{\mathscr{B}}(\alpha_1v_1+\alpha_2v_2+\dots+\alpha_kv_k)=0 $$ and, by injectivity, $\alpha_1v_1+\alpha_2v_2+\dots+\alpha_kv_k=0$. Since the set is linearly independent, we obtain $\alpha_1=\alpha_2=\dots=\alpha_k=0$.

The converse is similar.

How's the map $C_{\mathscr{B}}$ defined? If $\mathscr{B}=\{e_1,e_2,\dots,e_n\}$, then for $v\in V$ we have $$ C_{\mathscr{B}}(v)=(x_1,x_2,\dots,x_n) \text{ if and only if } v=x_1e_1+x_2e_2+\dots+x_ne_n $$ The proof that $C_{\mathscr{B}}$ is linear and bijective consists in applying the definitions.
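In concrete terms, computing $C_{\mathscr{B}}(v)$ amounts to solving a linear system. A small sketch with a hypothetical non-standard basis of $\mathbb{R}^3$:

```python
import numpy as np

# A (non-standard) basis of R^3, with basis vectors e1, e2, e3 as the
# columns of E (hypothetical example data).
E = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

v = np.array([2.0, 3.0, 1.0])

# C_B(v) is the unique x with v = x1*e1 + x2*e2 + x3*e3, i.e. E @ x = v.
x = np.linalg.solve(E, v)
print(x)  # prints: [0. 2. 1.]

# Round trip: the coordinate map is bijective, so E @ x recovers v.
assert np.allclose(E @ x, v)
```

Uniqueness of the solution (i.e. invertibility of $E$) is exactly what makes $C_{\mathscr{B}}$ well-defined and bijective.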