Linear independence and dependence of vectors


I am really stuck on this problem. I have only two days to learn a matrix's basis and its generators. My problem is that I know the definitions, but I don't understand intuitively what they mean.

What I know: a basis consists of vectors that generate the matrix; a generating set also consists of vectors that generate the matrix, but may contain more vectors than necessary (unlike a basis).

But what are linear independence and dependence? Can you please give me trivial examples where I can distinctly see the difference between them? Why "dependence"? What does it depend on? Why do we call vectors (in)dependent?

Thanks a ton.

There are 3 answers below.

Best answer:

Do you remember the stories of treasure maps? We were told that to get to the treasure we needed to take five steps North and three steps East. We can make this mathematical. Let $\vec{n}$ be a step North and $\vec{e}$ a step East. Then taking five steps North and then three steps East can be written $5\vec{n}+3\vec{e}$. Going South is negative North and going West is negative East: three steps South and then one step West can be written $-3\vec{n}-\vec{e}$.

The vectors $\vec{n}$ and $\vec{e}$ let you get to any point on the map by going some steps North and some steps East (including "negative steps"). In linear algebra speak, the vectors $\vec{n}$ and $\vec{e}$ span the treasure map. Moreover, they are linearly independent: no combination of North/South steps can make up for an East/West step. You need both choices of direction to be able to reach all of the points on the map.

Let me introduce a new movement: a diagonal step $\vec{d}$. (Imagine this as a movement to the North-East direction.) Sure, $\vec{n}$, $\vec{e}$ and $\vec{d}$ still span the treasure map; $\vec{n}$ and $\vec{e}$ did that without $\vec{d}$'s help. However, these are not linearly independent. Adding $\vec{d}$ does not help me to get to points on the map that I couldn't have otherwise got to. For example, a diagonal step can be done by making a partial step to the North and a partial step to the East. Mathematically:

$$\vec{d} = \frac{1}{\sqrt{2}}\vec{n} + \frac{1}{\sqrt{2}}\vec{e} \, . $$

Generally: a set of vectors spans a space if a mixture of them allows you to "get to" each point of the space. They are linearly independent if you need all of them to do that, i.e. taking one away (e.g. not being able to move North) will stop you doing this. They are linearly dependent if you have more possibilities than you need, e.g. having a diagonal step when a small North step and a small East step will do the job.
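The treasure-map picture can be checked numerically. Here is a small sketch using NumPy (the coordinate choices and the rank test are my own illustration, not part of the original answer): a set of column vectors is linearly independent exactly when the matrix they form has rank equal to the number of vectors.

```python
import numpy as np

n = np.array([0.0, 1.0])   # one step North
e = np.array([1.0, 0.0])   # one step East
d = (n + e) / np.sqrt(2)   # one diagonal (North-East) step

# n and e alone are linearly independent: the 2x2 matrix has full rank
assert np.linalg.matrix_rank(np.column_stack([n, e])) == 2

# adding d does not raise the rank, so {n, e, d} is linearly dependent:
# d is already a combination of n and e
assert np.linalg.matrix_rank(np.column_stack([n, e, d])) == 2
```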

Another answer:

Example of linear dependence: if $\overrightarrow{a}=\left(1,1,1\right)$, $\overrightarrow{b}=\left(2,3,4\right)$ and $\overrightarrow{c}=\left(4,5,6\right)$, then $2\overrightarrow{a}+\overrightarrow{b}=\overrightarrow{c}$. One vector can be expressed as a linear combination of the others, so these three vectors are linearly dependent ("the value of one vector depends on the values of the others").

Example of linear independence: $\overrightarrow{a}=\left(1,1,2\right)$, $\overrightarrow{b}=\left(2,3,4\right)$ and $\overrightarrow{c}=\left(4,5,6\right)$ - here no vector can be expressed as a linear combination of the other two.
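Both examples can be verified mechanically: stack the vectors as rows of a matrix and compute its rank. This NumPy sketch is my own illustration of that check; full rank means the rows are independent, anything less means dependent.

```python
import numpy as np

# dependent example: rows are a = (1,1,1), b = (2,3,4), c = (4,5,6),
# and 2a + b = c, so the rank drops below 3
A = np.array([[1, 1, 1],
              [2, 3, 4],
              [4, 5, 6]], dtype=float)
assert np.linalg.matrix_rank(A) == 2   # rank < 3 => linearly dependent

# independent example: only a changes, to (1,1,2)
B = np.array([[1, 1, 2],
              [2, 3, 4],
              [4, 5, 6]], dtype=float)
assert np.linalg.matrix_rank(B) == 3   # full rank => linearly independent
```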

Also... I suppose you meant a basis matrix, which is formed from linearly independent vectors used as its columns. (See more in the Wikipedia link given by Sigur.)

Another answer:

My way to think about independence is as follows:

Given a set of vectors $S = \{a, b, c\}$, we say $S$ is linearly independent when $ma + nb + qc = 0$ holds if and only if the coefficients $m, n, q$ are all zero.

In other words, none of the vectors $a$, $b$, or $c$ can be expressed as a linear combination of the remaining vectors.

Or, in the matrix interpretation, independence means the system has only the trivial solution for the coefficients.

A simple example is the standard basis of $\mathbb{R}^3$: let $S = \{(1, 0, 0), (0, 1, 0), (0, 0, 1)\}$.

(1) If I use the "number" interpretation, I should set $k_1(1,0,0) + k_2(0,1,0) + k_3(0,0,1) = (0,0,0)$, where $k_1, k_2, k_3$ are coefficients. Then, by working through the resulting system of equations, we see that $k_1 = k_2 = k_3 = 0$ is the only way to make the equation hold.

(2) In matrix notation, the coefficient matrix is a $3\times 3$ matrix, where the $1$'s indicate how many copies of each coefficient $k_1$, $k_2$, $k_3$ are needed. The last (augmented) column contains all $0$'s. This says exactly what I said in (1), except that we now have a column vector of zeros instead of a row vector of zeros:

$$\left[\begin{array}{ccc|c} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{array}\right]$$

Clearly, by reading off the rows, we see that $k_1 = k_2 = k_3 = 0$ is the only valid solution, and this solution is trivial.
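The "only the trivial solution" claim can also be phrased in terms of the null space: the homogeneous system $Ak = 0$ has only $k = 0$ exactly when the null space is trivial, i.e. when the nullity is zero. A short NumPy sketch of this check, using the standard-basis example above (the rank-nullity computation is my own illustration):

```python
import numpy as np

# coefficient matrix for k1*(1,0,0) + k2*(0,1,0) + k3*(0,0,1) = (0,0,0)
A = np.eye(3)

# by the rank-nullity theorem, nullity = (number of columns) - rank;
# nullity 0 means the only solution is k1 = k2 = k3 = 0
rank = np.linalg.matrix_rank(A)
nullity = A.shape[1] - rank
assert nullity == 0   # only the trivial solution => columns independent
```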

So if you want to know where the "independence" really happens, I guess the word implies that the coefficients have no relation to the entries of the vectors. I don't care what numbers are in the vectors $a, b, c$. As long as I can set them in a linear combination, set it equal to the zero vector, and get zero coefficients, then even if my vectors $a, b, c$ are something crazy like (chair, cat, house), I can still extract that information.

I hope this helps.