Why are $1$, $x$ and $x^2$ linearly independent?


We say that $(1, x, x^2)$ spans the space of polynomials of degree at most $2$. But why do we say they are linearly independent?

How do you define linear independence of functions like $f(x) = x^2$ and $g(x) = x$?

Is linear dependence defined as being able to construct

$a_1 f(x) + a_2 g(x) = 0$ for some non-trivial $\{a_i\}$, for all $x$ in the domains of the functions?

What would an orthogonal basis of functions be? Are $x$ and $1$ orthogonal?

There are 5 solutions below.

---

When it comes to linear algebra, be careful about what your elements are.

When writing about $1, X, X^2$ you actually mean the associated polynomial functions. Thus your $0$ is the null function, defined as the function that assigns $0$ to every $x$. So saying that your linear combination is the null function (i.e., that the polynomials you consider are independent) indeed means that this linear combination must equal zero for every $x$.

As vadim123 pointed out, orthogonality depends on the inner product you use.

---

Yes, we say $f(x)=x$ and $g(x)=x^2$ are linearly independent over $\mathbb R$,

because if some linear combination of them is the zero function, then the coefficients are zero.

In symbols, if $af(x)+bg(x)=ax+bx^2=0$ (for all $x$), then $a=b=0$.

In order to show that $ax+bx^2=0$ for all $x$ implies $a=b=0$,

you could take $x=1$ and $x=2$, because if $a+b=0$ and $2a+4b=0$ then $a=b=0$.
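The substitution argument above can be checked mechanically. Here is a small sketch using `sympy` (an assumed tool; any computer algebra system would do):

```python
import sympy as sp

a, b, x = sp.symbols('a b x')

# The combination a*x + b*x**2 must vanish for ALL x; plugging in
# x = 1 and x = 2 gives a linear system in a and b.
eqs = [sp.Eq(a*1 + b*1**2, 0),   # x = 1: a + b = 0
       sp.Eq(a*2 + b*2**2, 0)]   # x = 2: 2a + 4b = 0
sol = sp.solve(eqs, [a, b])
print(sol)  # {a: 0, b: 0} -- only the trivial combination vanishes
```

The solver confirms that the only coefficients making the combination vanish at both points are $a = b = 0$.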

---

There are two ways to look at it, and one of them (the function space view) sees the polynomials as functions of a special type on some domain.

So $c_0\cdot 1 + c_1 \cdot x + c_2\cdot x^2 = 0$ means that the function of $x$ on the left-hand side is identically $0$, i.e. equal to $0$ for all values of $x$.

On the reals we can e.g. substitute $x=0$ and conclude $c_0=0$ that way. Then taking the derivative on both sides we get $c_1 + 2c_2x =0$, which should still hold for all $x$; we can use $x=0$ again to conclude $c_1=0$ and another derivative step to see $c_2=0$ too.

Or, avoiding derivatives, we can take $x=1$ and $x=2$ (knowing $c_0=0$ from $x=0$ already) to get the system $c_1+c_2=0$ and $2c_1 + 4c_2=0$, which by the usual elimination has only $c_1=c_2=0$ as its solution.
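The three evaluation points $x = 0, 1, 2$ can be packaged into a matrix; this sketch (using `numpy`, an assumption) shows why the resulting system forces all coefficients to zero:

```python
import numpy as np

# Rows: the functions 1, x, x^2 evaluated at x = 0, 1, 2.
V = np.array([[1, 0, 0],
              [1, 1, 1],
              [1, 2, 4]], dtype=float)

# c0 + c1*x + c2*x^2 = 0 at those three points means V @ [c0, c1, c2] = 0.
# A nonzero determinant forces c0 = c1 = c2 = 0.
print(np.linalg.det(V))  # 2.0 -- nonsingular, so only the trivial solution
```

This is a Vandermonde matrix; it is nonsingular whenever the evaluation points are distinct, which is exactly why picking any three distinct values of $x$ settles the question.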

A set of functions $F$ is called linearly independent iff

for all finite $n$ and $f_1,\ldots,f_n \in F$: whenever $\sum_{i=1}^n c_i f_i = 0$ (as functions), with the $c_i$ scalars (from $\mathbb{Q}$, $\mathbb{R}$, or $\mathbb{C}$, for example), we can conclude that $c_i = 0$ for all $1 \le i \le n$.

---

"Linear independence" depends on what kind of math you are doing.

There is a concept of linear dependence for functions: a set of real-valued functions is linearly dependent if some linear combination of the functions, with real coefficients that are not all zero, yields the function that is identically zero.

For example, given $f(x) = x$ and $g(x) = 2x$ you can write $$ h(x) = 2 f(x) + (-1) g(x) $$ and then $h(x) = 0$ for all $x$. But you got it by multiplying $f$ and $g$ respectively by non-zero numbers and adding them. Therefore $f$ and $g$ are linearly dependent.
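The witness combination from the example can be verified symbolically; a minimal sketch, assuming `sympy` is available:

```python
import sympy as sp

x = sp.symbols('x')
f = x        # f(x) = x
g = 2*x      # g(x) = 2x

# h = 2f + (-1)g, the combination from the text, with nonzero coefficients.
h = 2*f + (-1)*g
print(sp.simplify(h))  # 0 -- identically zero, so f and g are dependent
```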

A linear combination of $x$ and $x^2$ would look like this: $$ h(x) = c_1 x + c_2 x^2 $$ where $c_1$ and $c_2$ are whatever constant real numbers you choose. Can you choose $c_1$ and $c_2$ so they are not both zero, yet $h(x) = 0$ for all $x$? If not, $x$ and $x^2$ are not linearly dependent functions according to the definition above.

If you have this question because of a book or course you are studying, then the kind of linear dependence you are supposed to consider should have been defined already. Apply that definition.

---

Linear independence here says as follows: there do not exist scalars $c_0, c_1, c_2$, not all zero, such that the polynomial $$ c_0 + c_1 x + c_2 x^2 $$ is identically zero.

The linear independence of the functions $x$ and $x^2$ is defined the same way: if $c_1, c_2$ are not both zero, then the linear combination $$ c_1 x + c_2 x^2 $$ is not the zero polynomial.

A key question here is: why can't two polynomials of different degree be identically equal? To answer this for $x$ and $x^2$, start taking their derivatives: $$ 1, \quad 2x $$ $$ 0, \quad 2. $$ As you can see, the second derivatives are distinct. If the two polynomials were identical, their derivatives of all orders would be identical.
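The derivative computation above is easy to reproduce with `sympy` (an assumed tool):

```python
import sympy as sp

x = sp.symbols('x')

# First derivatives of x and x^2:
print(sp.diff(x, x), sp.diff(x**2, x))        # 1 and 2*x
# Second derivatives -- these are constants, and they differ:
print(sp.diff(x, x, 2), sp.diff(x**2, x, 2))  # 0 and 2
```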

As for orthogonality, this concept requires that we first have a scalar product on our vector space. For functions such as your polynomials, this article describes how the scalar product is defined and has a section specifically on polynomials.
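As a concrete illustration, one common (assumed) choice of scalar product for polynomials is $\langle f, g\rangle = \int_{-1}^{1} f(x)\,g(x)\,dx$. With that choice, $1$ and $x$ are in fact orthogonal, answering the question above:

```python
import sympy as sp

x = sp.symbols('x')

def inner(f, g):
    # One common choice: <f, g> = integral of f*g over [-1, 1].
    return sp.integrate(f * g, (x, -1, 1))

print(inner(1, x))     # 0   -> 1 and x are orthogonal
print(inner(x, x**2))  # 0   -> so are x and x^2
print(inner(1, x**2))  # 2/3 -> but 1 and x^2 are not
```

Note that orthogonality depends on this choice: with a different interval or weight function, the same pairs need not be orthogonal.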

If you find yourself confused about these, it's because your coursework has not included enough exercises. To compensate (and you should, otherwise this will keep getting in your way), use Halmos's Finite-Dimensional Vector Spaces and, to supplement it, his Linear Algebra Problem Book.