Linear independence of $1, e^{it}, e^{2it}, \ldots, e^{nit}$


Definition: Let $C[a,b]$ be the set of continuous $\mathbb{C}$-valued functions on an interval $[a,b] \subseteq \mathbb{R}$ with $a < b$.

Claim: In $C[-\pi, \pi]$, the vectors $1, e^{it}, e^{2it}, \ldots, e^{nit}$ are linearly independent for each $n = 1,2, \ldots$

I'm having trouble understanding why this claim is true. I get that $C[-\pi, \pi]$ is a vector space, so the $e^{nit}$'s are vectors. But I don't get how to show these functions are linearly independent.

One approach I was thinking about was letting $x = e^{it}$. Then the list of vectors looks more like a list of polynomials: $1,x,x^2, \ldots, x^n$. I know these are linearly independent. But I'm not confident this is the correct way to think about it.
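The substitution idea can at least be sanity-checked numerically (a sketch, not a proof): sampling the functions $1, e^{it}, \ldots, e^{nit}$ at distinct points of $[-\pi, \pi]$ gives a matrix whose columns are Vandermonde-like in $e^{it_j}$, and that matrix has full column rank.

```python
import numpy as np

# Numerical sanity check of linear independence: sample the functions
# 1, e^{it}, ..., e^{nit} at many distinct points in [-pi, pi] and
# verify the sample matrix has full column rank (n + 1).
n = 5
t = np.linspace(-np.pi, np.pi, 50)              # distinct sample points
M = np.exp(1j * np.outer(t, np.arange(n + 1)))  # M[j, k] = e^{i k t_j}
print(np.linalg.matrix_rank(M))                 # 6, i.e. n + 1
```

Full rank means no nonzero coefficient vector annihilates all sample points, which is consistent with (though weaker than) independence as functions.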

Reference: Garcia & Horn, *Linear Algebra*, Example 1.6.8.

There are 4 solutions below.

Answer 1:

Suppose the functions

$e^{ikt}, \; 0 \le k \le n, \tag 1$

were linearly dependent over $\Bbb C$; then we would have

$a_k \in \Bbb C, \; 0 \le k \le n, \tag 2$

not all $0$, with

$\displaystyle \sum_0^n a_k e^{ikt} = 0; \tag 3$

we note that

$a_k \ne 0 \tag 4$

for at least one $k \ge 1$; otherwise (3) reduces to

$a_0 \cdot 1 = 0, \; a_0 \ne 0 \Longrightarrow 1 = 0, \tag 5$

an absurdity; discarding any trailing zero coefficients and relabeling $n$ if necessary, we may thus assume further that

$a_n \ne 0; \tag 6$

also, we may write (3) as

$\displaystyle \sum_0^n a_k (e^{it})^k = 0; \tag 7$

but the left-hand side of (7) is a polynomial of degree $n$ in $e^{it}$; as such (by the fundamental theorem of algebra), it has at most $n$ distinct zeroes

$\mu_i \in \Bbb C, 1 \le i \le n; \tag 8$

this further implies that

$\forall t \in [-\pi, \pi], \; e^{it} \in \{\mu_1, \mu_2, \ldots, \mu_n \}, \tag 9$

that is, $e^{it}$ may only take values in the finite set of zeroes of the polynomial in (7); but this assertion is patently false, since $e^{it}$ passes through every unimodular complex number as $t$ ranges over $[-\pi, \pi]$, i.e., the range of $e^{it}$ is the (uncountable) unit circle. This contradiction shows that (3) cannot hold, and hence that the $e^{ikt}$ are linearly independent over $\Bbb C$ on $[-\pi, \pi]$.
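The root-counting step can be illustrated numerically (a sketch with a randomly chosen coefficient vector, not part of the proof): a degree-$n$ polynomial with $a_n \ne 0$ has at most $n$ zeroes, so $p(e^{it})$ cannot vanish at every $t \in [-\pi, \pi]$.

```python
import numpy as np

# Pick random complex coefficients a_0, ..., a_n (a_n != 0 almost surely)
# and evaluate p(e^{it}) on a fine grid: it is not identically zero.
rng = np.random.default_rng(0)
n = 4
a = rng.standard_normal(n + 1) + 1j * rng.standard_normal(n + 1)
t = np.linspace(-np.pi, np.pi, 1000)
vals = np.polyval(a[::-1], np.exp(1j * t))  # p(e^{it}), coeffs a_0..a_n
print(np.max(np.abs(vals)) > 1e-6)          # True: p(e^{it}) is not identically 0
```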

Answer 2:

For the proof, we will employ Euler's formula:

$$ e^{i\theta} = \cos{(\theta)} + i\sin{(\theta)}$$

We proceed by induction.

Base case:

The base case where $n = 1$ follows easily, for if

$$c_0 + c_1e^{it} = 0$$ for all $t \in [-\pi,\pi]$

then for $ t = 0 $ and $t = \pi$, we have the following two equations:

$$ c_0 + c_1 = 0$$ $$ c_0 - c_1 = 0$$

which implies that

$$ c_0 = c_1 = 0 $$

Inductive case:

For the inductive case, suppose there are scalars $c_0, c_1, \dots, c_n$ such that

$$ c_0 + c_1e^{it} + \cdots + c_ne^{nit} = 0$$ for all $t \in [-\pi,\pi]$.

Integrating both sides over $[-\pi, \pi]$, and using the fact that $\int_{-\pi}^{\pi} e^{ikt}\, dt = 0$ for every nonzero integer $k$, we have

$$2\pi c_0 = 0$$ so $$c_0 = 0$$

Thus,

$$ c_1e^{it} + c_2e^{2it} + \cdots + c_ne^{nit} = 0 $$

so we can factor out $e^{it}$ to get

$$ e^{it}(c_1 + c_2e^{it} + \cdots + c_ne^{(n-1)it}) = 0 $$

and since $e^{it} \ne 0$ for all $t$, this implies

$$c_1 + c_2e^{it} + \cdots + c_ne^{(n-1)it} = 0$$

in which case we employ the inductive hypothesis to get

$$ c_1 = c_2 = \cdots = c_n = 0 $$

and since $c_0 = 0$ as well, this ends the proof.
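As a quick numerical aside (a minimal sketch, not part of the proof), the base-case system can be checked with NumPy: evaluating $c_0 + c_1 e^{it} = 0$ at $t = 0$ and $t = \pi$ gives an invertible $2 \times 2$ system, so $c_0 = c_1 = 0$ is the only solution.

```python
import numpy as np

# Rows correspond to t = 0 (e^{i*0} = 1) and t = pi (e^{i*pi} = -1).
A = np.array([[1.0,  1.0],
              [1.0, -1.0]])
print(np.linalg.det(A))               # -2.0: nonzero, so A is invertible
c = np.linalg.solve(A, np.zeros(2))
print(c)                              # [0. 0.]: the only solution is c0 = c1 = 0
```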

Answer 3:

Suppose the contrary. Then there are scalars $a_0, a_1, \ldots, a_n$, not all zero, such that $$\sum_{k=0}^n a_k e^{ikt} = 0 \quad \text{for all } t \in [-\pi, \pi].$$ Consider the polynomial $$f(z) = \sum_{k=0}^n a_k z^k.$$ Since $e^{it}$ traverses the entire unit circle as $t$ ranges over $[-\pi, \pi]$, the analytic function $f$ vanishes on the whole unit circle. By the identity theorem (or simply because a nonzero polynomial has only finitely many roots), $f$ must be the zero function, so every $a_k = 0$. Contradiction.
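A discrete version of this rigidity can be demonstrated numerically (a sketch, assuming $N > n$ sample points): the values of $f(z) = \sum_{k=0}^n a_k z^k$ at $N$ roots of unity determine the coefficients exactly via the DFT, so if $f$ vanished on the unit circle, every $a_k$ would be forced to zero.

```python
import numpy as np

# Sample f at the N-th roots of unity and recover its coefficients with the DFT.
n, N = 3, 8
a = np.array([2.0, -1.0, 0.5, 3.0])        # example coefficients a_0..a_3
z = np.exp(2j * np.pi * np.arange(N) / N)  # N-th roots of unity
samples = np.polyval(a[::-1], z)           # f evaluated on the unit circle
recovered = np.fft.fft(samples) / N        # inverse of the sampling map
print(np.allclose(recovered[:n + 1], a))   # True: coefficients recovered exactly
print(np.allclose(recovered[n + 1:], 0))   # True: padding slots are zero
```

In particular, `samples == 0` everywhere would force `recovered == 0`, i.e. all $a_k = 0$.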

Answer 4:

Here's a technique using the definition of linear independence and some easy integration: Suppose $$\sum_{k = 0}^n a_k e^{k i t} = 0 \quad \text{for all } t \in [-\pi, \pi]$$ for some $a_0, \ldots, a_n$. Integrating against $e^{-j i t}$ for $j \in \{0, \ldots, n\}$ gives $$0 = \int_{-\pi}^{\pi} \left(\sum_{k = 0}^n a_k e^{k i t}\right) e^{-j i t}\, dt = \sum_{k = 0}^n a_k \int_{-\pi}^{\pi} e^{(k - j) i t}\, dt = 2 \pi a_j,$$ since $\int_{-\pi}^{\pi} e^{(k-j)it}\, dt$ is $2\pi$ when $k = j$ and $0$ otherwise, so each $a_j$ is zero.
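The orthogonality relation used above can be verified numerically: averaging $e^{i(k-j)t}$ over a uniform grid on $[-\pi, \pi)$ is exact for these exponentials (discrete orthogonality), and the resulting Gram matrix is the identity.

```python
import numpy as np

# Check (1/2pi) * integral_{-pi}^{pi} e^{i(k-j)t} dt = delta_{jk}, 0 <= j,k <= n,
# via a uniform-grid average over one full period (exact for N > n exponentials).
n, N = 4, 64
t = np.linspace(-np.pi, np.pi, N, endpoint=False)
E = np.exp(1j * np.outer(t, np.arange(n + 1)))  # E[m, k] = e^{i k t_m}
G = (E.conj().T @ E) / N                        # G[j, k] = average of e^{i(k-j)t}
print(np.allclose(G, np.eye(n + 1)))            # True: the Gram matrix is identity
```

An identity Gram matrix means the functions are orthonormal in the normalized $L^2$ inner product, which is a stronger statement than linear independence.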