Find the eigenvalues and eigenvectors of the linear transformation A defined on $\mathbb{C}^n$


I've been working on this for a couple of hours now and nothing I try seems to work. Any help would be really appreciated.

Find the eigenvalues and eigenvectors of the linear transformation $A$ defined on $\mathbb{C}^n$ by:

$A(x_1, x_2, \ldots, x_{n-1}, x_n) = (x_2, x_3, \ldots, x_n, x_1).$


There are 2 best solutions below


If $λ$ is an eigenvalue of $A$ with eigenvector $(x_1, x_2, \ldots, x_{n-1}, x_n)$, then $λ(x_1, x_2, \ldots, x_{n-1}, x_n) = (x_2, x_3, \ldots, x_n, x_1)$; note $λ \ne 0$, since $A$ just permutes the coordinates and so is invertible. Hence $λx_1 = x_2 = \frac{x_3}{λ} = \cdots = \frac{x_n}{λ^{n-2}} = \frac{x_1}{λ^{n-1}}$. Assuming $x_1$ is not zero (if it were, the chain above would force every $x_i$ to be zero, and an eigenvector is nonzero), we get $λ^n = 1$. So the eigenvalues are the complex $n$-th roots of $1$, and the eigenvectors easily follow as $(1, λ, λ^2, \ldots, λ^{n-1})$.
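The computation above is easy to sanity-check numerically. The sketch below (my own illustration, not part of the original answer; it uses numpy and an arbitrary test size $n = 4$) builds the shift matrix and compares its eigenvalues with the $4$-th roots of unity:

```python
import numpy as np

# Shift matrix for n = 4: A(x1, x2, x3, x4) = (x2, x3, x4, x1).
# np.roll moves row k of the identity up by one, so row k of A is e_{k+1 mod n}.
n = 4
A = np.roll(np.eye(n), -1, axis=0)

eigenvalues = np.linalg.eigvals(A)
roots_of_unity = np.exp(2j * np.pi * np.arange(n) / n)

# The two sets agree up to ordering and floating-point error.
assert np.allclose(np.sort_complex(eigenvalues), np.sort_complex(roots_of_unity))

# The claimed eigenvector (1, lam, lam^2, lam^3), here for lam = i:
lam = 1j
v = lam ** np.arange(n)
assert np.allclose(A @ v, lam * v)
```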


A slightly different take on things:

Set

$\vec x = (x_1, x_2, \ldots, x_n); \tag{0}$

then we have

$A \vec x = A(x_1, x_2, \ldots, x_n) = (x_2, x_3, \ldots, x_n, x_1), \tag{1}$

and thus

$A^2 \vec x = A^2(x_1, x_2, \ldots, x_n) = (x_3, x_4, \ldots, x_n, x_1, x_2), \tag{2}$

and it is pretty easy to see that for $1 \le j \le n$,

$A^j \vec x = A^j(x_1, x_2, \ldots, x_n) = (x_{j + 1}, x_{j + 2}, \ldots, x_n, x_1, \ldots, x_{j - 1}, x_j), \tag{3}$

where, in (3), indices are read modulo $n$, so that $x_{n + i}$ denotes $x_i$ for $1 \le i \le n$; with this notational convention, we note that the $k$-th component of $A^j\vec x$ is in fact $x_{j + k}$:

$(A^j\vec x)_k = x_{j + k}; \tag{4}$
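Formulas (3)-(4) can be spot-checked numerically; this snippet (my own illustration, using numpy's `roll` to realize the cyclic shift and an arbitrary size $n = 6$) verifies that applying $A$ $j$ times rotates the entries left by $j$ positions:

```python
import numpy as np

# Spot check of (3)-(4): the k-th component of A^j x is x_{j+k}, indices mod n.
n = 6
A = np.roll(np.eye(n), -1, axis=0)
x = np.arange(1, n + 1, dtype=float)      # x = (1, 2, ..., n)
for j in range(n):
    lhs = np.linalg.matrix_power(A, j) @ x
    rhs = np.roll(x, -j)                  # (x_{j+1}, ..., x_n, x_1, ..., x_j)
    assert np.allclose(lhs, rhs)
```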

continuing in this direction, we see that

$A^n(x_1, x_2, \ldots, x_n) = (x_1, x_2, \ldots, x_n) \tag{5}$

for any vector $(x_1, x_2, \ldots, x_n) \in \Bbb C^n$, leading directly to the conclusion that

$A^n = I. \tag{6}$
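Equation (6) is also easy to confirm numerically; the check below (an illustration with an arbitrary size $n = 5$, not from the original) additionally foreshadows the later claim that no smaller positive power of $A$ is the identity:

```python
import numpy as np

# Check of (6): A^n = I for the cyclic shift on C^n.
n = 5
A = np.roll(np.eye(n), -1, axis=0)
assert np.allclose(np.linalg.matrix_power(A, n), np.eye(n))

# No smaller positive power of A is the identity:
assert all(not np.allclose(np.linalg.matrix_power(A, j), np.eye(n))
           for j in range(1, n))
```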

We pause to observe that we can in fact easily calculate $t(A) \vec x = t(A)(x_1, x_2, \ldots, x_n)$ for any $t(y) \in \Bbb C[y]$ such that $\deg t < n$; indeed, writing

$t(y) = \sum_{j = 0}^m t_j y^j, \tag{7}$

with $t_j \in \Bbb C$, $0 \le j \le m < n$, we see that in fact, since

$(t_j A^j \vec x)_k = t_j(A^j \vec x)_k = t_j x_{j + k}, \tag{8}$

we thus have

$(t(A)\vec x)_k = \left(\left(\sum_{j = 0}^m t_j A^j\right) \vec x\right)_k = \sum_{j = 0}^m t_j x_{j + k}; \tag{9}$

(9) shows that

$t(A) \vec x = (\sum_{j = 0}^m t_j x_{j + 1}, \sum_{j = 0}^m t_j x_{j + 2}, \ldots, \sum_{j = 0}^m t_j x_{j + n}). \tag{10}$
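Formula (10) can likewise be checked numerically for a sample polynomial; the size $n = 5$ and the coefficients below are arbitrary choices of mine:

```python
import numpy as np

# Check of (10): (t(A) x)_k = sum_j t_j x_{j+k} (indices mod n)
# for a sample polynomial t(y) = 2 - y + 3 y^2 with m = 2 < n.
n = 5
A = np.roll(np.eye(n), -1, axis=0)
t = np.array([2.0, -1.0, 3.0])                      # t_0, t_1, t_2
x = np.arange(1.0, n + 1)

# Left side: evaluate t at the matrix A, then apply to x.
tA = sum(tj * np.linalg.matrix_power(A, j) for j, tj in enumerate(t))
# Right side of (10): each component is a shifted weighted sum of the x_i.
expected = sum(tj * np.roll(x, -j) for j, tj in enumerate(t))

assert np.allclose(tA @ x, expected)
```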

I now claim that $A$ satisfies no nonzero polynomial $t(x) \in \Bbb C[x]$ with $\deg t < n$. For if there were such a $t(x)$, then $t(A) = 0$; taking $\vec x = (1, 0, \ldots, 0)$, that is, $x_k = \delta_{1k}$, formula (10) then gives

$0 = t(A) \vec x = (t_0, 0, 0, \ldots, t_m, t_{m - 1}, \ldots, t_2, t_1), \tag{11}$

implying $t_j = 0$ for $0 \le j \le m$, that is, $t(x) = 0$ identically, a contradiction.

Having established this claim, we show that $p_A(x) = x^n - 1$ is in fact the characteristic polynomial of $A$ by the usual argument based upon the division algorithm: we may write $p_A(x) = (x^n - 1)q(x) + r(x)$, where either $r(x) = 0$ or $\deg r(x) < n = \deg(x^n - 1)$; then, by the Cayley-Hamilton theorem, $0 = p_A(A) = (A^n - I)q(A) + r(A) = r(A)$, which would contradict the just-demonstrated fact that $A$ satisfies no nonzero $t(x) \in \Bbb C[x]$ with $\deg t(x) < n$ if $r(x) \ne 0$. Thus $r(x) = 0$ and $p_A(x) = (x^n - 1)q(x)$; but since $p_A(x)$ and $x^n - 1$ are both monic of degree $n$, we must have $q(x) = 1$ and thus $p_A(x) = x^n - 1$.
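As a numerical cross-check (again an illustration of mine, not part of the proof), numpy's `poly` recovers the coefficients of the characteristic polynomial directly from the shift matrix:

```python
import numpy as np

# The characteristic polynomial of the n x n shift matrix should be
# x^n - 1, i.e. coefficients [1, 0, ..., 0, -1] (highest degree first).
n = 6
A = np.roll(np.eye(n), -1, axis=0)
coeffs = np.poly(A)   # for a square matrix, np.poly returns char. poly. coefficients
expected = np.concatenate(([1.0], np.zeros(n - 1), [-1.0]))
assert np.allclose(coeffs, expected)
```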

Since the characteristic polynomial of $A$ is $p_A(x) = x^n - 1$, the $n$ $n$-th roots of unity $1, \omega, \omega^2, \ldots, \omega^{n - 1}$, where $\omega = e^{2\pi i/n}$, are precisely the eigenvalues of $A$; the eigenvectors readily follow. Indeed, we see that

$A(1, 1, \ldots, 1) = (1, 1, \ldots, 1) = 1 \cdot (1, 1, \ldots, 1), \tag{12}$

so $(1, 1, \ldots, 1)$ is an eigenvector corresponding to the eigenvalue $1$; likewise we see that

$A(1, \omega, \omega^2, \ldots, \omega^{n - 1}) = (\omega, \omega^2, \ldots, \omega^{n - 1}, 1) = \omega( 1, \omega, \omega^2, \ldots, \omega^{n - 1}) \tag{13}$

since $\omega^n = 1$. To continue:

$A(1, \omega^2, \omega^4, \ldots, \omega^{2n - 2}) = (\omega^2, \omega^4, \ldots, \omega^{2n - 4}, \omega^{2n - 2}, 1)$ $= \omega^2(1, \omega^2, \omega^4, \ldots, \omega^{2n - 4}, \omega^{2n - 2}), \tag{14}$

$A(1, \omega^3, \omega^6, \ldots, \omega^{3n - 3}) = (\omega^3, \omega^6, \ldots, \omega^{3n - 6}, \omega^{3n - 3}, 1)$ $= \omega^3(1, \omega^3, \omega^6, \ldots, \omega^{3n - 6}, \omega^{3n - 3}), \tag{15}$

and in general, for $0 \le k < n$,

$A(1, \omega^k, \omega^{2k}, \ldots, \omega^{nk - 2k}, \omega^{nk - k}) = (\omega^k, \omega^{2k}, \ldots, \omega^{nk - 2k}, \omega^{nk - k}, 1)$ $= \omega^k(1, \omega^k, \omega^{2k}, \ldots, \omega^{nk - 2k}, \omega^{nk - k}); \tag{16}$

formulas (14)-(16) all use the fact that $(\omega^k)^n = \omega^{nk} = 1$; we note the vectors $(1, \omega^k, \omega^{2k}, \ldots, \omega^{nk - 2k}, \omega^{nk - k})$, $0 \le k < n$, are linearly independent by virtue of the fact that the corresponding eigenvalues $\omega^k$ are distinct. Thus we have exhibited a complete set of eigenvalues and eigenvectors for $A$.
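The eigenvectors $(1, \omega^k, \omega^{2k}, \ldots, \omega^{(n-1)k})$ stacked as columns form the (unnormalized) discrete Fourier transform matrix $F$, so all of (12)-(16) can be verified at once by checking $AF = FD$ with $D = \mathrm{diag}(1, \omega, \ldots, \omega^{n-1})$; the sketch below (my own illustration, with arbitrary size $n = 5$) does exactly that:

```python
import numpy as np

# Column k of F is the eigenvector (1, w^k, w^{2k}, ..., w^{(n-1)k}).
n = 5
A = np.roll(np.eye(n), -1, axis=0)
w = np.exp(2j * np.pi / n)
k = np.arange(n)
F = w ** np.outer(k, k)          # F[j, k] = w^{jk}
D = np.diag(w ** k)              # D = diag(1, w, ..., w^{n-1})

# A F = F D encodes A v_k = w^k v_k for every k simultaneously.
assert np.allclose(A @ F, F @ D)

# Linear independence of the eigenvectors: F is invertible
# (a Vandermonde matrix with distinct nodes w^k).
assert abs(np.linalg.det(F)) > 1e-9
```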

All this without ever explicitly presenting the matrix representation of $A$! Which is, incidentally,

$A = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ 1 & 0 & 0 & \cdots & 0 \end{bmatrix}. \tag{17}$

Hope this helps! Cheerio,

and as always,

Fiat Lux!!!