Find eigenvectors of a linear operator on the vector space Pn(R)?


Define a linear operator $T: P_n(\mathbb{R}) \to P_n(\mathbb{R})$ by $T(p(x)) = p(x) + p(2)x$.

(a) How many distinct eigenvalues does T have?

(b) What are the dimensions of their corresponding eigenspaces?


At first I started by writing the matrix for $T$ with respect to the standard basis, but I got a big matrix whose characteristic polynomial seems impossible to find. The matrix I got is in the link: https://i.stack.imgur.com/iJoVT.jpg

So that didn't help. Then I tried to use the definition of an eigenvalue: $T(p) = \lambda p$. I let $p = a_n x^n + \dots + a_0$, with $a_n \neq 0$ (because $p$ has to be nonzero). Then, since $T(p) = \lambda p$,
$$[a_n x^n + a_n 2^n x] + \dots + [a_0 + a_0 x] = \lambda a_n x^n + \dots + \lambda a_0.$$

My intention was to cancel things out, but I cannot see how this is going to work.

Any suggestions on how to approach this problem? Remember that the question asks how many distinct eigenvalues there are and what the dimensions of their eigenspaces are, so if there is a way to get that without finding the eigenvectors and eigenvalues explicitly, that's probably easier. Thank you in advance!

Edit: all of my $a_0$'s should have been $a_1$'s, because if it were $a_0$ the dimension of the vector space would be $n+1$; however, it's just $n$.
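(For concreteness, the matrix approach can at least be sanity-checked numerically for a small $n$. This sketch assumes the monomial basis $\{1, x, \dots, x^n\}$, so the matrix is $(n+1)\times(n+1)$; the variable names are mine.)

```python
import numpy as np

n = 4
# Matrix of T in the monomial basis {1, x, ..., x^n}:
# T(x^k) = x^k + 2^k * x, so column k is e_k plus 2^k added in the x-row (row 1).
M = np.eye(n + 1)
for k in range(n + 1):
    M[1, k] += 2.0 ** k

eigenvalues = sorted(np.round(np.linalg.eigvals(M).real, 6))
print(eigenvalues)  # n copies of 1.0 and a single 3.0
```

So numerically there appear to be exactly two distinct eigenvalues, which matches the answers below even though the characteristic polynomial is unpleasant to compute by hand.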


2 Answers

BEST ANSWER

Hint: you should be able to show that there is an eigenspace of dimension $n$ for the eigenvalue $\lambda = 1$ (think about what has to happen for $T(p(x)) = p(x)$ -- try to describe all such $p(x)$). Then find some other eigenvector for a different eigenvalue.

Expanding to a more complete solution:

$T(p(x)) = p(x)$ if and only if $p(2) = 0$. Notice that the polynomials $p_k(x) = x^k - 2^k$ are linearly independent for different $k$ and have $p_k(2) = 0$, so this gives us an eigenspace of dimension $n$ (since $p_k(x)$ for $1\leq k\leq n$ gives $n$ linearly independent eigenvectors) for the eigenvalue $\lambda = 1$. On the other hand, for $p(x) = x$ we have $T(p(x)) = x + 2x = 3p(x)$; this gives us an eigenspace of dimension $1$ for the eigenvalue $\lambda = 3$. Since this exhausts the whole space ($P_n(\mathbb{R})$ has dimension $n+1$) we are done, and these are the full eigenspaces.

By the way, this actually works for the infinite-dimensional space of all polynomials, though there one does have to check that the polynomials above in fact span the whole space rather than relying on a dimension argument; this isn't hard.

(Edited to reflect the clarification that $P_n(\mathbb{R})$ is indeed supposed to be the polynomials over $\mathbb{R}$ of degree at most $n$.)
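The claims above are easy to sanity-check numerically, representing polynomials as coefficient vectors. (The helper `T` and the coefficient convention below are mine, not part of the problem statement.)

```python
import numpy as np

n = 4  # coefficient vectors of length n + 1; entry k is the coefficient of x^k

def T(coeffs):
    # T(p) = p + p(2) * x: evaluate p at 2 and add the result to the x-coefficient
    out = coeffs.copy()
    out[1] += sum(c * 2.0 ** k for k, c in enumerate(coeffs))
    return out

# p_k(x) = x^k - 2^k satisfies p_k(2) = 0, hence T(p_k) = p_k (eigenvalue 1)
for k in range(1, n + 1):
    p = np.zeros(n + 1)
    p[k], p[0] = 1.0, -2.0 ** k
    assert np.allclose(T(p), p)

# p(x) = x gives T(p) = x + 2x = 3x (eigenvalue 3)
q = np.zeros(n + 1)
q[1] = 1.0
assert np.allclose(T(q), 3.0 * q)
print("eigenspace checks pass")
```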


First of all, notice that $\{1 - \tfrac{1}{2}x,\; x,\; x^2 - 2x,\; x^3 - 4x,\; x^4 - 8x,\; \dots,\; x^{n-1} - 2^{n-2}x\}$ is a basis of $P_n(\mathbb{R})$. Then

$T(1 - \tfrac{1}{2}x) = 1 - \tfrac{1}{2}x$

$T(x^2 - 2x) = x^2 - 2x$

$T(x^3 - 4x) = x^3 - 4x$

$\vdots$

$T(x^{n-1} - 2^{n-2}x) = x^{n-1} - 2^{n-2}x$

and $T(x) = 3x$. So $1$ and $3$ are eigenvalues of $T$ with multiplicities $n-1$ and $1$, respectively.
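Each basis polynomial listed here except $x$ vanishes at $2$ and is therefore fixed by $T$, which can be checked numerically. (A sketch using coefficient vectors of degree at most $n$; the helper `T` and conventions are mine.)

```python
import numpy as np

n = 5  # coefficient vectors of length n + 1; entry k is the coefficient of x^k

def T(coeffs):
    # T(p) = p + p(2) * x: add p(2) to the x-coefficient
    out = coeffs.copy()
    out[1] += sum(c * 2.0 ** k for k, c in enumerate(coeffs))
    return out

# 1 - (1/2)x vanishes at x = 2, so T fixes it
p = np.zeros(n + 1); p[0], p[1] = 1.0, -0.5
assert np.allclose(T(p), p)

# x^k - 2^(k-1) x also vanishes at x = 2 for k >= 2
for k in range(2, n + 1):
    p = np.zeros(n + 1); p[k], p[1] = 1.0, -2.0 ** (k - 1)
    assert np.allclose(T(p), p)

# while T(x) = 3x
q = np.zeros(n + 1); q[1] = 1.0
assert np.allclose(T(q), 3.0 * q)
print("eigenvalue 1 and eigenvalue 3 checks pass")
```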

Am I right??