Proof in infinite-dimensional space


Let $a$, $b$, $c$ be distinct real numbers, and let $f\colon E \to E$ be an endomorphism of a real vector space $E$ such that $$ (f - aI)(f - bI)(f - cI) = 0. $$ Show that $$ E = \ker(f - aI) \oplus \ker(f - bI) \oplus \ker(f - cI). $$ I'm able to do this with the theory of minimal polynomials and diagonalizable transformations (it's clear in that case that the minimal polynomial has only linear factors, so the transformation is diagonalizable and the statement follows). However, that argument works only if $E$ is finite-dimensional.

If $E$ is infinite-dimensional, how would I prove this?

Accepted answer

Hints

  • Find $\ \alpha,\beta\ $ and $\ \gamma\ $ such that $$\alpha(f-bI)(f-cI)+\beta(f-aI)(f-cI)+\gamma(f-aI)(f-bI)=I$$ Expanding the left side of the above equation and equating the coefficients of $\ f^2\ $ and $\ f\ $ to $\ 0\ $, and the coefficient of $\ I\ $ to $\ 1\ $, gives you three linear equations for $\ \alpha,\beta\ $ and $\ \gamma\ $, with coefficients that are functions of $\ a, b\ $ and $\ c\ $. The conditions you're given on $\ a, b\ $ and $\ c\ $ guarantee that these equations have a unique solution.

$\ \alpha=\frac{1}{(a-b)(a-c)},\ \beta=\frac{1}{(b-a)(b-c)},\ \gamma=\frac{1}{(c-a)(c-b)}\ $

  • Once you've got $\ \alpha,\beta\ $ and $\ \gamma\ $, you can write any $\ v\in E\ $ as $$v=v_a+v_b+v_c\ ,$$ where \begin{align} v_a&=\alpha(f-bI)(f-cI)v\in\ker(f-aI)\\ v_b&=\beta(f-aI)(f-cI)v\in\ker(f-bI)\\ v_c&=\gamma(f-aI)(f-bI)v\in\ker(f-cI) \end{align} (The kernel memberships follow because polynomials in $\ f\ $ commute, so e.g. $\ (f-bI)v_b=\beta(f-aI)(f-bI)(f-cI)v=0\ $ by hypothesis.)
  • If $\ K_1,K_2\ $ are any two of $\ \ker(f-aI), \ker(f-bI)\ $ and $\ \ker(f-cI)\ $, it's easy to show that $\ K_1\cap K_2=\{0\}\ $ because $\ 0=(f-xI)v=(f-yI)v\ $ implies that $\ (x-y)v=0\ $, and if $\ x\ne y\ $, this implies that $\ v=0\ $.
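The algebra behind these hints can be sanity-checked numerically. The sketch below (my own illustration, not part of the answer) builds a sample $3\times 3$ matrix $A$ satisfying $(A-aI)(A-bI)(A-cI)=0$, forms the three operators from the first hint with the stated $\alpha,\beta,\gamma$, and verifies with exact rational arithmetic that they sum to the identity and that each one maps into the corresponding kernel. The identity itself is purely polynomial in $f$, so it is independent of the dimension of $E$; the matrix here is only a convenient test case.

```python
from fractions import Fraction

n = 3
a, b, c = Fraction(1), Fraction(2), Fraction(5)  # any three distinct reals

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def addmat(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(n)] for i in range(n)]

def scale(t, X):
    return [[t * X[i][j] for j in range(n)] for i in range(n)]

I = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
ZERO = [[Fraction(0)] * n for _ in range(n)]

# Sample endomorphism with (A - aI)(A - bI)(A - cI) = 0:
# conjugate diag(a, b, c) by an invertible P so A is not itself diagonal.
P    = [[1, 1, 0], [0, 1, 1], [0, 0, 1]]
Pinv = [[1, -1, 1], [0, 1, -1], [0, 0, 1]]
D = [[a, 0, 0], [0, b, 0], [0, 0, c]]
A = matmul(matmul(P, D), Pinv)

def shift(t):
    """The matrix A - tI."""
    return addmat(A, scale(-t, I))

alpha = 1 / ((a - b) * (a - c))
beta  = 1 / ((b - a) * (b - c))
gamma = 1 / ((c - a) * (c - b))

# The three operators from the hint.
Pa = scale(alpha, matmul(shift(b), shift(c)))
Pb = scale(beta,  matmul(shift(a), shift(c)))
Pc = scale(gamma, matmul(shift(a), shift(b)))

assert matmul(shift(a), matmul(shift(b), shift(c))) == ZERO  # the hypothesis
assert addmat(addmat(Pa, Pb), Pc) == I                       # hint 1: sum is I
assert matmul(shift(a), Pa) == ZERO                          # ran Pa ⊆ ker(f - aI)
assert matmul(shift(b), Pb) == ZERO                          # ran Pb ⊆ ker(f - bI)
assert matmul(shift(c), Pc) == ZERO                          # ran Pc ⊆ ker(f - cI)
```

Using `Fraction` keeps every check exact, so the identity $\alpha(f-bI)(f-cI)+\beta(f-aI)(f-cI)+\gamma(f-aI)(f-bI)=I$ is confirmed without floating-point tolerance.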