Generalization of the Jordan form for infinite matrices


Under what conditions does a matrix $M$, whose rows and columns are indexed by a countably infinite set $S$, admit a Hamel basis consisting of generalized eigenvectors of $M$ (i.e. vectors $v \in \ker(M - \lambda I)^n$ for some eigenvalue $\lambda$ and some $n \in \mathbb{N}$)? Must $M$ be a compact operator (I have a norm)?

The matrix I am working with has non-negative entries and row sums not exceeding $1$ (i.e. it is substochastic), and it is irreducible and aperiodic. However, I suspect this question may be of general interest to others, so any solution not relying on these properties would be all the more useful.

EDIT

Here is some more information: the matrix $M$ which I am working with is $R$-positive. This means that none of the sequences $\{ R^n M^n_{ij}\}_{n \in \mathbb{N}}$, $i,j \in S$, converge to $0$, where $$ R^{-1} := \lim_{n \to \infty} (M_{ij}^n)^{1/n} $$ (by irreducibility and aperiodicity, this limit does not depend on $i,j$). In such a case, it is known that $R^{-1}$ is the spectral radius of $M$, and moreover that $R^{-1}$ is an eigenvalue of $M$ with left and right eigenvectors $\alpha,\beta$, unique up to scaling, which are strictly positive and satisfy $$ \sum_{k \in S} \alpha(k) \beta(k) < \infty. $$ In particular, the set of eigenvalues of $M$ is nonempty.
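In the finite-dimensional irreducible, aperiodic case the limit defining $R^{-1}$ can be checked numerically against the spectral radius. A minimal sketch, using a hypothetical $3\times3$ substochastic matrix (the matrix entries and the power $n$ are arbitrary choices for illustration):

```python
import numpy as np

# Hypothetical 3x3 substochastic matrix: non-negative entries,
# row sums <= 1, irreducible and aperiodic.
M = np.array([
    [0.2, 0.3, 0.4],
    [0.5, 0.1, 0.3],
    [0.3, 0.3, 0.2],
])

n = 2000
Mn = np.linalg.matrix_power(M, n)
estimate = Mn[0, 1] ** (1.0 / n)                     # (M^n)_{01}^{1/n}
spectral_radius = np.max(np.abs(np.linalg.eigvals(M)))

# The two quantities agree up to an O(1/n) correction.
print(estimate, spectral_radius)
```

The same agreement holds for any choice of indices $i,j$, reflecting the independence of the limit from $i,j$.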

Thanks!


There are 2 best solutions below


Let $H$ be a separable Hilbert space with orthonormal basis $\{e_n\}_{n\geq1}$. Let $T$ be the weighted shift given by $$ Te_n=\frac1n\,e_{n+1}. $$ This operator is compact (indeed, Hilbert-Schmidt).

If $Tv=\lambda v$, with $v=\sum_n\alpha_n e_n$, then $$ \sum_{n=1}^\infty\alpha_n\lambda e_n=\sum_{n=1}^\infty\alpha_n Te_n=\sum_{n=1}^\infty\alpha_n\,\frac1n\,e_{n+1}=\sum_{n=2}^\infty\frac{\alpha_{n-1}}{n-1}\,e_n. $$ If $\lambda=0$, comparing coefficients gives $\alpha_n=0$ for all $n$, so $v=0$. If $\lambda\ne0$, then $\lambda\alpha_1=0$ forces $\alpha_1=0$, and the recursion $\alpha_{n+1}=\alpha_n/(n\lambda)$ implies again that $\alpha_n=0$ for all $n$, so $v=0$. This shows that $T-\lambda I$ has trivial kernel for all $\lambda$.

With the same idea it is easy to show that $\ker(T-\lambda I)^n$ is trivial for all $n$ and all $\lambda$. So $T$ has no nonzero generalized eigenvectors, and it cannot have a Jordan form, at least in the obvious sense.
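The finite truncations of this shift make the obstruction concrete: every truncation is nilpotent, so its only eigenvalue is $0$, while the Hilbert-Schmidt norm stays bounded. A small numerical sketch (the truncation size $N$ is an arbitrary choice):

```python
import numpy as np

N = 50
T = np.zeros((N, N))
for n in range(1, N):
    T[n, n - 1] = 1.0 / n   # T e_n = (1/n) e_{n+1} in the first N coordinates

# Strictly lower triangular, hence nilpotent: 0 is the only eigenvalue
# of every truncation, matching the absence of eigenvalues in infinite dimension.
print(np.all(np.linalg.matrix_power(T, N) == 0))

# Squared Hilbert-Schmidt norm: sum of (1/n)^2, bounded by pi^2/6 as N grows.
hs_norm_sq = np.sum(T ** 2)
print(hs_norm_sq)
```

The bound $\sum_n (1/n)^2 = \pi^2/6 < \infty$ is exactly the Hilbert-Schmidt condition for $T$ on the full space $H$.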


I am not sure if this answer is useful in your case.

Theorem: There exists a basis of $V$ formed by generalized eigenvectors of $T:V\rightarrow V$, even if $V$ is an infinite-dimensional vector space over a field $F$, provided there exists a polynomial $p(x)\in F[x]$ with all roots in $F$ such that $p(T)=0$.

Of course, we first prove it for nilpotent operators, and then use the primary decomposition theorem to extend it to arbitrary operators satisfying this hypothesis. Notice that when $F=\mathbb{C}$ this hypothesis always holds in finite dimension, by the Cayley-Hamilton theorem. The proof is almost the same as in the finite-dimensional case.

The proof for nilpotent operators is an induction on the nilpotency index $($the smallest $k$ such that $T^k=0)$ which does not depend on the dimension of $V$.

Let us prove it for nilpotent operators $($i.e. when $p(x)=x^k)$.

Proof: Let $T:V\rightarrow V$ be a nilpotent operator. Let $k$ be the nilpotency index. If $k=1$ the theorem is trivial. Suppose $k>1$.

Since $k>1$, we have $T\neq0$ and $\Im(T)\neq 0$. Define $T':\Im(T)\rightarrow\Im(T)$ by $T'(x)=T(x)$; this is well defined because $T(\Im(T))\subseteq\Im(T)$.

The nilpotency index of $T'$ is smaller than that of $T$, since $(T')^{k-1}=0$ on $\Im(T)$. Thus, by the induction hypothesis, there exists a basis $\alpha$ of $\Im(T)$ such that

  1. $\alpha=\cup_{i\in I}\alpha_i$
  2. $\alpha_i=\{v_1^i,\ldots,v_{s_i}^i\}$, $s_i<k$
  3. $T(v_l^i)=v_{l-1}^{i}$ for $1<l\leq s_i$ and $T(v_1^i)=0$

Next, for each $v_{s_i}^i\in\alpha_i\subset\Im(T)$, choose $v_{s_i+1}^i\in V$ such that $T(v_{s_i+1}^i)=v_{s_i}^i$.

Consider the following preimage of $\alpha$: $\cup_{i\in I}\beta_i$, where $\beta_i=\{v_2^i,\ldots,v_{s_i}^i,v_{s_i+1}^i\}$. Now let $W$ be the subspace of $V$ generated by $\cup_{i\in I}\beta_i$. Notice that $\ker(T)\oplus W=V$ and that $\cup_{i\in I}\beta_i$ is a basis of $W$.

Now, $\{v_1^i,\,i\in I\}$ is a basis of $\ker(T)\cap\Im(T)$. (This is straightforward.)

Let $R$ be a subspace of $\ker(T)$ such that $R\oplus (\ker(T)\cap\Im(T))=\ker(T)$. Let $\{r_j, j\in J\}$ be a basis of $R$.

Finally the required basis of $V=\ker(T)\oplus W=R\oplus (\ker(T)\cap\Im(T))\oplus W$ is $$\{r_j,\, j\in J\}\cup \{v_1^i,\,i\in I\}\cup (\cup_{i\in I}\{v_2^i,\ldots,v_{s_i}^i,v_{s_i+1}^i\})=$$ $$=\{r_j,\, j\in J\}\cup (\cup_{i\in I}\{v_1^i, v_2^i,\ldots,v_{s_i}^i,v_{s_i+1}^i\}).$$ $\square$
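A concrete finite-dimensional instance of the basis the proof produces, using a hypothetical $4\times4$ nilpotent matrix with nilpotency index $k=3$: one chain $v_1\leftarrow v_2\leftarrow v_3$ under $T$, plus one extra kernel vector $r_1$ spanning the complement $R$:

```python
import numpy as np

T = np.array([
    [0., 1., 0., 0.],
    [0., 0., 1., 0.],
    [0., 0., 0., 0.],
    [0., 0., 0., 0.],
])
# p(T) = T^3 = 0, so the theorem's hypothesis holds with p(x) = x^3.
print(np.all(np.linalg.matrix_power(T, 3) == 0))

e = np.eye(4)
v3 = e[2]        # top of the chain
v2 = T @ v3      # T v3 = v2
v1 = T @ v2      # T v2 = v1, and T v1 = 0
r1 = e[3]        # kernel vector outside Im(T), spanning R

basis = np.column_stack([v1, v2, v3, r1])
print(np.linalg.matrix_rank(basis))   # 4: the chain plus r1 is a basis of V
```

Here $\ker(T)\cap\Im(T)$ is spanned by $v_1$, and $r_1$ completes it to a basis of $\ker(T)$, exactly as in the decomposition $V=R\oplus(\ker(T)\cap\Im(T))\oplus W$.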