Why is it so difficult to obtain the spectral properties related to infinite matrices, especially when they are not symmetric?


I believe most of spectral theory revolves around bounded self-adjoint linear operators, which are analogous to real symmetric infinite matrices. However, there are cases where the matrices are not real or not symmetric. How can one find their eigenvalues, eigenvectors, etc.? And what information about the matrix can be obtained from the eigenvalues?


Carleman matrices, for instance, are of infinite size and systematically non-symmetric, and some interesting cases are also complex. For instance, consider the map $$ f_m: x \to (1+x)^m - 1 $$ where we assume some fixed $m$. This has a Carleman matrix, say "$F_m$" (I always use them in lower triangular form), and let's denote that relation as $$ F_m:: \qquad x \to (1+x)^m - 1 $$

Now the given map can be seen as a composition of maps $$ x \to \exp( m \cdot \log(1+x))-1$$ and each of these partial maps has its own Carleman matrix, say $$ \begin{array} {rl} S1:: & x \to \log(1+x) \\ D_m :: & x \to m \cdot x \\ S2:: & x \to \exp(x)-1 \\ \end{array}$$ Here the matrix $D_m$ is diagonal, and the matrices $S1$ and $S2$ are triangular and mutually inverse (simply because $\log(1+x)$ and $\exp(x)-1$ are inverse maps).
By construction we can write $$ F_m = S1 \cdot D_m \cdot S2 = S1 \cdot D_m \cdot S1^{-1}, $$ which has the structure of a diagonalization, just as for matrices of finite size.
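A small numeric sketch of this factorization, not part of the original answer: the Python code below builds truncated Carleman matrices (the truncation order $N=6$ and the choice $m=3$ are my own assumptions for illustration). Because all factors are lower triangular, the truncated products agree exactly with the truncated Carleman matrix of the composed map.

```python
import math
from fractions import Fraction as Fr

N = 6   # truncation order; the true Carleman matrices are infinite
m = 3   # illustrative choice of the fixed exponent

def poly_mul(a, b):
    """Multiply two truncated power series (length-N coefficient lists)."""
    c = [Fr(0)] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if ai and i + j < N:
                c[i + j] += ai * bj
    return c

def carleman(f):
    """Lower-triangular Carleman matrix of a map f with f(0) = 0:
    column k holds the Taylor coefficients of f(x)**k."""
    M = [[Fr(0)] * N for _ in range(N)]
    p = [Fr(1)] + [Fr(0)] * (N - 1)          # f**0 = 1
    for k in range(N):
        for j in range(N):
            M[j][k] = p[j]
        p = poly_mul(p, f)
    return M

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

log1p = [Fr(0)] + [Fr((-1) ** (n + 1), n) for n in range(1, N)]    # log(1+x)
expm1 = [Fr(0)] + [Fr(1, math.factorial(n)) for n in range(1, N)]  # exp(x)-1

S1 = carleman(log1p)
S2 = carleman(expm1)
D  = [[Fr(m) ** i if i == j else Fr(0) for j in range(N)] for i in range(N)]

F_factored = matmul(matmul(S1, D), S2)

# Direct Carleman matrix of f_m(x) = (1+x)^m - 1 (binomial coefficients)
fm = [Fr(0)] + [Fr(math.comb(m, n)) for n in range(1, N)]
F_direct = carleman(fm)

print(F_factored == F_direct)   # True: exact, since all factors are lower triangular
```

Exact rational arithmetic (`Fraction`) is used so the truncated identity holds with no rounding; one can also check that `matmul(S1, S2)` gives the identity matrix, reflecting that the two maps are mutual inverses.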

The eigenvalues are the consecutive powers of $m$: $$D_m = \operatorname{diag}([1,m,m^2,m^3,\ldots])$$ and the eigenvectors are the columns of the matrix $S1$, which is the matrix of Stirling numbers of the first kind, similarity-scaled by the factorials (see, for instance, Abramowitz & Stegun). ($S2$ is, of course, the matrix of Stirling numbers of the second kind, similarity-scaled by factorials in the same way.)
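This eigen-relation can also be checked directly on truncations. The sketch below (my own illustration; $N$ and $m$ are assumed values) builds $S1$ from the signed Stirling numbers of the first kind, scaled by factorials as described, and verifies $F \cdot s_k = m^k \, s_k$ for each column $s_k$ of $S1$.

```python
import math
from fractions import Fraction as Fr

N, m = 6, 3   # truncation order and exponent (illustrative choices)

# Signed Stirling numbers of the first kind via the standard recurrence
# s(n, k) = s(n-1, k-1) - (n-1) * s(n-1, k)
s = [[0] * N for _ in range(N)]
s[0][0] = 1
for n in range(1, N):
    for k in range(1, n + 1):
        s[n][k] = s[n - 1][k - 1] - (n - 1) * s[n - 1][k]

# S1[j][k] = (k!/j!) * s(j, k): Stirling matrix, similarity-scaled by factorials
S1 = [[Fr(math.factorial(k), math.factorial(j)) * s[j][k]
       for k in range(N)] for j in range(N)]

# Truncated Carleman matrix of f_m(x) = (1+x)^m - 1:
# F[j][k] = [x^j] ((1+x)^m - 1)^k, expanded by the binomial theorem
F = [[sum((-1) ** (k - i) * math.comb(k, i) * math.comb(m * i, j)
          for i in range(k + 1))
      for k in range(N)] for j in range(N)]

# Each column of S1 is an eigenvector of F with eigenvalue m^k
for k in range(N):
    col = [S1[j][k] for j in range(N)]
    F_col = [sum(F[j][i] * col[i] for i in range(N)) for j in range(N)]
    assert F_col == [Fr(m) ** k * c for c in col]
print("eigenvalue m^k confirmed for each column of the truncated S1")
```

As a sanity check, column $1$ of $S1$ holds the Taylor coefficients of $\log(1+x)$, i.e. $(-1)^{j+1}/j$, matching the first-kind Stirling numbers $s(j,1) = (-1)^{j-1}(j-1)!$ after the factorial scaling.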

Note that because $\exp(x)=\exp(x+ 2k \pi i)$ for $k \in \mathbb Z$, we can have multiple "versions" of $S1$, namely the Carleman matrices for $\log(1+x)+ 2k \pi i$; so the construction of eigenvectors is not unique (we might then declare the given example the "principal solution").
I've asked a question here on MSE concerning another Carleman and diagonalization problem; see: Question on terminology on diagonalization


[late update]: I posted an answer on MathOverflow which gives another interesting example of the ambiguity/non-intuitiveness of eigenvalues and eigenvectors of infinite-sized matrices. See here. It goes like this:

One simple example with a special matrix which has, in some sense, a continuum of eigenvalues...
Consider some function $ f(x) = K + ax + bx^2 + cx^3 + \cdots $ having a nonzero radius of convergence. Then think of the infinite matrix of the form $$ \small \begin{bmatrix} K & . & . & . & \cdots \\ a & K & . & . & \cdots \\ b & a & K & . & \cdots \\ c & b & a & K & \cdots \\ \vdots & \vdots & \vdots& \vdots & \ddots \end{bmatrix} $$ From the properties of finite matrices we would expect that $K$ is an eigenvalue. But consider an infinite vector of the type

$$ V(x) = [1,x,x^2,x^3,x^4,\ldots ] $$ with a scalar parameter $x$ from the range of convergence. Then $$ V(x) \cdot F = f(x) \cdot V(x). $$ This also means: every vector $V(x)$ is an eigenvector of the matrix $F$ and corresponds to the eigenvalue $f(x)$. If $f(x)$ is entire, for instance the exponential function $ f(x)=\exp(x)$, then any value in the complex plane (except $0$, because $\exp(x)$ is never $0$) "is an eigenvalue" of $F$, contradicting the "naive" extrapolation from finite truncations of the matrix ...
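To see this concretely on a finite truncation (a sketch of my own; the function $\exp$, the evaluation point $x=0.3$, and the cut-off $N$ are assumed for illustration): the first components of $V(x)\cdot F$ already reproduce $f(x)\cdot V(x)$ essentially to machine precision, with the agreement degrading only near the truncation edge.

```python
import math

N = 40    # truncation size; the true matrix F is infinite
x = 0.3   # a point inside the radius of convergence (infinite for exp)

# Taylor coefficients of f(x) = exp(x): K = 1, a = 1, b = 1/2, ...
c = [1.0 / math.factorial(n) for n in range(N)]

# Lower-triangular Toeplitz matrix: F[i][j] = c[i - j]
F = [[c[i - j] if i >= j else 0.0 for j in range(N)] for i in range(N)]

V = [x ** n for n in range(N)]                 # row vector [1, x, x^2, ...]
VF = [sum(V[i] * F[i][j] for i in range(N))    # (V . F)_j
      for j in range(N)]

fx = math.exp(x)
err = max(abs(VF[j] - fx * V[j]) for j in range(10))
print("max error in the first 10 components:", err)   # very small (floating-point level)
```

Repeating this for other values of $x$ inside the disc of convergence shows the same behaviour, which is exactly the "continuum of eigenvalues" phenomenon described above: the truncated row-vector identity approximates $V(x)\cdot F = f(x)\cdot V(x)$ for every such $x$.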