Diagonalization of an infinite matrix


Let A be an infinite matrix with all its first column elements equal to 1 and the rest of them equal to 0.

$$A=\begin{pmatrix} 1 & 0 & 0 & 0 & \cdots\\ 1 & 0 & 0 & 0 &\cdots\\ 1 & 0 & 0 & 0 & \cdots\\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$

Can A be diagonalized?


Best answer

Since you are in infinite dimensions, you would first need to specify in which space the operator $A$ is supposed to act, then you can try to prove that it fulfils the assumptions for the spectral theorem.

If we first look at the action of $A$ on an arbitrary sequence of real numbers $a=(a_1, a_2, \ldots)$ (I'm assuming you are working over $\mathbb{R}$), we see that $A(a) = (a_1, a_1, \ldots)$, a constant sequence, which (unless $a_1 = 0$) does not lie in, e.g., $l^2$, the natural Hilbert space of sequences.

Actually, $A$ might seem to make sense only in $l^\infty$ (but even that fails, as @MartinArgerami points out, because we have no countable basis with which to interpret the action of $A$ on an arbitrary vector $u \in l^\infty$), and $l^\infty$ is certainly not a Hilbert space. Lacking a scalar product, we cannot define what orthogonal eigenspaces would be, hence there is no orthogonal diagonalisation.

Note however that we can formally find another "infinite matrix" $P$ such that $P^{-1}$ "exists" in some sense and $D = P A P^{-1}$ is a diagonal infinite matrix, namely

$$D = \left(\begin{array}{ccccc} 1 & & & & \\ 0 & 0 & & & \\ 0 & 0 & 0 & & \\ 0 & 0 & 0 & 0 & \\ \vdots & & & & \ddots \end{array}\right)$$

with

$$P^{- 1} = \left(\begin{array}{ccccc} 1 & & & & \\ 1 & 1 & & & \\ 1 & 0 & 1 & & \\ 1 & 0 & 0 & 1 & \\ \vdots & & & & \ddots \end{array}\right),\ \ P = \left(\begin{array}{ccccc} 1 & & & & \\ - 1 & 1 & & & \\ - 1 & 0 & 1 & & \\ - 1 & 0 & 0 & 1 & \\ \vdots & & & & \ddots \end{array}\right).$$

Edit: If you are wondering where those matrices came from, it was basically this: it is natural to see how $A$ acts on the canonical basis, and one immediately sees that $A(e_1)=u=(1,1,1,1,...)$ is an eigenvector with eigenvalue 1 and that $A(e_i)=0$ for all $i>1$, so $e_i$ are eigenvectors with eigenvalue 0. You want $P$, $P^{-1}$ such that $D=P A P^{-1}$, where $P^{-1}$ is a change from the "new" basis of eigenvectors into the "old", i.e. the matrix with columns $u, e_2, e_3, ...$ Compute its "inverse" $P$, see if $D$ is all zeros except in the first entry, and you are done. But again, this is all formal and quite wrong, since we don't have a basis to begin with. See Martin's answer for more.
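The formal relation above can at least be checked on finite truncations. Below is a small numpy sketch (my own illustration, not part of the original answer) that builds the $n \times n$ truncations of $A$, $P$, and $P^{-1}$ exactly as written in the answer and verifies that $P A P^{-1}$ is diagonal with a single $1$ in the top-left corner:

```python
import numpy as np

# Finite n x n truncations of the infinite matrices in the answer.
# The infinite case is only formal, as the answer stresses.
n = 6

A = np.zeros((n, n))
A[:, 0] = 1.0            # first column all ones, rest zero

Pinv = np.eye(n)
Pinv[:, 0] = 1.0         # columns u, e_2, e_3, ... (the "new" basis)

P = np.eye(n)
P[1:, 0] = -1.0          # inverse of Pinv

D = P @ A @ Pinv

expected = np.zeros((n, n))
expected[0, 0] = 1.0     # diag(1, 0, 0, ...)

print(np.allclose(P @ Pinv, np.eye(n)))  # True: P really inverts Pinv
print(np.allclose(D, expected))          # True: D = P A P^{-1} as claimed
```

Of course this only shows that the finite truncations behave consistently; it says nothing about the infinite case, where the construction remains formal.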

Another answer

The question is a bit vague: how do you multiply arbitrary infinite matrices? You need some restrictions, and these will affect when such a "matrix" is invertible (a notion you need in order to talk about diagonalization). Or, if you express diagonalization as the existence of a basis of eigenvectors, you need to say on which space $A$ acts.

Think of it this way: one clearly expects the spectrum of $A$ to be $\{0,1\}$. But note that $A$ has no eigenvector for the eigenvalue $1$: to have $Ax=x$ for nonzero $x$, you would need $$ x=\begin{bmatrix}1\\1\\1\\ \vdots\end{bmatrix} $$ and such an $x$ lies in no natural sequence space on which $A$ could act, so $Ax$ is not defined. And that's the problem: your "matrix" $A$ does not define an operator if you want to generalize the usual action of matrices on vectors.
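This obstruction is also visible numerically. The sketch below (my own addition, assuming the finite $n \times n$ truncations $A_n$ of $A$) computes the eigenvector of $A_n$ for the eigenvalue $1$ and confirms it is always a multiple of $(1, 1, \ldots, 1)$, whose entries do not decay, so there is no $l^2$ limit as $n \to \infty$:

```python
import numpy as np

# For the n x n truncation A_n, the equation A_n x = x forces
# x_i = x_1 for every i, i.e. x is a multiple of (1, 1, ..., 1).
for n in [4, 16, 64]:
    A = np.zeros((n, n))
    A[:, 0] = 1.0
    vals, vecs = np.linalg.eig(A)
    # pick the (numerically computed) eigenvector for eigenvalue 1
    v = vecs[:, np.argmin(np.abs(vals - 1.0))].real
    # all entries equal, i.e. v is proportional to the all-ones vector
    print(n, np.allclose(v, v[0] * np.ones(n)))
```

Since $\|(1,\ldots,1)\|_{l^2} = \sqrt{n}$ grows without bound, the would-be eigenvector for eigenvalue $1$ escapes every $l^2$ truncation, matching the answer's point that $A$ does not define an operator there.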