Suppose I have a system of linear difference equations
$$ \mathbf{x}_{n+1} = A \mathbf{x}_n \>.$$
If $A$ is diagonalizable, then it can be shown that the system asymptotically approaches $\vec{0}$ if all the eigenvalues are less than 1 in modulus.
Suppose $A$ is not diagonalizable. Then under what conditions does the system decay to $\vec{0}$? How can this be shown?
The same result holds: the system converges to $\vec{0}$ for every initial condition if and only if every eigenvalue of $A$ has modulus less than $1$, i.e. $\rho(A) < 1$.
An easy proof follows from looking at the Jordan decomposition $A = Q^{-1}JQ$ of $A$, since then $A^n = Q^{-1}J^nQ$. Another proof goes as follows:
The spectral radius $\rho(A)$ of $A$ (the largest modulus among its eigenvalues) has the following important property: for every $\epsilon > 0$ there exists a vector norm on $\mathbb{R}^n$ whose induced matrix norm satisfies $\|A\| \leq \rho(A) + \epsilon$.
Note that the norm depends on both $A$ and $\epsilon$. See "How to prove that the spectral radius of a linear operator is the infimum over all subordinate norms of the corresponding norm of the operator" for a proof.
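As a quick numerical illustration (a sketch using NumPy; the matrix here is an arbitrary example, not taken from the question): $\rho(A)$ is a lower bound for every induced matrix norm, and the closely related Gelfand formula $\|A^n\|^{1/n} \rightarrow \rho(A)$ shows that powers of $A$ are governed by the spectral radius even when $\|A\| > \rho(A)$.

```python
import numpy as np

# Example matrix: a single 2x2 Jordan block, so it is NOT diagonalizable.
A = np.array([[0.9, 1.0],
              [0.0, 0.9]])

# Spectral radius: largest modulus among the eigenvalues.
rho = max(abs(np.linalg.eigvals(A)))

# rho(A) is a lower bound for every induced matrix norm of A.
for p in (1, 2, np.inf):
    assert rho <= np.linalg.norm(A, p) + 1e-8

# Gelfand's formula: ||A^n||^(1/n) -> rho(A) as n -> infinity,
# so for large n this quantity comes within epsilon of rho(A) = 0.9.
n = 200
gelfand = np.linalg.norm(np.linalg.matrix_power(A, n), 2) ** (1.0 / n)
print(rho, gelfand)
```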
Assume $\rho(A) < 1$, let $\epsilon = \frac{1}{2}(1 -\rho(A)) > 0$, and pick the corresponding norm. For this vector norm the induced matrix norm is $$\|A\| = \sup_{\mathbf{x} \in \mathbb{R}^n \setminus \{0\}} \frac{\|A\mathbf{x}\|}{\|\mathbf{x}\|}$$
Therefore for any $\mathbf{x} \in \mathbb{R}^n$, we have $$\|A\mathbf{x}\| \leq \|A\|\|\mathbf{x}\|$$
Using the relation $\mathbf{x}_{n+1} = A\mathbf{x}_n$, we have $$\|\mathbf{x}_{n+1}\| \leq \|A\|\|\mathbf{x}_n\| \leq (\rho(A) + \epsilon)\|\mathbf{x}_n\| = \frac{1}{2}(1+\rho(A))\|\mathbf{x}_n\|$$
Hence, writing $c = \frac{1}{2}(1+\rho(A)) < 1$, induction gives $\|\mathbf{x}_n\| \leq c^n \|\mathbf{x}_0\| \rightarrow 0$, so the system decays to $\vec{0}$ whenever $\rho(A) < 1$. On the other hand, if $\rho(A) \geq 1$, pick an eigenvalue $\lambda$ with $|\lambda| = \rho(A)$ and an associated eigenvector $\mathbf{v}$, and take $\mathbf{x}_0 = \mathbf{v}$ as the initial condition. Then $\|\mathbf{x}_n\| = |\lambda|^n \|\mathbf{v}\| \geq \|\mathbf{v}\|$, which does not converge to $0$.
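For completeness, the Jordan-decomposition proof mentioned at the start can be sketched as follows. From $A = Q^{-1}JQ$ we get $A^n = Q^{-1}J^nQ$, so it suffices to show $J^n \rightarrow 0$. Writing a $k \times k$ Jordan block as $J_\lambda = \lambda I + N$ with $N$ nilpotent ($N^k = 0$), the binomial theorem gives $$J_\lambda^n = \sum_{m=0}^{k-1} \binom{n}{m} \lambda^{n-m} N^m \>,$$ so every entry of $J_\lambda^n$ has the form $\binom{n}{m}\lambda^{n-m}$, a polynomial in $n$ times a geometric factor. If $|\lambda| < 1$ for every eigenvalue, each such entry tends to $0$, hence $J^n \rightarrow 0$ and $\mathbf{x}_n = A^n \mathbf{x}_0 \rightarrow \vec{0}$.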
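Both cases can be checked numerically (a sketch with NumPy; the matrices are arbitrary examples chosen to be non-diagonalizable):

```python
import numpy as np

def iterate_norms(A, x0, steps):
    """Return ||x_n|| for n = 0..steps under x_{n+1} = A x_n."""
    x = np.array(x0, dtype=float)
    norms = [np.linalg.norm(x)]
    for _ in range(steps):
        x = A @ x
        norms.append(np.linalg.norm(x))
    return norms

# Non-diagonalizable with rho(A) = 0.9 < 1: the norm grows at first
# (the off-diagonal entry contributes an n * 0.9^(n-1) term),
# but it eventually decays to 0.
A = np.array([[0.9, 1.0],
              [0.0, 0.9]])
decay = iterate_norms(A, [1.0, 1.0], 300)

# Non-diagonalizable with rho(B) = 1: starting from the eigenvector
# (1, 0), the iterates never shrink, so there is no convergence to 0.
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])
stay = iterate_norms(B, [1.0, 0.0], 300)
```

Note the transient growth in the first case: a non-diagonalizable system with $\rho(A) < 1$ still decays, but not necessarily monotonically.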