What happens to a system of difference equations when A is non-diagonalizable?


Suppose I have a system of linear difference equations

$$ \mathbf{x}_{n+1} = A \mathbf{x}_n \>.$$

If $A$ is diagonalizable, then it can be shown that the system asymptotically approaches $\vec{0}$ if all the eigenvalues are less than 1 in modulus.

Suppose $A$ is not diagonalizable. Then under what conditions does the system decay to $\vec{0}$? How can this be shown?


BEST ANSWER

The same result applies:

The sequence $\mathbf{x}_n$ converges to $0$ for every initial vector $\mathbf{x}_0$ if and only if the spectral radius of $A$ satisfies $\rho(A) = \max\{|\lambda_i| : i = 1, 2, \ldots, n\} < 1$, where $\lambda_1, \ldots, \lambda_n$ are the eigenvalues of $A$.

An easy proof follows from looking at the Jordan decomposition $A = Q^{-1}JQ$ of $A$. Another proof goes as follows:

The spectral radius $\rho(A)$ of $A$ has the following important property:

For any $\epsilon > 0$ and $A \in \mathbb{M}_{n \times n}(\mathbb{C})$, there exists an induced norm $\|\cdot\|_{A,\epsilon}$ such that $\|A\|_{A,\epsilon} \leq \rho(A) + \epsilon$.

Note that the norm depends on both $A$ and $\epsilon$. See the question "How to prove that the spectral radius of a linear operator is the infimum over all subordinate norms of the corresponding norm of the operator" for a proof.

Assume $\rho(A) < 1$, and let $\epsilon = \frac{1}{2}(1 - \rho(A)) > 0$. Pick the corresponding norm. Using this vector norm and its induced matrix norm we have $$\|A\| = \sup_{\mathbf{x} \in \mathbb{R}^n \setminus \{0\}} \frac{\|A\mathbf{x}\|}{\|\mathbf{x}\|}$$

Therefore for any $\mathbf{x} \in \mathbb{R}^n$, we have $$\|A\mathbf{x}\| \leq \|A\|\|\mathbf{x}\|$$

Using the relation $\mathbf{x}_{n+1} = A\mathbf{x}_n$, we have $$\|\mathbf{x}_{n+1}\| \leq \|A\|\|\mathbf{x}_n\| \leq (\rho(A) + \epsilon)\|\mathbf{x}_n\| = \frac{1}{2}(1+\rho(A))\|\mathbf{x}_n\|$$

Hence if $\rho(A) < 1$, we have $\|\mathbf{x}_n\| \rightarrow 0$. On the other hand, if $\rho(A) \geq 1$, pick an eigenvector corresponding to an eigenvalue $\lambda$ with $|\lambda| = \rho(A)$ and take it as the initial vector; then $\|\mathbf{x}_n\| = \rho(A)^n \|\mathbf{x}_0\|$, which does not converge to $0$.
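As a quick numerical sanity check (a sketch I am adding, with hypothetical example values, not part of the original answer), one can iterate $\mathbf{x}_{n+1} = A\mathbf{x}_n$ with a non-diagonalizable $A$ whose spectral radius is below $1$, e.g. a $2 \times 2$ Jordan block, and watch the norm decay:

```python
# Sketch: iterate x_{n+1} = A x_n for a non-diagonalizable A with rho(A) < 1.
# A is a 2x2 Jordan block with eigenvalue 0.9 (hypothetical example values).

def mat_vec(A, x):
    """Multiply a 2x2 matrix by a 2-vector."""
    return [A[0][0]*x[0] + A[0][1]*x[1],
            A[1][0]*x[0] + A[1][1]*x[1]]

def norm(x):
    """Euclidean norm of a 2-vector."""
    return (x[0]**2 + x[1]**2) ** 0.5

A = [[0.9, 1.0],
     [0.0, 0.9]]   # Jordan block: not diagonalizable, rho(A) = 0.9 < 1
x = [1.0, 1.0]     # arbitrary initial vector

norms = []
for _ in range(200):
    x = mat_vec(A, x)
    norms.append(norm(x))

# The norm grows at first (the off-diagonal 1 pushes it up),
# but the geometric factor 0.9^n eventually wins and it decays to 0.
print(norms[0], norms[5], norms[199])
```

The transient growth in the first few iterates illustrates why the proof needs a carefully chosen norm: in the standard Euclidean norm, $\|A\|$ can exceed $1$ even when $\rho(A) < 1$.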


Suppose $J(\lambda) = \begin{bmatrix} \lambda & 1 & & \\ & \lambda & \ddots & \\ & & \ddots & 1 \\ & & & \lambda \end{bmatrix}$ is an $n \times n$ Jordan block with eigenvalue $\lambda$.

It is not hard to see that $J(0)^i=0$, if $i \ge n$, where $n$ is the number of columns of $J(0)$.

Then, with $i \ge n$, we have $J(\lambda)^i = (J(0)+\lambda I)^i = \sum_{k=0}^{n-1} \binom{i}{k}\lambda^{i-k} J(0)^k$.
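To check this formula concretely (a sketch I am adding, with an assumed block size and eigenvalue, not part of the original answer), one can compare direct matrix powering of a $3 \times 3$ Jordan block against the binomial expansion:

```python
from math import comb

n = 3        # size of the Jordan block (assumed for illustration)
lam = 0.5    # eigenvalue (assumed for illustration)

def mat_mul(A, B):
    """Multiply two n x n matrices."""
    return [[sum(A[r][k] * B[k][c] for k in range(n)) for c in range(n)]
            for r in range(n)]

def mat_pow(A, p):
    """Compute A^p by repeated multiplication (A^0 = I)."""
    R = [[1.0 if r == c else 0.0 for c in range(n)] for r in range(n)]
    for _ in range(p):
        R = mat_mul(R, A)
    return R

# J(0): the nilpotent superdiagonal part, so J(lam) = J(0) + lam*I.
J0 = [[1.0 if c == r + 1 else 0.0 for c in range(n)] for r in range(n)]
Jlam = [[J0[r][c] + (lam if r == c else 0.0) for c in range(n)]
        for r in range(n)]

i = 10  # any exponent i >= n

# Binomial formula: J(lam)^i = sum_{k=0}^{n-1} C(i,k) lam^(i-k) J(0)^k
formula = [[0.0] * n for _ in range(n)]
for k in range(n):
    J0k = mat_pow(J0, k)
    for r in range(n):
        for c in range(n):
            formula[r][c] += comb(i, k) * lam**(i - k) * J0k[r][c]

direct = mat_pow(Jlam, i)
print(direct[0][n - 1], formula[0][n - 1])
```

The sum stops at $k = n-1$ because $J(0)^k = 0$ for $k \ge n$, which the code's nilpotent part exhibits directly.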

It is not hard to see that if $|\lambda| < 1$, then $J(\lambda)^i \to 0$: each entry has the form $\binom{i}{k}\lambda^{i-k}$ with $k \le n-1$, and the binomial coefficient grows only polynomially in $i$ while $|\lambda|^{i-k}$ decays geometrically.

The result follows for a general $A$ by taking its Jordan form $A = P J P^{-1}$: then $A^i = P J^i P^{-1}$, and every Jordan block of $J$ has eigenvalue of modulus less than $1$ precisely when $\rho(A) < 1$.

Finally, note that $A^k = 0$ for some finite $k$ if and only if $A$ is nilpotent, i.e. all eigenvalues of $A$ are zero; in that case the iterates reach $\vec{0}$ exactly after finitely many steps, rather than only in the limit.