Why do eigenvalues solve difference equations?


I'm quite confused as to how this whole thing works and am struggling to make the connection between 1st order difference and 2nd order difference equations. Given a first or second order equation, I am told that:

$$x_n = C\lambda^n$$

for some constant C determined by initial conditions is a solution. Where does this come from? I understand how to use it to solve a system of first order or a second order equation but its inspiration eludes me. Any guidance would be helpful.


The linear algebra approach is a bit involved, but it makes it clearer where these solutions actually come from. It lets you construct the solution of any linear difference equation from scratch, and along the way solutions of the form $x_n=\lambda^n$ appear naturally.

Suppose we have a single variable linear difference equation of order $k$, with $x_0,x_1,...,x_{k-1}$ given:

$$x_{n+k}=b_0x_n+b_1x_{n+1}+\dots+b_{k-1}x_{n+k-1}$$

We can rewrite this as a 1st order system using a vector $\vec v_n$, which I'll define as:

$$\vec v_n=\begin{bmatrix}x_n\\x_{n+1}\\\vdots\\x_{n+k-1}\end{bmatrix}$$

If we want to write a recurrence relation for $\vec v_n$, we notice that the next vector $\vec v_{n+1}$ has the same values as $\vec v_n$, but shifted upward by 1. We can obtain the last entry by solving our system for $x_{n+k}$, and write the whole update as a matrix.

$$\begin{bmatrix}x_{n+1}\\x_{n+2}\\\vdots\\x_{n+k}\end{bmatrix} =\begin{bmatrix} 0 & 1 & 0 & \dots & 0\\ 0 & 0 & 1 & \dots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \dots & 1 \\ b_0 & b_1 & b_2 & \dots & b_{k-1} \end{bmatrix} \begin{bmatrix}x_n\\x_{n+1}\\\vdots\\x_{n+k-1}\end{bmatrix}$$

We can refer to the matrix above as $A$. Notice that our initial condition is now just $\vec v_0$, and the whole system collapses to:

$$\vec v_{n+1}=A\vec v_n$$
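To make this concrete, here is a small numerical sanity check using the Fibonacci recurrence $x_{n+2}=x_n+x_{n+1}$ as a running example (my own illustration, not part of the question):

```python
import numpy as np

# Hypothetical example: Fibonacci, x_{n+2} = x_n + x_{n+1},
# i.e. k = 2 with b_0 = b_1 = 1, giving the companion matrix A.
A = np.array([[0, 1],
              [1, 1]])

v = np.array([0, 1])  # v_0 = [x_0, x_1]
for _ in range(8):
    v = A @ v         # one step of v_{n+1} = A v_n

print(v)  # [21 34], i.e. [x_8, x_9]
```

Each multiplication by $A$ shifts the window of values up by one, exactly as described above.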

We can solve this under the assumption that $A$ is diagonalizable. If we want to consider general cases, we'll need to extend this process to generalized eigenvectors and Jordan Normal Forms. For now though, suppose $A=PDP^{-1}$, where $P$ is some invertible matrix and $D$ is a diagonal matrix whose values on the diagonal are the eigenvalues of $A$. We can make the change of variables $\vec u = P^{-1}\vec v$ to simplify.

$$P^{-1}\vec v_{n+1}=DP^{-1}\vec v_n$$

$$\vec u_{n+1}=D\vec u_n$$

Since $D$ is diagonal, this system is entirely decoupled. Let $u_n^i$ be the $i$th entry of $\vec u_n$.

$$u_{n+1}^1=\lambda_1u_n^1\\ u_{n+1}^2=\lambda_2u_n^2\\ \vdots\\ u_{n+1}^k=\lambda_ku_n^k$$

where the $\lambda_i$ are the eigenvalues of $A$. Each of these equations has a solution of the form:

$$u_n^1=c_1\lambda_1^n\\ u_n^2=c_2\lambda_2^n\\ \vdots\\ u_n^k=c_k\lambda_k^n$$
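Continuing the hypothetical Fibonacci example, we can carry out this whole program numerically, with `numpy.linalg.eig` standing in for the hand diagonalization:

```python
import numpy as np

# A = P D P^{-1}; u_0 = P^{-1} v_0 holds the constants c_i,
# and x_n is the first component of v_n = P D^n u_0,
# i.e. x_n = sum_i c_i * lam_i^n * P[0, i].
A = np.array([[0.0, 1.0],
              [1.0, 1.0]])
lam, P = np.linalg.eig(A)            # eigenvalues (1 ± sqrt(5))/2
c = np.linalg.solve(P, [0.0, 1.0])   # u_0 = P^{-1} v_0 with v_0 = [x_0, x_1]

def x(n):
    return float((P[0] * c * lam**n).sum())

print([round(x(n)) for n in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

The constants $c_i$ come straight from the initial condition, and $x_n$ is recovered as a linear combination of the $\lambda_i^n$, which is the point of the whole construction.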

The components of $\vec v_n$, which include $x_n$, will be linear combinations of these exponential sequences. In fact, any such linear combination, including the single term $x_n=\lambda_i^n$, is a solution for some particular choice of initial conditions. This is precisely why we can assume a solution of this form will work in systems of any order.
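This agrees with the direct substitution check: plugging the ansatz $x_n=\lambda^n$ into the recurrence and dividing through by $\lambda^n$ gives

$$\lambda^{n+k}=b_0\lambda^n+b_1\lambda^{n+1}+\dots+b_{k-1}\lambda^{n+k-1} \quad\Longrightarrow\quad \lambda^{k}=b_0+b_1\lambda+\dots+b_{k-1}\lambda^{k-1},$$

which is (up to sign) the characteristic polynomial $\det(A-\lambda I)$ of the companion matrix $A$, so its roots are exactly the eigenvalues $\lambda_i$ above.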

Of course, this ansatz only makes sense for linear systems, and furthermore, the matrix $A$ will not always be diagonalizable. That is why several special cases also need to be considered; they correspond to other ways of decomposing non-diagonalizable matrices (the Jordan form), where repeated eigenvalues contribute solutions of the form $n^j\lambda^n$.