Why do solutions tend to move towards the eigen-span in a differential equation?


Why does the long-time behavior involve the eigenvectors? It's well understood that when a point is on the eigenspan, it stays on that span. But any other point, away from the span, tends to move towards it.

Is this the case for any arbitrary differential equation, or is it a special property of linear ordinary differential equations? If so, why?

Specifically, in this video there's an example at 11:00 with the populations of rabbits and foxes. Any initial population eventually settles onto one of the eigenspans.


Edit: I seem to have mistakenly assumed that solutions move towards the eigenspan, but this is not true (points on the left move towards the y-axis). I don't know how to rephrase the question; perhaps: "What is the relation between solutions and the eigenspans?"

Best answer:

Analytically

We have an $n$-dimensional linear time-invariant ordinary differential equation, $$ \dfrac{d}{dt}x(t) = Ax(t) \tag*{$\forall{t \in \mathbb{R}}$} $$

Suppose that the linear algebraic operator $A$ has a complete set of linearly independent eigenvectors, $$ Av_i = \lambda_i v_i \tag*{$\forall i \in \{1, 2, \ldots, n\}$} $$

Then we can uniquely express the vector $x(t)$ on this eigenbasis using some time-dependent coefficients $q_i(t)$, $$ x(t) = \sum_{i=1}^n q_i(t) v_i \tag*{$\forall{t \in \mathbb{R}}$} $$
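This expansion is easy to check numerically. Below is a minimal sketch with NumPy; the matrix `A` and the vector `x` are arbitrary illustrative values, not taken from the video.

```python
import numpy as np

# Hypothetical example matrix with a complete set of eigenvectors.
A = np.array([[1.0, 2.0],
              [2.0, 1.0]])

# Columns of V are the eigenvectors v_i; eigvals holds the lambda_i.
eigvals, V = np.linalg.eig(A)

# Because V is invertible, any x has unique coefficients q with x = V @ q,
# i.e. x is a unique combination of the eigenvectors.
x = np.array([3.0, -1.0])
q = np.linalg.solve(V, x)          # coefficients q_i on the eigenbasis

# Reconstruction: sum_i q_i * v_i recovers x.
x_reconstructed = V @ q
print(np.allclose(x_reconstructed, x))  # True
```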

Substituting this into our original problem and invoking linearity, \begin{align} \dfrac{d}{dt}\sum_{i=1}^n q_i(t) v_i &= A \sum_{i=1}^n q_i(t) v_i \tag*{$\forall{t \in \mathbb{R}}$} \\ \sum_{i=1}^n \dfrac{d}{dt}q_i(t) v_i &= \sum_{i=1}^n \lambda_i q_i(t) v_i \tag*{$\forall{t \in \mathbb{R}}$} \\ \sum_{i=1}^n \Big{(}\dfrac{d}{dt}q_i(t) - \lambda_i q_i(t)\Big{)} v_i &= 0 \tag*{$\forall{t \in \mathbb{R}}$} \end{align}

Since the $v_i$ are linearly independent, the above implies $n$ decoupled scalar equations, $$ \dfrac{d}{dt}q_i(t) = \lambda_i q_i(t) \tag*{$\forall\ {t \in \mathbb{R}},\ i \in \{1, 2, \ldots, n\}$} $$

which are each differentially separable and can be solved by elementary integration to find, $$ q_i(t) = c_i e^{\lambda_i t} \tag*{$\forall\ {t \in \mathbb{R}},\ i \in \{1, 2, \ldots, n\}$} $$

for some constants $c_i$ determined by initial conditions. Thus the general solution is, $$ x(t) = \sum_{i=1}^n c_i e^{\lambda_i t} v_i \tag*{$\forall{t \in \mathbb{R}}$} $$
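As a sanity check, the eigenbasis formula can be compared against the standard matrix-exponential solution $x(t) = e^{At}x(0)$. This is a sketch with a hypothetical matrix `A` and initial condition `x0`:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])        # hypothetical example (eigenvalues -1, -2)
x0 = np.array([1.0, 1.0])
t = 0.7

eigvals, V = np.linalg.eig(A)       # A v_i = lambda_i v_i
c = np.linalg.solve(V, x0)          # constants c_i from x(0) = sum_i c_i v_i

# General solution: x(t) = sum_i c_i e^{lambda_i t} v_i
x_t = V @ (c * np.exp(eigvals * t))

# Cross-check against the matrix exponential x(t) = e^{At} x0.
print(np.allclose(x_t, expm(A * t) @ x0))  # True
```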

This analytically shows how the asymptotic behavior of the solution for any initial condition (any choice of $c_i$) is related to the eigenvectors $v_i$. For example, suppose one eigenvalue $\lambda_1 > 0$ and all the others $\lambda_{i \neq 1} < 0$. As $t \to \infty$, $x(t)$ tends towards (is dominated by) the $v_1$ direction, since $e^{\lambda_1 t} \to \infty$ while $e^{\lambda_{i \neq 1} t} \to 0$. Try considering other possibilities for the eigenvalues (zero, complex) to better understand this well-known solution.
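The dominance of the $v_1$ direction can be seen numerically. In this sketch I build a hypothetical matrix with eigenvalues $+1$ and $-1$ and check that, for a generic initial condition, the normalized solution aligns with the eigenvector of the positive eigenvalue:

```python
import numpy as np

# Hypothetical diagonalizable matrix: eigenvalue +1 with eigenvector (1, 1),
# eigenvalue -1 with eigenvector (1, -1).
V = np.array([[1.0, 1.0],
              [1.0, -1.0]])
lam = np.array([1.0, -1.0])
A = V @ np.diag(lam) @ np.linalg.inv(V)

x0 = np.array([2.0, 0.5])           # generic initial condition
c = np.linalg.solve(V, x0)          # eigenbasis coefficients

t = 20.0
x_t = V @ (c * np.exp(lam * t))     # x(t) = sum_i c_i e^{lambda_i t} v_i

# Compare the direction of x(t) with the dominant eigenvector v_1 = (1, 1).
direction = x_t / np.linalg.norm(x_t)
v1 = V[:, 0] / np.linalg.norm(V[:, 0])
print(abs(direction @ v1))          # ~1.0: x(t) has aligned with v_1
```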

Intuitively

The video you linked makes it very clear how the span of each eigenvector is an invariant subspace. That is, if $x(t_1)$ is an element of one of these subspaces, then $x(t)$ will be an element of the same subspace $\forall t$.
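Invariance is easy to verify numerically: if the initial condition lies exactly on an eigenvector, the solution $e^{At}v = e^{\lambda t}v$ remains a scalar multiple of that eigenvector for all time. A sketch with a hypothetical matrix:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # hypothetical symmetric example
eigvals, V = np.linalg.eig(A)
v = V[:, 0]                          # start exactly on an eigenvector

for t in [0.1, 1.0, 3.0]:
    x_t = expm(A * t) @ v            # evolve the linear ODE to time t
    # x(t) = e^{lambda t} v: still a multiple of v, so the span is invariant.
    print(np.allclose(x_t, np.exp(eigvals[0] * t) * v))  # True each time
```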

Now consider two other properties of this kind of benign differential equation that we will take for granted: the solutions are smooth and unambiguous. Informally, smooth means that a solution curve $x$ doesn't discontinuously jump from place to place, and unambiguous means that for each $x(t)$ there is only one value for "where to go next", $\dfrac{dx}{dt}(t)$.

Imagine a solution that intersects one of these invariant subspaces (eigenvectors). We know that from then on, the solution has to move along the eigenvector. But we also know that the solution can't have a discontinuous kink, nor can it have two distinct first derivatives at the intersection point. Therefore, before intersecting the eigenvector, it needed to already be infinitesimally close to parallel to the eigenvector. Thus, we see that the solutions flow nicely "along" the eigenvectors, gradually easing into them without ever disrespectfully crossing them.

Generally

This is all for sure a property of linear time-invariant ordinary differential equations with a complete set of eigenvectors, as we have just proved. I will state without detail (else teach a whole linear systems course) that the eigen-properties of any linear differential equation (time-varying, infinite-dimensional, whatever) are extremely important to understanding the solutions.

However, for nonlinear equations, I'm not sure that the idea "eigenvector" makes much sense because the idea "vector" doesn't make much sense in a nonlinear context (people often say "vector" when they actually mean "tuple" regarding the domain of some nonlinear $f: \mathbb{R}^n \to \mathbb{R}^n$). As you can see, all the manipulations we went through relied heavily on linearity. Fortunately, linear systems are ubiquitously useful in application.