How to uncouple differential equations


Consider the following equations: $$ \dot{u}_i(t)=-\mu_i u_i(t) + \sum_{j\neq i} J_{ij}u_j(t) \quad \text{with }1\leq i\leq N $$ Although this is a linear system, it is a real pain to solve when $N$ becomes large. Is there a smart change of variables that uncouples these equations so they can be solved independently?
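For concreteness, here is a small numerical sketch of the system (all sizes and values are illustrative, not from the question): since the system is linear, $\vec{u}(t)=e^{At}\vec{u}(0)$ with $A$ the full coefficient matrix, which we can cross-check against a crude direct integration.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
N = 5
mu = rng.uniform(1.0, 2.0, size=N)        # decay rates mu_i (illustrative)
J = rng.normal(0.0, 0.1, size=(N, N))     # coupling matrix, zero diagonal
np.fill_diagonal(J, 0.0)

A = -np.diag(mu) + J                      # du/dt = A u in matrix form
u0 = rng.normal(size=N)

t = 0.7
u_exact = expm(A * t) @ u0                # closed-form solution u(t) = e^{At} u(0)

# cross-check with a crude forward-Euler integration
u = u0.copy()
dt = 1e-4
for _ in range(int(round(t / dt))):
    u = u + dt * (A @ u)
print(np.allclose(u, u_exact, atol=1e-2))  # True
```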

My attempt:

Assume $J$ has a complete biorthogonal set of right eigenvectors $\{\vec{x}^{(k)}\}$ and left eigenvectors $\{\vec{y}^{(k)}\}$; then we can write $\vec{u}(t)=\sum_k \sigma_k(t)\vec{x}^{(k)}$ for some coefficients $\sigma_k(t)$. Thus:

\begin{equation} \sum_k \dot{\sigma}_k(t) x^{(k)}_i = -\mu_i\sum_k \sigma_k(t) x^{(k)}_i + \sum_{j,k,l}\lambda_l \sigma_k(t) x^{(l)}_i y^{(l)}_j x^{(k)}_j \end{equation} where we use the spectral decomposition: \begin{equation} J_{ij}=\sum_{l=1}^N \lambda_l x_i^{(l)}y_j^{(l)} \end{equation} with $\mathbf{x}$ and $\mathbf{y}$ the right and left normalised eigenvectors: \begin{equation} \sum_i x_i^{(m)}y_i^{(n)}=\delta_{mn} \end{equation} Thus: \begin{equation} \sum_k \dot{\sigma}_k(t) x^{(k)}_i = -\mu_i\sum_k \sigma_k(t) x^{(k)}_i + \sum_{k,l}\lambda_l \sigma_k(t) x^{(l)}_i \delta_{kl} \end{equation} \begin{equation} \sum_k \dot{\sigma}_k(t) x^{(k)}_i = -\mu_i\sum_k \sigma_k(t) x^{(k)}_i + \sum_{k}\lambda_k \sigma_k(t) x^{(k)}_i \end{equation} \begin{equation} \boxed{\sum_k \dot{\sigma}_k(t) x^{(k)}_i = \sum_k (\lambda_k-\mu_i) \sigma_k(t) x^{(k)}_i} \end{equation}

Trying to uncouple the equations by multiplying by the left eigenvector $y^{(l)}$ and summing over $i$: \begin{equation} \sum_{k,i} \dot{\sigma}_k(t) x^{(k)}_i y^{(l)}_i = \sum_{k,i} (\lambda_k-\mu_i) \sigma_k(t) x^{(k)}_i y^{(l)}_i \end{equation} \begin{equation} \dot{\sigma}_l(t) = \sum_{k,i} (\lambda_k-\mu_i) \sigma_k(t) x^{(k)}_i y^{(l)}_i \end{equation} \begin{equation} \dot{\sigma}_l(t) = \sum_{i,k} \lambda_k\sigma_k(t) x^{(k)}_i y^{(l)}_i-\sum_{k,i}\mu_i \sigma_k(t) x^{(k)}_i y^{(l)}_i \end{equation} \begin{equation} \dot{\sigma}_l(t) = \sum_{k} \lambda_k\sigma_k(t) \delta_{kl}-\sum_{k,i}\mu_i \sigma_k(t) x^{(k)}_i y^{(l)}_i \end{equation} \begin{equation} \dot{\sigma}_l(t) = \lambda_l\sigma_l(t) -\sum_{k,i}\mu_i \sigma_k(t) x^{(k)}_i y^{(l)}_i \end{equation}

This is exactly my starting point again, so no progress...

Any hints are really appreciated! Thank you.

Best answer

I wouldn't call it zero progress. For example, if all the $\mu_i$ are equal to some $\mu$, then using the relation $\sum_i x_i^{(m)}y_i^{(n)}=\delta_{mn}$ you obtain \begin{equation} \dot{\sigma}_l(t) = \lambda_l\sigma_l(t) -\sum_{k,i}\mu \sigma_k(t) x^{(k)}_i y^{(l)}_i = \lambda_l\sigma_l(t) -\sum_{k} \sigma_k(t) \underbrace{\sum_i \mu x^{(k)}_i y^{(l)}_i}_{\mu \delta_{kl}} \end{equation} i.e. \begin{equation} \dot{\sigma}_l(t) = \lambda_l\sigma_l(t) -\mu \sigma_l(t). \end{equation}
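A quick numerical check of this special case (sizes and values are illustrative): when all $\mu_i=\mu$, every eigenvector of $J$ is also an eigenvector of $-\mu I + J$, with shifted eigenvalue $\lambda_k - \mu$, so the modes decouple exactly as above.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5
mu = 1.3                          # all mu_i equal to the same mu
J = rng.normal(size=(N, N))
np.fill_diagonal(J, 0.0)

lam, X = np.linalg.eig(J)         # columns of X are eigenvectors x^{(k)} of J
A = -mu * np.eye(N) + J

# A x^{(k)} = (lambda_k - mu) x^{(k)}: each mode decays with rate lambda_k - mu
print(np.allclose(A @ X, X * (lam - mu)))  # True
```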


To see what happened, we can rewrite your computation in matrix form. With $$J= \begin{pmatrix} 0 & J_{12}& J_{13} & \ldots & J_{1n}\\ J_{21} & 0 & J_{23} & \ldots & J_{2n} \\ J_{31} & J_{32} &0& \ldots & J_{3n} \\ \vdots & & & \ddots& \vdots \\ J_{n1} & J_{n2} &J_{n3}&\ldots &0 \end{pmatrix}$$ and $$D= \begin{pmatrix} \mu_1 & 0& 0 & \ldots & 0\\ 0 & \mu_2 & 0 & \ldots & 0 \\ 0 & 0&\mu_3& \ldots & 0 \\ \vdots & & & \ddots& \vdots \\ 0 & 0 &0&\ldots & \mu_n \end{pmatrix}$$ the system reads $$\frac{d}{dt}\vec{u}(t) = (-D+J)\vec{u}(t).$$
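A small sanity check (illustrative values) that the matrix form $(-D+J)\vec{u}$ reproduces the componentwise right-hand side $-\mu_i u_i + \sum_{j\neq i} J_{ij} u_j$:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4
mu = rng.uniform(0.5, 1.5, size=N)
J = rng.normal(size=(N, N))
np.fill_diagonal(J, 0.0)
u = rng.normal(size=N)

# componentwise RHS: -mu_i u_i + sum_{j != i} J_ij u_j
rhs_component = np.array([
    -mu[i] * u[i] + sum(J[i, j] * u[j] for j in range(N) if j != i)
    for i in range(N)
])

# matrix form: (-D + J) u  (valid because J has zero diagonal)
D = np.diag(mu)
rhs_matrix = (-D + J) @ u

print(np.allclose(rhs_component, rhs_matrix))  # True
```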

Your computation was equivalent to diagonalizing $J$, i.e. choosing $P$ such that $$P J P^{-1}= \begin{pmatrix} \lambda_1 & & &\\ & \ddots& \\ & & \lambda_n \end{pmatrix}$$ in order to write $$\frac{d}{dt}(P\vec{u})(t)=P\frac{d}{dt}\vec{u}(t) = P(-D+J)P^{-1} P\vec{u}(t)=(-PDP^{-1}+PJP^{-1}) (P\vec{u}(t)).$$ With $\vec{\sigma} = P \vec{u}$ this leads to your last equation: $$\left(\frac{d}{dt} \vec{\sigma}(t) \right)_l =\left(P JP^{-1} \vec{\sigma} \right)_l-\left(P DP^{-1} \vec{\sigma} \right)_l = \lambda_l \sigma_l-\sum_k \underbrace{\left( P D P^{-1}\right)_{lk}}_{\sum_i \mu_i x_i^{(k)} y_i^{(l)}} \sigma_k. $$

So what you gained by simplifying $J$ was lost by "complexifying" $D$: a basis which diagonalizes $J$ does not necessarily diagonalize $D$ (except in special cases such as $D=\mu\, \mathrm{Id}$, which is what happened above).


We see, as Charles Hudgins pointed out in the comments, that a better method is to diagonalize the matrix $-D+J$ directly. If this is possible, i.e. if there is a $Q$ such that $$Q(-D+J)Q^{-1} = \begin{pmatrix} \eta_1 & & &\\ & \ddots& \\ & & \eta_n \end{pmatrix}$$ then the equation becomes simply $$ \frac{d}{dt}(Q \vec{u})_l(t) = \eta_l (Q \vec{u})_l(t).$$
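A sketch of this approach with NumPy (illustrative data; a generic random $-D+J$ is diagonalizable with probability 1): diagonalize the full matrix, evolve each mode independently by $e^{\eta_l t}$, and verify against the matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
N = 5
mu = rng.uniform(1.0, 2.0, size=N)
J = rng.normal(0.0, 0.2, size=(N, N))
np.fill_diagonal(J, 0.0)
A = -np.diag(mu) + J              # the full matrix -D + J

# eig gives A @ V = V @ diag(eta); columns of V play the role of Q^{-1}
eta, V = np.linalg.eig(A)

u0 = rng.normal(size=N)
t = 0.5

# decoupled modes: sigma_l(t) = exp(eta_l t) sigma_l(0), with sigma = V^{-1} u
sigma0 = np.linalg.solve(V, u0.astype(complex))
u_t = (V @ (np.exp(eta * t) * sigma0)).real   # A is real, so u(t) is real

print(np.allclose(u_t, expm(A * t) @ u0))  # True
```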

Unfortunately, not all matrices can be diagonalized. Over $\mathbb{C}$ we can at least obtain a triangular system.

But the best general method, as noted in the comments, is to use the Jordan normal form, which makes the matrix exponential easy to compute.
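As a minimal illustration of the non-diagonalizable case: for a single $2\times 2$ Jordan block $A=\begin{pmatrix}\lambda & 1\\ 0 & \lambda\end{pmatrix}$, Jordan theory gives the closed form $e^{At}=e^{\lambda t}\begin{pmatrix}1 & t\\ 0 & 1\end{pmatrix}$, which we can check against a general-purpose matrix exponential (the polynomial-in-$t$ factor is exactly what diagonalization cannot produce).

```python
import numpy as np
from scipy.linalg import expm

# A defective (non-diagonalizable) matrix: a single 2x2 Jordan block
lam = -1.0
A = np.array([[lam, 1.0],
              [0.0, lam]])

t = 2.0
# Jordan theory gives exp(A t) = e^{lam t} [[1, t], [0, 1]] in closed form
expected = np.exp(lam * t) * np.array([[1.0, t],
                                       [0.0, 1.0]])
print(np.allclose(expm(A * t), expected))  # True
```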