A question concerning solutions of systems of linear ODE


I have a very stupid question concerning systems of linear differential equations with smooth coefficients. We will work in $\mathbb{R}^{n}$. Suppose we have some $k$-dimensional vector subspace $P \subseteq \mathbb{R}^{n}$ and its fixed basis $(v_{1},\dots,v_{k})$.

Next, suppose that for each $i \in \{1,\dots,k\}$ and each $t \in \mathbb{R}$, we have a vector $w_{i}(t) \in \mathbb{R}^{n}$, such that it depends smoothly on $t$, and it satisfies the "initial condition" $w_{i}(0) = v_{i}$. In other words $(w_{1}(0),\dots,w_{k}(0))$ is exactly our fixed basis of $P$.

Finally, we assume that one can find $k \times k$ smooth functions $f_{ij}(t)$ such that, for each $i \in \{1,\dots,k\}$, $w_{i}$ satisfies the differential equation \begin{equation} \frac{d}{dt} w_{i}(t) = \sum_{j=1}^{k} f_{ij}(t) \cdot w_{j}(t) \end{equation} for all $t \in \mathbb{R}$. Now to the actual question: from these assumptions, the following two claims are supposed to follow.

Claim 1: In a paper by Lobry, page 580 at the top, it is claimed that this implies the existence of smooth functions $g_{ij}(t)$ such that \begin{equation} v_{i} = w_{i}(0) = \sum_{j=1}^{k} g_{ij}(t) \cdot w_{j}(t) \end{equation} for all $t \in \mathbb{R}$ and all $i \in \{1,\dots,k\}$.

Claim 2: In a paper by Sussmann, page 184 at the top, it is claimed that since $(v_{1},\dots,v_{k})$ is a basis of the subspace $P$, so is $(w_{1}(t),\dots,w_{k}(t))$ for every $t \in \mathbb{R}$.

Clearly, the statement in Claim 2 immediately implies the statement in Claim 1. I suppose both claims should be easy to prove using the theory of ordinary differential equations, but I am not able to do it myself.
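(To make the implication Claim 2 $\Rightarrow$ Claim 1 concrete: if the $w_{j}(t)$ stay a basis, then $G(t) = V\,M(t)^{-1}$ does the job, where row $i$ of $M(t)$ is $w_{i}(t)$ and row $i$ of $V$ is $v_{i} = w_{i}(0)$; smoothness of the $g_{ij}$ follows from Cramer's rule. A minimal numerical sketch, with an arbitrary invertible family $M(t)$ chosen purely for illustration, working in $\mathbb{R}^{k}$:)

```python
import numpy as np

# Sketch of Claim 2 => Claim 1: if w_1(t), ..., w_k(t) remain a basis,
# then v_i = sum_j g_{ij}(t) w_j(t) is solved by G(t) = V M(t)^{-1},
# where row i of M(t) is w_i(t) and row i of V is v_i = w_i(0).
# M(t) below is a hypothetical family of invertible matrices (det = e^t).

def M(t):
    # row i is w_i(t); invertible for every t
    return np.array([[np.exp(t), t],
                     [0.0, 1.0]])

V = M(0.0)                      # rows are v_i = w_i(0)
t = 0.7
G = V @ np.linalg.inv(M(t))     # entries g_{ij}(t); smooth in t by Cramer's rule

recovered = G @ M(t)            # row i is sum_j g_{ij}(t) w_j(t)
print(np.allclose(recovered, V))  # True: each v_i is recovered
```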

Interestingly, the papers I am citing are known to contain errors somewhere near those passages, so perhaps one of the statements (or even both) is simply wrong.

There is 1 answer below.


This is a classical result in the theory of systems of ordinary differential equations known as Liouville's formula.

First of all, without loss of generality you can work in $\mathbb{R}^k$. Denote by $\Phi(t)$ the matrix whose columns are the $w_i(t)$. One can calculate $$ (\det \Phi)'(t) = \det \Phi(t)\, \mathrm{tr}\, F(t), $$ where $F$ is the matrix of coefficients of your system. This ODE can be solved explicitly (modulo integration): $$ \det \Phi(t) = \det \Phi(0) \exp\left(\int_{0}^t \mathrm{tr}\, F(s)\, \mathrm{d}s\right). $$ In other words, the vectors $w_i(t)$ are linearly independent if and only if the vectors $w_i(0)$ are.

The last equation is sometimes written (modulo initial conditions) as $$ \det e^{F} = e^{\mathrm{tr}\, F}. $$
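Liouville's formula can also be sanity-checked numerically; here is a minimal sketch for the matrix ODE $\Phi' = F(t)\Phi$, with an arbitrary hypothetical $2 \times 2$ coefficient matrix $F(t)$ (not taken from the papers):

```python
import numpy as np

# Numerical check of Liouville's formula
#   (det Phi)'(t) = det Phi(t) * tr F(t)
# for a hypothetical coefficient matrix F(t) chosen arbitrarily.

def F(t):
    return np.array([[np.sin(t), 1.0],
                     [0.5, np.cos(t)]])

def rk4_step(Phi, t, h):
    # one Runge-Kutta step for the matrix ODE Phi' = F(t) Phi
    k1 = F(t) @ Phi
    k2 = F(t + h/2) @ (Phi + h/2 * k1)
    k3 = F(t + h/2) @ (Phi + h/2 * k2)
    k4 = F(t + h) @ (Phi + h * k3)
    return Phi + h/6 * (k1 + 2*k2 + 2*k3 + k4)

T, n_steps = 1.0, 1000
h = T / n_steps
Phi = np.eye(2)           # Phi(0) = I, so det Phi(0) = 1
trace_integral = 0.0      # accumulates int_0^T tr F(s) ds (midpoint rule)
for i in range(n_steps):
    t = i * h
    trace_integral += h * np.trace(F(t + h/2))
    Phi = rk4_step(Phi, t, h)

lhs = np.linalg.det(Phi)
rhs = np.exp(trace_integral)
print(lhs, rhs)           # agree up to discretization error
```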


edit

I don't see any elegant argument for reducing from a subspace to the full $\mathbb{R}^n$. One way is to use induction in nonstandard analysis. Another way is to fill in $n-k$ equations $w_i' = 0$ for $i = k+1, \ldots, n$ and then look at the structure of the exponential in the matrix solution $\Phi(t) = \Phi(0) \exp\left(\int_0^t F(s)\, \mathrm{d}s\right)$ (valid as written when the matrices $F(t)$ commute with one another; in general one works with the fundamental matrix directly).
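The padding construction can be illustrated numerically. In this sketch (with hypothetical values $k = 2$, $n = 3$, and an arbitrary coefficient block $f_{ij}(t)$), two vectors evolve inside $\mathbb{R}^3$ while a third constant vector pads the system out to a full fundamental matrix; the evolving pair stays inside the plane it spans at $t = 0$, and the padded determinant stays nonzero:

```python
import numpy as np

# Sketch of the padding trick: k = 2 vectors in R^3 evolving by
# w_i' = sum_j f_ij(t) w_j, padded with a constant third vector
# (w_3' = 0). The 2x2 block f_ij below is a hypothetical example.

def F_pad(t):
    Fp = np.zeros((3, 3))          # extended coefficient matrix
    Fp[:2, :2] = [[np.cos(t), -1.0],
                  [2.0, np.sin(t)]]
    return Fp                       # last row zero: w_3' = 0

# columns are w_1(0), w_2(0), w_3(0); w_1, w_2 span the plane z = 0
Psi = np.array([[1.0, 0.0, 0.0],
                [1.0, 1.0, 0.0],
                [0.0, 0.0, 1.0]])

n_steps, T = 1000, 1.0
h = T / n_steps
for i in range(n_steps):
    t = i * h
    # w_i' = sum_j f_ij w_j reads, column-wise, Psi' = Psi F_pad(t)^T
    k1 = Psi @ F_pad(t).T
    k2 = (Psi + h/2 * k1) @ F_pad(t + h/2).T
    k3 = (Psi + h/2 * k2) @ F_pad(t + h/2).T
    k4 = (Psi + h * k3) @ F_pad(t + h).T
    Psi = Psi + h/6 * (k1 + 2*k2 + 2*k3 + k4)

det = np.linalg.det(Psi)                # nonzero, by Liouville's formula
in_plane = np.allclose(Psi[2, :2], 0.0)  # w_1(t), w_2(t) still have z = 0
print(det, in_plane)                    # they remain a basis of the plane
```

Note that the last row of $F_\text{pad}$ being zero is what keeps $w_3$ constant, while the zero last column is what keeps $w_1(t), w_2(t)$ inside the original plane.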