Is there by chance an analytic solution of the following non-linear ODE: $x'(t) = a\, x(t) + b\, x^3 (t)$?


I'm in fact interested in a PDE for which I am trying to build some intuition (roughly, I interpret my PDE as a function of time taking values in a space of functions of the space variables). Does anyone happen to know a solution to the ODE in the title? (Wandering around this site, I have seen some impressive analytic solutions to ODEs.)

As for my line of thought so far: I see that $\enspace x: t\mapsto \frac{C}{\sqrt{t}}$ for $t>0$ satisfies $x' = -\frac{1}{2C^2}\, x^3$ (so $x' = -\frac{1}{2}\, x^3$ for $C=1$). The ODE being non-linear, I guess it is not very helpful to look at solutions of $x' = b x^3$ alone. I also don't think that Laplace or Fourier transforms will work. A series solution (I haven't checked yet...)? My last idea is to use the trick of writing the equation in integral form and inserting it into itself...
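The special solution above can be checked symbolically; a minimal sympy sketch with $C=1$:

```python
# Symbolic check (sympy): x(t) = 1/sqrt(t), t > 0, solves x' = -x^3/2.
import sympy as sp

t = sp.symbols('t', positive=True)
x = 1 / sp.sqrt(t)

# x' + x^3/2 should vanish identically
residual = sp.simplify(sp.diff(x, t) + sp.Rational(1, 2) * x**3)
print(residual)  # 0
```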

Follow-up question: the next step is to consider a system of ODEs of the form $$\dot{\mathbf{X}} = A\cdot \mathbf{X} + B\cdot \mathbf{X}^3 $$ where $\mathbf{X}^3$ denotes the vector with $i^{\text{th}}$ component $x_i^3$. Let us assume also that $A$ is diagonalizable but not $B$ (or at least not in the same basis). For each component of $\mathbf{X}$, the problem then boils down to solving $$ x_i'(t) = a\, x_i(t) + b\, x_i^3 (t) + f(t)$$
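With a forcing term $f(t)$ present there is in general no elementary closed form, but one can integrate numerically for intuition. A minimal sketch with a hand-rolled RK4 step and an illustrative (hypothetical) forcing $f(t)=\sin t$, with $b<0$ so the cubic term is dissipative and the solution stays bounded:

```python
# Numerical sketch (numpy + RK4): x' = a*x + b*x**3 + f(t).
# The forcing f(t) = sin(t) is purely illustrative, not from the problem.
import numpy as np

a, b = -1.0, -0.5          # b < 0: cubic term is dissipative, no blow-up
f = np.sin                  # hypothetical forcing

def rhs(t, x):
    return a * x + b * x**3 + f(t)

def rk4(x0, t0, t1, n):
    """Classical 4th-order Runge-Kutta with n fixed steps."""
    h = (t1 - t0) / n
    t, x = t0, x0
    for _ in range(n):
        k1 = rhs(t, x)
        k2 = rhs(t + h / 2, x + h / 2 * k1)
        k3 = rhs(t + h / 2, x + h / 2 * k2)
        k4 = rhs(t + h, x + h * k3)
        x += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return x

x_end = rk4(1.0, 0.0, 10.0, 2000)
print(x_end)
```

The dissipative sign choice is only one regime; for $b>0$ solutions can blow up in finite time, which is exactly why global-in-time statements below need care.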


2 Answers

Accepted answer:

$$x'(t) = a\, x(t) + b\, x^3 (t)$$ This is a Bernoulli differential equation. Dividing by $x^3$: $$\dfrac {x'(t)}{x^3(t)} = \dfrac a { x^2(t)} + b$$ With the substitution $u=\dfrac 1 {x^2}$, so that $u' = -\dfrac{2x'}{x^3}$, this becomes $$-\dfrac {u'}{2}= au + b$$ which is linear of first order. For $a \neq 0$ it solves to $u(t) = C\,e^{-2at} - \dfrac{b}{a}$, hence $$x(t) = \pm\left( C\,e^{-2at} - \frac{b}{a} \right)^{-1/2}$$ wherever the expression under the square root is positive.
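Solving the linear equation $u' = -2au - 2b$ gives $u(t) = C\,e^{-2at} - b/a$ for $a\neq 0$; a sympy sketch checking that $x = \big(C\,e^{-2at} - b/a\big)^{-1/2}$ indeed satisfies the original ODE:

```python
# Symbolic check (sympy) of the Bernoulli-substitution solution:
# with u = 1/x^2 one gets u' = -2*a*u - 2*b, so u = C*exp(-2*a*t) - b/a,
# and x = u**(-1/2) should satisfy x' = a*x + b*x**3.
import sympy as sp

t = sp.symbols('t')
a, b, C = sp.symbols('a b C', nonzero=True)

x = (C * sp.exp(-2 * a * t) - b / a) ** sp.Rational(-1, 2)
residual = sp.diff(x, t) - a * x - b * x**3

# equals(0) verifies the identity by evaluation at sample points
print(residual.equals(0))  # True
```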

Second answer:

Other ideas for the system of equations: in the one-dimensional case, the crux of the proof is the relation $\left(- \frac{1}{2\ x^2(t)} \right)'= \frac{x'(t)}{x^3(t)}$. In higher dimension there is a priori no multiplication of vectors (although such an operation would just be a linear map $\mu:V \otimes V \to V $). In fact, componentwise multiplication does define a product for which the previous relation is satisfied, and this product can be recast as multiplication of diagonal matrices after identifying vectors with diagonal matrices ($V$ a vector space with basis $\mathcal{B}:=(\mathbf{e}_i)_{i\in I}$; vectors are mapped to "multiplication" operators): $$\mathrm{Identif}_{\mathcal{B}}: \left\lbrace \begin{aligned} V &\to \mathrm{End}(V) \\ \mathbf{X} & \mapsto \mathrm{Diag}(\mathbf{X}):= \mathrm{Diag}(x_i, i\in I) \end{aligned}\right.\qquad \text{e.g. dimension 2}\quad \begin{pmatrix}x_1 \\ x_2 \end{pmatrix}\mapsto \begin{pmatrix}x_1 & 0 \\ 0 & x_2 \end{pmatrix}$$ The problem is now that the action of a matrix $A\in \mathrm{End}(V)$ on a vector $\mathbf{X}\in V$ cannot be written with a matrix of the same size acting on $\mathrm{Diag}(x_i) $, although it is still a linear transformation. (A linear map from the space of $n\times n$ matrices to itself would in general be represented by a matrix of size $n^2\times n^2$.) So let us introduce the notation $\mathcal{A}\big(\mathrm{Diag}(\mathbf{X})\big)= \mathrm{Diag}(A\cdot \mathbf{X}) $. The matrix equation can thus be rewritten $$\dot{\mathbf{X}} = A\cdot \mathbf{X} + B\cdot \mathbf{X}^3 \quad \Longleftrightarrow \quad \mathrm{Diag}(\dot{\mathbf{X}}) = \mathcal{A}\big(\mathrm{Diag}(\mathbf{X})\big) + \mathcal{B}\big(\mathrm{Diag}(\mathbf{X})^3\big) $$ where the second equation is an equality of diagonal matrices.
I thought about multiplying on the right by $\mathrm{Diag}(\mathbf{X})^{-3}$, but although $\mathcal{B}$ is linear, in general $$\mathcal{B}\big(\mathrm{Diag}(\mathbf{X})^3\big)\cdot \mathrm{Diag}(\mathbf{X})^{-3} \neq \mathcal{B}\big(\mathrm{Diag}(\mathbf{X})^3\cdot \mathrm{Diag}(\mathbf{X})^{-3}\big) = \mathcal{B}(\mathrm{Id})$$ unless the matrix $B$ is diagonal...


Finally, it seems that $\mathcal{A}, \mathcal{B}$ can actually be written with matrices: let us stick to the two-dimensional example (\ref{G}) below. We want to write $$\begin{pmatrix}\dot{x_1} \\ \dot{x_2} \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12}\\ a_{21} & a_{22} \end{pmatrix} \cdot \begin{pmatrix}x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} b_{11} & b_{12}\\ b_{21} & b_{22} \end{pmatrix} \cdot \begin{pmatrix}x_1^3 \\ x_2^3 \end{pmatrix}$$ but with the vectors replaced by diagonal matrices:

$$\begin{split} \begin{pmatrix}\dot{x_1} & 0 \\0 & \dot{x_2} \end{pmatrix} &= \mathcal{A} \left( \begin{pmatrix}x_1 & 0\\ 0 & x_2 \end{pmatrix} \right) + \mathcal{B} \left( \begin{pmatrix}x_1^3 & 0 \\ 0 & x_2^3 \end{pmatrix} \right) \\ &= \begin{pmatrix} a_{11} & 0 \\ 0 & a_{22} \end{pmatrix} \cdot \begin{pmatrix}x_1 & 0\\ 0 & x_2 \end{pmatrix} + \begin{pmatrix} 0 & a_{12}\\ a_{21} & 0 \end{pmatrix} \cdot \begin{pmatrix}x_1 & 0 \\ 0 & x_2 \end{pmatrix} \cdot \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \\ &\enspace + \begin{pmatrix} b_{11} & 0 \\ 0 & b_{22} \end{pmatrix} \cdot \begin{pmatrix}x_1^3 & 0\\ 0 & x_2^3 \end{pmatrix} + \begin{pmatrix} 0 & b_{12}\\ b_{21} & 0 \end{pmatrix} \cdot \begin{pmatrix}x_1^3 & 0 \\ 0 & x_2^3 \end{pmatrix} \cdot \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \\ &= \begin{pmatrix} a_{11} & 0 \\ 0 & a_{22} \end{pmatrix} \cdot \begin{pmatrix}x_1 & 0\\ 0 & x_2 \end{pmatrix} + \begin{pmatrix} a_{12} & 0\\ 0 & a_{21} \end{pmatrix} \cdot \begin{pmatrix}x_2 & 0 \\ 0 & x_1 \end{pmatrix} \\ &\enspace + \begin{pmatrix} b_{11} & 0 \\ 0 & b_{22} \end{pmatrix} \cdot \begin{pmatrix}x_1^3 & 0\\ 0 & x_2^3 \end{pmatrix} + \begin{pmatrix} b_{12} & 0\\ 0 & b_{21} \end{pmatrix} \cdot \begin{pmatrix}x_2^3 & 0 \\ 0 & x_1^3 \end{pmatrix} \end{split}$$
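The final form of this decomposition can be checked numerically on random data; a small numpy sketch in dimension 2:

```python
# Numerical check (numpy, dimension 2): the diagonal-matrix decomposition
# reproduces Diag(A@X + B@X**3).
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))
B = rng.standard_normal((2, 2))
X = rng.standard_normal(2)

lhs = np.diag(A @ X + B @ X**3)

# diag(a11,a22)*Diag(X) + diag(a12,a21)*Diag(x2,x1), same pattern with cubes
rhs = (np.diag(np.diag(A)) @ np.diag(X)
       + np.diag([A[0, 1], A[1, 0]]) @ np.diag(X[::-1])
       + np.diag(np.diag(B)) @ np.diag(X**3)
       + np.diag([B[0, 1], B[1, 0]]) @ np.diag(X[::-1]**3))

print(np.allclose(lhs, rhs))  # True
```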


The equation can also be rewritten as follows (example in dimension $2$) $$ \begin{split} \begin{pmatrix}\dot{x_1} \\ \dot{x_2} \end{pmatrix} &= \begin{pmatrix} a_{11} & a_{12}\\ a_{21} & a_{22} \end{pmatrix} \cdot \begin{pmatrix}x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} b_{11} & b_{12}\\ b_{21} & b_{22} \end{pmatrix} \cdot \begin{pmatrix} x_{1}^2 & 0 \\ 0 & x_{2}^2 \end{pmatrix} \cdot \begin{pmatrix}x_1 \\ x_2 \end{pmatrix} \\ &= \left[ \begin{pmatrix} a_{11} & a_{12}\\ a_{21} & a_{22} \end{pmatrix} + \begin{pmatrix} b_{11}\, x_{1}^2 & b_{12}\, x_{2}^2 \\ b_{21}\, x_{1}^2 & b_{22}\, x_{2}^2 \end{pmatrix} \right] \cdot \begin{pmatrix}x_1 \\ x_2 \end{pmatrix}\\ &= \begin{pmatrix} a_{11}+ b_{11}\, x_{1}^2 & a_{12} + b_{12}\, x_{2}^2 \\ a_{21} + b_{21}\, x_{1}^2 & a_{22} + b_{22}\, x_{2}^2 \end{pmatrix} \cdot \begin{pmatrix}x_1 \\ x_2 \end{pmatrix} \end{split} \label{G}\tag{G}$$
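The state-dependent-matrix form (\ref{G}) is easy to sanity-check: in numpy, scaling column $j$ of $B$ by $x_j^2$ is a one-line broadcast.

```python
# Numerical check (numpy): B@X**3 == (B * X**2) @ X, so the right-hand side is
# the state-dependent matrix M(X) = A + B * X**2 (columnwise) acting on X.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 2))
B = rng.standard_normal((2, 2))
X = rng.standard_normal(2)

M = A + B * X**2        # broadcasting scales column j of B by x_j**2
print(np.allclose(M @ X, A @ X + B @ X**3))  # True
```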

A dead end: if solutions existed for all times, the resolvent would define a one-parameter group (of $2\times 2$ matrices in example (\ref{G})). I thought that such groups were known and of the form $e^{tH}$, where $H\in M_2(\mathbb{K})$ is the so-called generator. BUT this holds for groups of linear transformations, and our resolvent is, looking at (\ref{G}), probably not a linear transformation...


My last idea was to use tensor products. Assume again, for simplicity of notation, that $\mathbf{X}$ lives in a two-dimensional vector space $V$ with basis $\mathcal{B}:=(\mathbf{e}_1, \mathbf{e}_2)$. Define $$ M_{\mathcal{B}}: \left\lbrace \begin{aligned} V \otimes V \otimes V &\longrightarrow \quad V\\ \mathbf{X}\otimes \mathbf{Y}\otimes \mathbf{Z} &\longmapsto \begin{pmatrix} x_1 \cdot y_1 \cdot z_1 \\ x_2 \cdot y_2 \cdot z_2\end{pmatrix} \end{aligned} \right.$$ or, by its action on basis vectors, $M_{\mathcal{B}}(\mathbf{e}_i \otimes \mathbf{e}_j \otimes \mathbf{e}_k) = \mathbf{0}$ as soon as the three indices are not all equal, and $M_{\mathcal{B}}(\mathbf{e}_i \otimes \mathbf{e}_i\otimes \mathbf{e}_i)= \mathbf{e}_i $. Introducing $\mathbf{u}=\mathbf{e}_1 + \mathbf{e}_2$, the equation reads $$\dot{\mathbf{X}} = M_{\mathcal{B}}\big( (A\cdot \mathbf{X})\otimes \mathbf{u} \otimes \mathbf{u} \big) + B\cdot M_{\mathcal{B}}\big( \mathbf{X}\otimes \mathbf{X} \otimes \mathbf{X}\big) \label{T}\tag{T} $$
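The map $M_{\mathcal{B}}$ is just extraction of the "super-diagonal" of a rank-3 tensor; a numpy/einsum sketch checking both terms of (\ref{T}):

```python
# Sketch (numpy): M_B keeps only the entries T[i,i,i] of a 3-tensor, so
# M_B(X⊗X⊗X) is the componentwise cube, and with u = (1,1) one has
# M_B((A@X)⊗u⊗u) = A@X.
import numpy as np

def M_B(T):
    # extract the super-diagonal T[i, i, i]
    return np.einsum('iii->i', T)

rng = np.random.default_rng(2)
A = rng.standard_normal((2, 2))
X = rng.standard_normal(2)
u = np.ones(2)

cube = M_B(np.einsum('i,j,k->ijk', X, X, X))
lin = M_B(np.einsum('i,j,k->ijk', A @ X, u, u))

print(np.allclose(cube, X**3), np.allclose(lin, A @ X))  # True True
```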

Let $\displaystyle T(V)= \bigoplus_{k=0}^{+\infty} V^{\otimes k} = \mathbb{K} \oplus V \oplus \big(V \otimes V \big) \oplus \cdots $ be the freely generated associative (tensor) algebra built out of $V$ (written $T(V)$ here, since the symbol $A$ is already taken by the matrix), a little like the Fock space in quantum mechanics (without the Hilbert-space structure...), and let $\Delta: V \to T(V),\ \mathbf{X} \mapsto \epsilon(\mathbf{X}) \oplus \mathbf{X} \oplus \big(\mathbf{X}\otimes \mathbf{X} \big) \oplus \cdots$ be the "diagonal map" ($\epsilon$ some linear form; $\Delta$ is not the usual linear injection $V \hookrightarrow T(V),\, \mathbf{X} \mapsto \mathbf{X}$). Let us indicate with indices $(n,p)$ a linear map between "homogeneous" subspaces, $L^{(n,p)}: V^{\otimes n} \to V^{\otimes p}$, e.g. $M_{\mathcal{B}}^{(3,1)}: V^{\otimes 3} \to V $. Introducing also the projections $\Pi_k: T(V) \to V^{\otimes k}$, (\ref{T}) can be written:

$$\begin{split} \dot{\mathbf{X}} &= A^{(1,1)}(\mathbf{X})+ B^{(1,1)} \circ M_{\mathcal{B}}^{(3,1)}\big( \mathbf{X}\otimes \mathbf{X} \otimes \mathbf{X}\big) \\ &= \left( A^{(1,1)}\circ \Pi_1 + B^{(1,1)} \circ M_{\mathcal{B}}^{(3,1)}\circ \Pi_3 \right) \Delta(\mathbf{X}) \end{split} \label{T2}\tag{T2} $$ Can one find a solution of the form $ \left( e^{\left(A^{(1,1)} + B^{(1,1)} \circ M_{\mathcal{B}}^{(3,1)} \right) t} \Delta(\mathbf{X})\right)$? We have managed to get a linear operator within the exponential, but $\Delta$ is not linear, and it seems that this ansatz doesn't have the correct behavior under differentiation...