Need help with a simple system of differential equations


Thanks to your help I have made progress with differential equations, but now I have encountered another problem I need help with. This time it is a system of differential equations:

$$x_1'=-x_2$$ $$x_2'=x_1$$

I know that the answer should contain trigonometric functions (sine and cosine), but I have no idea how to start. I tried dividing the first equation by the second and got something like:

$$\frac{x_1'}{x_2'}=-\frac{x_2}{x_1}$$

Then I rewrote $x_1'$ as $$\frac{dx_1}{dt}$$ and did the same with $x_2'$. This way I got rid of $dt$ and obtained $$x_1\,dx_1=-x_2\,dx_2,$$ which led me to the result $$x_1=\sqrt{\mathrm{const}-x_2^2}.$$ After inserting this $x_1$ into the equation $$x_2'=x_1$$ I got some results, but none of them contains a sine or cosine. Could you point out what I am doing wrong?

3 Answers

Best Answer (littleO)

One approach (other than just guessing) is to note that \begin{align*} x_1'' &= -x_2' \\ &= -x_1, \end{align*} so $x_1$ satisfies the ODE \begin{equation} x_1'' + x_1 = 0. \end{equation} This can be solved using standard methods for linear second order ODEs with constant coefficients.
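As a quick numerical illustration (my addition, not part of the answer): the solutions of $x_1'' + x_1 = 0$ are sines and cosines, which a finite-difference check confirms for $\cos t$.

```python
import numpy as np

# Sanity check (a sketch, not from the original answer): verify numerically
# that x(t) = cos(t) satisfies x'' + x = 0, using central differences.
t = np.linspace(0.0, 2 * np.pi, 2001)
h = t[1] - t[0]
x = np.cos(t)

# Central second difference approximates x'' at the interior points.
x_dd = (x[2:] - 2 * x[1:-1] + x[:-2]) / h**2
residual = x_dd + x[1:-1]

print(np.max(np.abs(residual)))  # close to 0 (only O(h^2) discretization error)
```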

Another approach, using linear algebra, is to work directly with the first order system \begin{equation} x'(t) = \underbrace{\begin{bmatrix} 0 & -1 \\1 & 0 \end{bmatrix}}_{A} x(t). \quad (\spadesuit) \end{equation} The eigenvalues of the coefficient matrix $A$ are $\lambda_1 = i, \lambda_2 = -i$. Finding corresponding eigenvectors $v_1$ and $v_2$ will yield the solutions \begin{align} u_1(t) &= e^{\lambda_1 t} v_1, \\ u_2(t) &= e^{\lambda_2 t} v_2. \end{align} Every solution to ($\spadesuit$) is a linear combination of $u_1$ and $u_2$: \begin{equation} x(t) = c_1 u_1(t) + c_2 u_2(t). \end{equation}
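The eigenvalue computation above can be checked numerically (a sketch with NumPy; not part of the original answer):

```python
import numpy as np

# The coefficient matrix of the system x' = A x from the answer above.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
eigvals, eigvecs = np.linalg.eig(A)

# The eigenvalues are +i and -i, so the solutions e^{lambda t} v oscillate.
print(sorted(eigvals, key=lambda z: z.imag))  # the two eigenvalues are -i and +i
```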

Answer

I got this:

$$y' = -x \quad \text{and} \quad x' = y.$$

Since $y = x'$, we have $y' = x''$, so

$$x'' = -x.$$

Here $x = \cos(t)$ is one solution, and $x = A\cos(t+k)$ represents the solution set. But

$$A\cos(t+k) = \frac{A}{2}\left(e^{i(t+k)} + e^{-i(t+k)}\right) = \frac{A}{2}e^{ik}e^{it} + \frac{A}{2}e^{-ik}e^{-it},$$

which is the same form as littleO's solution.

Answer

OK, I'll pitch two solution methods at y'all, one based on linear algebra and one, surprisingly enough, somewhat akin to our OP newuser's exploratory attempt centered around the derived equation

$\dfrac{\dot x_1}{\dot x_2} = -\dfrac{x_2}{x_1}. \tag{1}$

Note that I prefer the use of the "$\dot y$" notation over the "$y'$" notation for derivatives whenever possible, as I shall continue to do throughout this little exposition. In any event, the given system

$\dot x_1 = -x_2, \tag{2}$

$\dot x_2 = x_1, \tag{3}$

does indeed give rise to (1), at least in regions where $\dot x_2 \ne 0 \ne x_1$; I shall return to this topic momentarily, but first let me address things from the "linear algebra" point of view. Setting

$\vec r(t) = \begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix}, \tag{4}$

and

$J = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end {bmatrix}, \tag{5}$

we see that

$J^2 = -I \tag{6}$

and that (2)-(3) may be written

$\dot{\vec r}(t) = J\vec r(t). \tag{7}$

It follows from (7) that, if the initial condition at time $t_0$ is

$\vec r(t_0) = \begin{pmatrix} x_1(t_0) \\ x_2(t_0) \end{pmatrix}, \tag{8}$

then the solution may be written as

$\vec r(t) = e^{J(t - t_0)}\vec r(t_0); \tag{9}$

here we have that

$e^{J(t - t_0)} = I + (t - t_0)J + \dfrac{1}{2}(t -t_0)^2J^2 + \ldots + \dfrac{1}{n!}(t - t_0)^nJ^n + \ldots = \sum_{n=0}^\infty \dfrac{1}{n!}(t - t_0)^n J^n, \tag{10}$

just as for scalars $a$ we have

$e^{at} = 1 + at + \dfrac{1}{2}a^2 t^2 + \ldots + \dfrac{1}{n!}a^n t^n + \ldots = \sum_{n=0}^\infty \dfrac{1}{n!} a^n t^n. \tag{11}$

Just as it follows by term-by-term differentiation of (11) that

$\dfrac{d}{dt} e^{at} = a e^{at}, \tag{12}$

so we see by term-by-term differentiation of (10) that

$\dfrac{d}{dt}e^{J(t - t_0)} = Je^{J(t - t_0)}, \tag{13}$

which is sufficient to prove that (9) solves (7), since we have

$\dot{\vec r}(t) = \dfrac{d}{dt}(e^{J(t - t_0)})\vec r(t_0) = Je^{J(t - t_0)}\vec r(t_0) = J\vec r(t). \tag{14}$

We next examine the specific form of the matrix exponential (10). Since $J^2 = -I$, just as $i^2 = -1$, a term-by-term comparison of (10) and (11), taking $a = i$, reveals that just as the terms of (11) containing $i$ group to $i\sin(t -t_0)$, so the terms of (10) containing $J$ group to $(\sin(t -t_0))J$; and just as the terms of (11) which don't contain $i$ group to $\cos(t -t_0)$, so the terms of (10) which don't contain $J$ group to $(\cos(t - t_0))I$, so we may conclude that just as

$e^{i(t - t_0)} = \cos(t - t_0) + i\sin(t - t_0), \tag{15}$

we also must have

$e^{J(t - t_0)} = (\cos(t - t_0))I + (\sin (t - t_0)) J; \tag{16}$

a more complete exposition of (16) and related equations may be found here.
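Equation (16) is easy to check numerically (a sketch assuming SciPy is available; my addition, not part of the original answer):

```python
import numpy as np
from scipy.linalg import expm

# Numerical check of (16): e^{J s} should equal cos(s) I + sin(s) J,
# where s stands for t - t0.
J = np.array([[0.0, -1.0],
              [1.0,  0.0]])
s = 0.7  # an arbitrary value of t - t0

print(np.allclose(expm(J * s), np.cos(s) * np.eye(2) + np.sin(s) * J))  # True
```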

When the matrix equation (16) is written out explicitly we see that

$e^{J(t - t_0)} = \begin{bmatrix} \cos(t - t_0) & -\sin (t - t_0) \\ \sin (t - t_0) & \cos(t - t_0) \end{bmatrix}, \tag{17}$

and it thus follows from (4), (8)-(9) and (17) that

$x_1(t) = x_1(t_0) \cos(t - t_0) - x_2(t_0) \sin(t -t_0), \tag{18}$

$x_2(t) = x_1(t_0) \sin(t - t_0) + x_2(t_0) \cos(t -t_0). \tag{19}$
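The closed forms (18)-(19) can be compared against a direct numerical integration of the system (a sketch; the initial values and tolerances here are my own choices, not from the answer):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Compare the closed-form solution (18)-(19) with a numerical integration
# of x1' = -x2, x2' = x1 from an arbitrary initial condition.
x10, x20, t0 = 2.0, -1.0, 0.0
t = np.linspace(t0, t0 + 6.0, 61)

sol = solve_ivp(lambda t, x: [-x[1], x[0]], (t0, t[-1]), [x10, x20],
                t_eval=t, rtol=1e-9, atol=1e-12)

x1 = x10 * np.cos(t - t0) - x20 * np.sin(t - t0)   # formula (18)
x2 = x10 * np.sin(t - t0) + x20 * np.cos(t - t0)   # formula (19)

print(np.allclose(sol.y[0], x1) and np.allclose(sol.y[1], x2))  # True
```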

It should perhaps be observed, in the light of the above comments by newuser and Sam, that in general the formulas (18), (19) will together contain both $\cos$ and $\sin$ terms. However, with

$r = \sqrt{x_1^2(t_0) + x_2^2(t_0)} \tag{20}$

we may also write

$x_1(t) = r\left(\dfrac{x_1(t_0)}{r} \cos (t - t_0) -\dfrac{x_2(t_0)}{r} \sin(t - t_0)\right), \tag{21}$

$x_2(t) = r\left(\dfrac{x_1(t_0)}{r} \sin(t - t_0) + \dfrac{x_2(t_0)}{r} \cos(t - t_0)\right); \tag{22}$

furthermore, since

$\left(\dfrac{x_1(t_0)}{r}\right)^2 + \left(\dfrac{x_2(t_0)}{r}\right)^2 = 1 \tag{23}$

there exists a constant $\phi \in [0, 2\pi)$ with

$\cos \phi = \dfrac{x_1(t_0)}{r}, \; \sin \phi = \dfrac{x_2(t_0)}{r}; \tag{24}$

then (21), (22) may be written

$x_1(t) = r \cos((t - t_0) + \phi) \tag{25}$

$x_2(t) = r \sin ((t - t_0) + \phi). \tag{26}$

We thus see that, with appropriate choice of the phase angle $\phi$, both $x_1(t)$ and $x_2(t)$ may be written as pure $\cos$ and $\sin$ functions with no admixture of the two. We also note that the matrix $e^{J(t - t_0)}$ appearing in (9) is orthogonal, that is

$(e^{J(t - t_0)})^T = \begin{bmatrix} \cos(t - t_0) & -\sin (t - t_0) \\ \sin (t - t_0) & \cos(t - t_0) \end{bmatrix}^T$ $= \begin{bmatrix} \cos(t -t_0) & \sin (t - t_0) \\-\sin (t - t_0) & \cos(t - t_0) \end{bmatrix} = e^{-J(t - t_0)} = (e^{J(t - t_0)})^{-1}, \tag{27}$

as may readily be verified by direct evaluation of the matrix product $(e^{J(t - t_0)})^T(e^{J(t -t_0)}) = I$. This in turn implies, as is well-known, that the magnitude of $\vec r(t)$ is constant, as may also be easily seen by computing $\Vert \vec r(t) \Vert^2 = x_1^2(t) + x_2^2(t)$; the calculations are simple, if a tad long-winded. Thus the motion of $\vec r(t)$ is circular.
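The orthogonality claim (27) and the resulting norm preservation can be verified in a few lines (a sketch, not part of the original answer; the value $s = 1.3$ and the vector $(3, 4)$ are arbitrary choices):

```python
import numpy as np
from scipy.linalg import expm

# R = e^{J s} should satisfy R^T R = I, and hence preserve Euclidean norms.
J = np.array([[0.0, -1.0],
              [1.0,  0.0]])
R = expm(J * 1.3)

print(np.allclose(R.T @ R, np.eye(2)))           # True: R is orthogonal
r0 = np.array([3.0, 4.0])                        # |r0| = 5
print(np.allclose(np.linalg.norm(R @ r0), 5.0))  # True: |r(t)| is constant
```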

Having solved (2)-(3) with the aid of matrix exponentials, what I have termed the "linear algebra" approach, I now turn to the second method of analyzing this system which I mentioned in the beginning of this post. This second treatment is in many ways similar in spirit to the attempt our OP newuser presented in his question.

First of all, I think it worthwhile to point out that one can get "rid of $dt$" through perfectly classical means that in no way refer to infinitesimals. Turning once again to equation (1) and the conditions $\dot x_2 \ne 0 \ne x_1$, we note that, as long as $\dot x_2 \ne 0$, the inverse function theorem allows us to express $t$ as a function $t(x_2)$ of $x_2$, and that furthermore

$\dfrac{1}{\dot x_2(t)} = \dfrac{dt(x_2)}{dx_2}. \tag{28}$

We conclude from (28) via the chain rule that, writing $x_1(t) = x_1(t(x_2))$,

$\dfrac{dx_1(t(x_2))}{dx_2} = \dot x_1(t) \dfrac{dt(x_2)}{dx_2} = \dfrac{\dot x_1(t)}{\dot x_2(t)} = -\dfrac{x_2}{x_1}, \tag{29}$

which of course leads directly to

$x_1 \dfrac{dx_1}{dx_2} = - x_2, \tag{30}$

a form of (2)-(3) in which $t$ does not directly appear; we have rid ourselves of $t$ without introducing the concept of infinitesimals.

Having said these things, we further observe that (2), (3) imply

$x_1 \dot x_1 = -x_1 x_2 \tag{31}$

$x_2 \dot x_2 = x_1 x_2; \tag{32}$

adding these equations we see, after some minor algebraic mechanics, that

$\dfrac{d(x_1^2 + x_2^2)}{dt} = 2(x_1 \dot x_1 + x_2 \dot x_2) = 0, \tag{33}$

implying that $x_1^2 + x_2^2$ is conserved along the trajectories of (2), (3); hence such integral curves, if non-trivial, must be contained in the circles $x_1^2 + x_2^2 = C^2$, where $C > 0$ is a constant. Then

$\dfrac{x_1^2(t)}{C^2} + \dfrac{x_2^2(t)}{C^2} =1, \tag{34}$

from which we may conclude that

$x_1(t) = C \cos \theta(t), \tag{35}$

$x_2(t) = C\sin \theta(t) \tag{36}$

for some function $\theta(t)$ of $t$. The implicit function theorem may now be invoked to demonstrate that $\theta(t)$ is differentiable: setting $g(t, \theta) = x_1(t) - C\cos \theta$, we see that $\partial g / \partial \theta = C\sin \theta \ne 0$ provided $\theta \ne n\pi$, $n \in \Bbb Z$; thus the equation $0 = g(t, \theta) = x_1(t) - C\cos \theta$ defines a differentiable function $\theta(t)$ with $0 = g(t, \theta(t)) = x_1(t) - C\cos \theta(t)$; in the vicinity of $\theta = n\pi$, we may use (36) to establish the differentiability of $\theta(t)$ in a similar fashion. Once we rest assured that $\theta(t)$ is differentiable, we may write

$C \dot \theta(t) \cos \theta(t) = \dot x_2(t) = x_1(t) = C \cos \theta(t), \tag{37}$

which implies

$\dot \theta (t) = 1, \tag{38}$

immediately yielding the solution

$\theta(t) - \theta(t_0) = t - t_0 \tag{39}$

or

$\theta(t) = t - t_0 + \theta(t_0), \tag{40}$

so that

$x_1(t) = C\cos((t - t_0) + \theta(t_0)), \tag{41}$

$x_2(t) = C\sin((t - t_0) + \theta(t_0)); \tag{42}$

we see that (41), (42) agree with (25), (26) via a renaming of constants $C = r$, $\theta(t_0) = \phi$. For more information on a similar technique applied in a slightly different context, see my answer to this question.
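Both ingredients of this second method, the conservation law (33) and the unit phase speed (38), can be observed along a numerically integrated trajectory (a sketch; the initial condition $(1, 2)$ and tolerances are my own choices, not from the answer):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Integrate x1' = -x2, x2' = x1 and check that x1^2 + x2^2 stays constant
# and that the phase angle theta(t) advances at unit rate, as in (38).
def rhs(t, x):
    return [-x[1], x[0]]

t = np.linspace(0.0, 10.0, 1001)
sol = solve_ivp(rhs, (t[0], t[-1]), [1.0, 2.0],
                t_eval=t, rtol=1e-9, atol=1e-12)
x1, x2 = sol.y

radius_sq = x1 ** 2 + x2 ** 2            # should stay at 1^2 + 2^2 = 5
theta = np.unwrap(np.arctan2(x2, x1))    # continuous phase angle
theta_dot = np.gradient(theta, t)        # should stay at 1

print(np.allclose(radius_sq, 5.0), np.allclose(theta_dot, 1.0, atol=1e-3))
```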

One equation, two solutions; would that things were always this easy! I'm more used to two equations with no solutions!

Hope this helps. Cheerio,

and as always,

Fiat Lux!!!