Perturbation theory with coupled nonlinear differential equations


I’ve got a problem with a set of differential equations for which I’m trying to find fixed points (or rather restrictions for the parameters).

The equations have the form

(1) $\frac{d}{dt} R(t) = -B \sin(\theta(t))$

(2) $\frac{d}{dt} \theta(t) = - \omega - \frac{B}{R(t)} \cos(\theta(t))$

If $B$ is zero, the answer is obvious: $R(t) = \text{const.}$ and $\frac{d}{dt}\theta(t) = -\omega$, i.e. $\theta(t) = -\omega t + \text{const.}$ Now I'm looking for a solution for small $B$. For that I should be able to use the ansatz $\theta(t) = \Omega t + g \cos(\Omega t + \phi)$ with $g$ small. Hence, the solution should be something growing in time with an additional weak nonlinearity. I can plug the ansatz into the equations, Taylor-expand in $g$ around $0$, and keep only the terms up to first order in $g$. Since $B$ is proportional to $g$ and of about the same size, I also count $B$ when looking for orders in $g$. With that I get
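As a sanity check on the unperturbed case, here is a minimal numerical sketch (the RK4 helper, function names, and parameter values are my own choices, not from the question) that integrates the system and confirms that for $B = 0$ one gets $R = \text{const.}$ and $\theta = \theta_0 - \omega t$:

```python
import math

def rk4_step(f, y, t, h):
    # Classic fourth-order Runge-Kutta step for y' = f(t, y).
    k1 = f(t, y)
    k2 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k1)])
    k3 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h*ki for yi, ki in zip(y, k3)])
    return [yi + h/6*(a + 2*b + 2*c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def integrate(B, omega, R0, theta0, t_end=10.0, n=10000):
    # Integrate dR/dt = -B sin(theta), dtheta/dt = -omega - (B/R) cos(theta).
    def f(t, y):
        R, theta = y
        return [-B*math.sin(theta), -omega - (B/R)*math.cos(theta)]
    y, t, h = [R0, theta0], 0.0, t_end/n
    for _ in range(n):
        y = rk4_step(f, y, t, h)
        t += h
    return y  # [R(t_end), theta(t_end)]

# For B = 0 the solution should be R = const and theta = theta0 - omega*t.
R_end, th_end = integrate(B=0.0, omega=2.0, R0=1.0, theta0=0.5)
```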

(1) $\frac{d}{dt} R(t) = -B \sin(\Omega t)$

(2) $\Omega - g\Omega \sin(\Omega t + \phi) = - \omega - \frac{B}{R(t)} \cos(\Omega t)$

Now my question is: I know that $\Omega = \Omega_0 + g\Omega_1$, and it would make no sense (physically) unless also $R(t) = R_0(t) + g R_1(t)$. Can I simply plug these into my equations and obtain equations at 0th and 1st order in $g$ for both of them, so that I eventually end up with four equations? I can obviously do it, but I'm not sure whether this whole way of solving it makes sense, or whether there are better ways.

Any help would be appreciated!

Answer:

Since you're looking at the effect of perturbing $B$ slightly away from zero, your ansatz is going to also want to depend on $B$, and presumably you want to do it in a way that will present almost-linear behaviour when $B$ is small. So you could, for example, consider:

$$\begin{eqnarray} R & = & R_0 + B R_1(t) \\ \theta & = & \theta_0 - \omega t + B \theta_1(t) \end{eqnarray}$$

(This is, in fact, equivalent to setting $R = R_0(t) + B R_1(t)$ and similar for $\theta$, because the part that doesn't involve $B$ at all will be equal to the solutions you got for the $B = 0$ case.)

When you put these into the DEs, you'll get some cancellation that seems to justify the choice, for example:

$$\begin{eqnarray} \frac{dR}{dt} & = & -B \sin \theta \\ \frac{d}{dt}(R_0 + B R_1(t)) & = & -B \sin(\theta_0 - \omega t + B \theta_1) \\ B \frac{dR_1}{dt} & = & -B \sin((\theta_0 - \omega t) + B \theta_1) \\ \frac{dR_1}{dt} & = & -\left(\sin(\theta_0 - \omega t) \cos (B \theta_1) + \cos(\theta_0 - \omega t) \sin (B \theta_1) \right) \end{eqnarray}$$

Now here, you probably want to use first-order approximations and say that $\cos (B \theta_1) \approx 1$ and $\sin (B \theta_1) \approx B \theta_1$. You could also take the zeroth-order $\sin (B \theta_1) \approx 0$, which makes things easier now but reduces the accuracy of the approximation.
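As an aside, the error made by these small-angle replacements is easy to bound from the Taylor remainders: $|\cos x - 1| \le x^2/2$ and $|\sin x - x| \le |x|^3/6$, so with $x = B\theta_1$ both substitutions only discard terms of order $B^2$. A quick standalone numerical confirmation (the sample values of $x$ are arbitrary):

```python
import math

# For small x = B*theta_1, the first-order approximations used above satisfy
#   |cos(x) - 1| <= x**2 / 2   and   |sin(x) - x| <= |x|**3 / 6,
# so dropping the higher terms only introduces O(B^2) errors.
for x in (0.1, 0.01, 0.001):
    cos_err = abs(math.cos(x) - 1.0)
    sin_err = abs(math.sin(x) - x)
    assert cos_err <= x**2 / 2
    assert sin_err <= abs(x)**3 / 6
```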

With similar manipulation for the second equation, and taking first-order approximations for relevant functions (assuming that $|B| \ll |R_0|$), you'll get something like:

$$\begin{eqnarray} \frac{d \theta_1}{dt} & = & -\frac{\cos(\theta_0 - \omega t) \cos(B \theta_1) - \sin(\theta_0 - \omega t) \sin(B \theta_1)}{R_0 + B R_1} \\ & \approx & -\frac{1}{R_0} \cos(\theta_0 - \omega t) + \frac{B}{R_0} \sin(\theta_0 - \omega t) \, \theta_1 + \frac{B}{R_0^2} \cos(\theta_0 - \omega t) \, R_1 \end{eqnarray}$$

This gives you a pair of first-order ODEs that are linear in $R_1$ and $\theta_1$, so you can solve them exactly, and you should also be able to put a reasonable bound on the error (it should be $O(B^2)$ or better, I believe).
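That error estimate can be checked numerically by integrating the full system and the linearised one side by side and watching how the end-time discrepancy shrinks with $B$. The sketch below is my own (function names, parameters, and the RK4 helper are not from the answer); the linearised right-hand sides are re-derived independently from the first-order expansion, and halving $B$ should reduce the error roughly fourfold:

```python
import math

def rk4(f, y, t0, t1, n):
    # Fourth-order Runge-Kutta integration of y' = f(t, y) from t0 to t1.
    h, t = (t1 - t0)/n, t0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k1)])
        k3 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k2)])
        k4 = f(t + h, [yi + h*ki for yi, ki in zip(y, k3)])
        y = [yi + h/6*(a + 2*b + 2*c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        t += h
    return y

def error_at(B, omega=1.0, R0=1.0, theta0=0.0, T=5.0, n=4000):
    # Full system in (R, theta).
    def exact(t, y):
        R, th = y
        return [-B*math.sin(th), -omega - (B/R)*math.cos(th)]
    # Linearised system in (R1, theta1): first-order expansion of the
    # right-hand sides around R = R0, theta = theta0 - omega*t.
    def lin(t, y):
        R1, th1 = y
        c = math.cos(theta0 - omega*t)
        s = math.sin(theta0 - omega*t)
        dR1 = -(s + c*B*th1)
        dth1 = -c/R0 + (B/R0)*s*th1 + (B/R0**2)*c*R1
        return [dR1, dth1]
    R_ex, th_ex = rk4(exact, [R0, theta0], 0.0, T, n)
    R1, th1 = rk4(lin, [0.0, 0.0], 0.0, T, n)
    R_ap, th_ap = R0 + B*R1, theta0 - omega*T + B*th1
    return abs(R_ex - R_ap) + abs(th_ex - th_ap)

# If the error is O(B^2), halving B should shrink it by about a factor of 4.
e1, e2 = error_at(0.1), error_at(0.05)
```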