Problem from the book "Ordinary Differential Equations and Dynamical Systems" by Gerald Teschl


Problem 1. Consider again the exact model from the previous problem and write

$$ \ddot{r} = -\frac{\gamma M \epsilon^2}{(1 + \epsilon r)^2}, ~~~\epsilon = \frac{1}{R} $$

It can be shown that the solution $r(t) = r(t,\epsilon)$ of the above equation, with the initial conditions given below, is $C^{\infty}$ (with respect to both $t$ and $\epsilon$). Show that

$$ r(t) = h - g\left[1 - 2\frac{h}{R} \right]\frac{t^2}{2} + \mathcal{O}\left(\frac{1}{R^4}\right), ~~~ g = \frac{\gamma M}{R^2} $$

The initial conditions read $r(0)=h$ and $\dot{r}(0)=0$.

(Hint: Insert $r(t,\epsilon) = r_0(t) + r_1(t)\epsilon + r_2(t)\epsilon^2 + r_3(t)\epsilon^3 + \mathcal{O}(\epsilon^4)$ into the differential equation and collect powers of $\epsilon$. Then solve the corresponding differential equations for $r_0(t)$, $r_1(t)$, $\cdots$ and note that the initial conditions follow from $r(0, \epsilon) = h$ respectively $\dot{r}(0, \epsilon) = 0$. A rigorous justification for this procedure will be given in Section 2.5.).

Remark: $\dot{r}$ and $\ddot{r}$ denote the first and second time derivatives. How do I solve this problem following the hint? What do the functions $r_0$, $r_1$, $r_2$, $r_3$ mean — are they derivatives? This problem appears in the introduction of the book "Ordinary Differential Equations and Dynamical Systems" by Gerald Teschl.


Two answers are given below.

Answer 1

The trick is (as the problem points out) that $r = r(t,\epsilon)$ is a $C^{\infty}$ function of $\epsilon$, so you can expand it as a Taylor series around $\epsilon = 0$:

\begin{eqnarray} r(t,\epsilon) &=& r(t,0) + \left.\frac{\partial r}{\partial \epsilon}\right|_{\epsilon=0} \epsilon + \frac{1}{2}\left.\frac{\partial^2 r}{\partial \epsilon^2}\right|_{\epsilon=0} \epsilon^2 + \cdots \\ &=& r_0(t) + r_1(t)\epsilon + r_2(t)\epsilon^2 + \cdots \tag{1} \end{eqnarray}

You can then calculate

\begin{eqnarray} \dot{r}(t,\epsilon) &=& \dot{r}_0(t) + \dot{r}_1(t)\epsilon + \dot{r}_2(t)\epsilon^2 + \cdots \\ \ddot{r}(t,\epsilon) &=& \ddot{r}_0(t) + \ddot{r}_1(t)\epsilon + \ddot{r}_2(t)\epsilon^2 + \cdots \tag{2} \end{eqnarray}

Similarly, you can write the RHS in terms of the series (1)

\begin{eqnarray} -\gamma M \frac{\epsilon^2}{(1 + \epsilon r)^2} &=& -\gamma M \epsilon^2 (1 + \epsilon r)^{-2} \\ &=& - \gamma M \epsilon^2 [1 - 2\epsilon r + \cdots] \\ &=& -\gamma M \epsilon^2 [1 - 2\epsilon r_0 - 2\epsilon^2 r_1 - \cdots] \tag{3} \end{eqnarray}

Now you have two polynomials in $\epsilon$ that are equal to each other; this is only possible if their coefficients agree:

\begin{eqnarray} \ddot{r}_0 &=& 0 \\ \ddot{r}_1 &=& 0 \\ \ddot{r}_2 &=& -\gamma M \\ \ddot{r}_3 &=& 2\gamma M r_0 \\ &\vdots& \tag{4} \end{eqnarray}
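The coefficient matching can be checked symbolically. The following sketch (my own, not from the book or the answer) expands both sides of the ODE in powers of $\epsilon$ with sympy and prints the resulting coefficient equations; the names `gM` for $\gamma M$ and `r0`…`r3` are just labels chosen here:

```python
# Expand both sides of r'' = -gM*eps^2/(1+eps*r)^2 in powers of eps
# and read off the coefficient equations, one per power of eps.
import sympy as sp

t, eps, gM = sp.symbols('t epsilon gammaM')
r = [sp.Function(f'r{i}')(t) for i in range(4)]

# Truncated ansatz r = r0 + r1*eps + r2*eps^2 + r3*eps^3
rser = sum(r[i] * eps**i for i in range(4))

lhs = sp.diff(rser, t, 2)
rhs = -gM * eps**2 / (1 + eps * rser)**2

# Series-expand the difference and collect by power of eps.
delta = sp.series(lhs - rhs, eps, 0, 4).removeO()
eqs = [sp.Eq(sp.expand(delta.coeff(eps, n)), 0) for n in range(4)]
for e in eqs:
    print(e)
```

The printed equations should be equivalent to $\ddot r_0 = 0$, $\ddot r_1 = 0$, $\ddot r_2 = -\gamma M$, $\ddot r_3 = 2\gamma M r_0$.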

I will leave the rest for you to complete

Answer 2

The hint tells us that the solution $r(t, \epsilon)$ depends on both $t$ (as solutions to differential equations are functions) and on $\epsilon$ (as changing $\epsilon$ changes the differential equation and thus the solution). Since we are thinking of $\epsilon$ as being small, we expand our solution $r(t,\epsilon)$ as a Taylor series in $\epsilon$ (this is the hint given):

$$r(t, \epsilon) = r_0(t) + r_1(t) \epsilon + r_2(t) \epsilon^2 + r_3(t) \epsilon^3 + \mathcal{O}(\epsilon^4).$$

For example, if $\epsilon =0$ we would be solving the differential equation $$\ddot{r}= - \frac{ \gamma M \cdot 0}{(1 + 0 \cdot r)^2}=0, $$ which is easily solved. The theory (which sounds like it will be developed later on) is that by nudging $\epsilon$ away from $0$, we change the differential equation and therefore the solution. However, the new solution shouldn't change too drastically, i.e., it should look like the solution to the $\epsilon = 0$ case has been perturbed.

Moving to the specifics of the problem and recalling that $\dot r$ and $\ddot r$ canonically denote time derivatives, we have that \begin{align*} &\ddot r_0 (t) + \ddot r_1(t) \epsilon + \ddot r_2(t) \epsilon^2 + \ddot r_3(t) \epsilon^3 + \mathcal{O}(\epsilon^4) \\ &= \ddot r(t, \epsilon) \\ &= - \frac{ \gamma M \epsilon^2}{(1 + \epsilon r(t, \epsilon) )^2} \\ &= - \frac{ \gamma M \epsilon^2}{\left(1 + \epsilon \left(r_0(t) + r_1(t) \epsilon + r_2(t) \epsilon^2 + r_3(t) \epsilon^3 + \mathcal{O}(\epsilon^4)\right) \right)^2} \end{align*} and therefore \begin{align*} \left(\ddot r_0 (t) + \ddot r_1(t) \epsilon + \ddot r_2(t) \epsilon^2 + \ddot r_3(t) \epsilon^3 + \mathcal{O}(\epsilon^4) \right)\cdot \left( 1 + r_0(t)\epsilon + r_1(t) \epsilon^2 + r_2(t) \epsilon^3 + r_3(t) \epsilon^4 + \mathcal{O}(\epsilon^5) \right)^2 &= - \gamma M \epsilon^2. \end{align*} The key insight is that we have an (infinite) polynomial in $\epsilon$ on both sides of the equation, and polynomials are equal if and only if all of their coefficients are equal. As such, we can bootstrap our way to a solution by solving for one pair of coefficients at a time.

Collecting terms that are constant w.r.t. $\epsilon$ gives $$\ddot r_0 (t) \cdot 1 = 0$$ which we can easily solve for $r_0(t)$. Now that we have $r_0(t)$, we collect terms that are linear in $\epsilon$: $$\ddot r_1(t) \epsilon + \ddot r_0(t) \cdot (2 \epsilon r_0(t)) = 0$$ or $$\ddot r_1(t) + \ddot r_0(t) \cdot (2 r_0(t)) = 0. $$ As we have previously determined what $r_0(t)$ is, we can now solve for $r_1(t)$. Having thus found $r_0(t)$ and $r_1(t)$, we can solve for $r_2(t)$ by considering the system generated by collecting $\epsilon^2$ terms and so forth.
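The bootstrap can be carried out mechanically. Here is a sketch (my own, assuming the coefficient ODEs $\ddot r_0 = \ddot r_1 = 0$, $\ddot r_2 = -\gamma M$, $\ddot r_3 = 2\gamma M r_0$ obtained by matching powers of $\epsilon$, with initial conditions $r_0(0)=h$, $\dot r_0(0)=0$ and $r_i(0)=\dot r_i(0)=0$ for $i \ge 1$, which follow from $r(0,\epsilon)=h$, $\dot r(0,\epsilon)=0$) using sympy's `dsolve`; `gM` stands for $\gamma M$:

```python
# Solve the coefficient ODEs one at a time, feeding each solution
# into the next equation, then assemble the expansion in eps.
import sympy as sp

t, eps, gM, h = sp.symbols('t epsilon gammaM h', positive=True)
r = sp.Function('r')

def solve_coeff(rhs, r0_ic, v0_ic):
    """Solve r'' = rhs with r(0)=r0_ic, r'(0)=v0_ic."""
    ics = {r(0): r0_ic, r(t).diff(t).subs(t, 0): v0_ic}
    return sp.dsolve(sp.Eq(r(t).diff(t, 2), rhs), r(t), ics=ics).rhs

r0 = solve_coeff(0, h, 0)          # r0'' = 0
r1 = solve_coeff(0, 0, 0)          # r1'' = 0
r2 = solve_coeff(-gM, 0, 0)        # r2'' = -gamma*M
r3 = solve_coeff(2*gM*r0, 0, 0)    # r3'' = 2*gamma*M*r0

expansion = r0 + r1*eps + r2*eps**2 + r3*eps**3
print(sp.expand(expansion))
```

This should print an expression equivalent to $h - \gamma M t^2 \epsilon^2/2 + \gamma M h t^2 \epsilon^3$; substituting $\epsilon = 1/R$ and $g = \gamma M/R^2$ gives exactly $h - g\left[1 - 2h/R\right]t^2/2$, as claimed.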

TL;DR

The $r_0(t)$, $r_1(t)$, $r_2(t)$, $\dots$ are functions of $t$ that help us build an approximation of the solution $r(t,\epsilon)$. In fact, up to a factor of $1/n!$, $r_n(t)$ is the $n$-th partial derivative of $r(t,\epsilon)$ with respect to $\epsilon$, evaluated at $\epsilon = 0$ — they are the coefficients of the Taylor series of $r(t,\epsilon)$ in $\epsilon$.
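As a final sanity check (not part of either answer), one can integrate the exact equation numerically and compare it against the claimed expansion $r(t) \approx h - g\left[1 - 2h/R\right]t^2/2$. A minimal sketch with scipy; the values of `gammaM`, `R`, `h`, and `T` are arbitrary test choices:

```python
# Integrate r'' = -gamma*M*eps^2/(1 + eps*r)^2 numerically and
# compare the endpoint against the second-order approximation.
import numpy as np
from scipy.integrate import solve_ivp

gammaM = 1.0          # gamma*M, arbitrary units
R = 100.0
h = 1.0
eps = 1.0 / R
g = gammaM / R**2

def rhs(t, y):
    r, v = y
    return [v, -gammaM * eps**2 / (1 + eps * r)**2]

T = 5.0
sol = solve_ivp(rhs, (0, T), [h, 0.0], rtol=1e-10, atol=1e-12)
r_exact = sol.y[0, -1]
r_approx = h - g * (1 - 2 * h / R) * T**2 / 2

print(r_exact, r_approx, abs(r_exact - r_approx))
```

The discrepancy between `r_exact` and `r_approx` should be of the order $1/R^4$, i.e. several orders of magnitude smaller than the deviation of $r$ from $h$ itself.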