Consider the autonomous system of differential equations
$$x' = f(x,y)$$
$$y' = g(x,y)$$
where $x=x(t)$ and $y=y(t)$
Let $f$ and $g$ have first partial derivatives that exist and are continuous on $\mathbb R^2$ (the $(x,y)$-plane).
Let $(u(t),v(t))$ be a maximal solution of the system, defined on some interval $I$ having $\alpha$ as one of its endpoints, which I take to mean that $t$ lies in one of the following:
- $$\color{red}{(}\alpha, R\color{red}{)}$$
- $$\color{red}{(}L, \alpha\color{red}{)}$$
where
$\alpha \in \mathbb R$ (not $\color{red}{\overline{\mathbb R}}$?)
$R,L \in \overline{\mathbb R}$, $R > \alpha$ and $L < \alpha$
and where the delimiters $\color{red}{(}, \color{red}{)}$ may instead be $\color{red}{[}, \color{red}{]}$, respectively.
It can be shown that
Theorem: $$\lim_{t \to \alpha} \sqrt{u(t)^2 + v(t)^2} = \infty$$
Using the above theorem, we must prove that
Proposition: If $f$ and $g$ are bounded, then every solution of the autonomous system is defined for $t \in \mathbb R$.
I have no idea how to start this, but I guess:
- what the proposition is saying is that, once we add the assumption that $f$ and $g$ are bounded, the maximal solution described above must be defined for all $t$; that is, the finite endpoint $\alpha$ cannot occur, and the interval has to be $(-\infty, \infty)$.

Without boundedness this can fail: if $f$ or $g$ is unbounded, then $x'$ or $y'$ can grow without bound and the solution can blow up in finite time.

I guess that won't happen if $f$ and $g$ are bounded. That's what we're trying to prove?
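As a quick sanity check of that intuition, here is a minimal numerical sketch (my own illustration, not part of the problem): forward Euler applied to the unbounded right-hand side $x' = x^2$, whose exact solution with $x(0) = 1$ is $1/(1-t)$ and blows up at $t = 1$, versus the bounded right-hand side $x' = \sin x$, which must satisfy $|x(t)| \le |x(0)| + t$.

```python
import math

# Sketch only: forward Euler for a scalar autonomous ODE x' = f(x).
def euler(f, x0, t0, t1, n):
    """Integrate x' = f(x) from t0 to t1 with n forward-Euler steps."""
    x, h = x0, (t1 - t0) / n
    for _ in range(n):
        x += h * f(x)
    return x

# Unbounded RHS: x' = x^2 with x(0) = 1 has exact solution 1/(1 - t),
# so the maximal interval is (-inf, 1): the solution escapes to infinity.
blowup = euler(lambda x: x * x, 1.0, 0.0, 0.999, 200_000)

# Bounded RHS: |sin x| <= 1, so every Euler step moves x by at most h,
# giving |x(t)| <= |x(0)| + (t - t0); no finite-time blow-up is possible.
bounded = euler(math.sin, 1.0, 0.0, 10.0, 200_000)

print(blowup)   # huge: tracking 1/(1 - t) near the blow-up time t = 1
print(bounded)  # stays within the a-priori bound |x(0)| + 10 = 11
```

The step size and horizon here are arbitrary choices; the point is only the contrast between finite-time escape and a linear a-priori bound.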
This looks like the two-dimensional version of a scalar result whose conclusion would read
$$\lim_{t \to \alpha} |u(t)| = \infty$$
where $u(t)$, with $t \in (\alpha, R)$ or $(L, \alpha)$, is a maximal solution of
$$x' = f(x)$$
where $x = x(t)$
Is that right?
Please suggest how I might prove this.
What I tried based on Julián Aguirre's answer:
Let $(u(t), v(t))$ be defined on some interval $I$ having $\alpha$ as an endpoint. Let $t, t_0 \in I$ with $t \ge t_0$.
Observe that
$$u(t) = u(t_0)+\int_{t_0}^tf(u(s),v(s))\,ds$$
and
$$v(t) = v(t_0)+\int_{t_0}^tg(u(s),v(s))\,ds$$
Let $f$ and $g$ be bounded, respectively, by $M_f$ and $M_g$. Define $M := \max\{M_f, M_g\}$. Then, we have:
$$u(t)^2 = \left(u(t_0)+\int_{t_0}^tf(u(s),v(s))\,ds\right)^2$$
$$ = u(t_0)^2+\left(\int_{t_0}^tf(u(s),v(s))\,ds\right)^2 + 2u(t_0)\int_{t_0}^tf(u(s),v(s))\,ds$$
$$ \le u(t_0)^2+\left(\int_{t_0}^t M \,ds\right)^2 + 2|u(t_0)|\int_{t_0}^t M \,ds$$
$$ = u(t_0)^2+(M(t-t_0))^2 + 2 |u(t_0)| M (t-t_0)$$
Similarly, for $v$ we have:
$$v(t)^2 \le v(t_0)^2+(M(t-t_0))^2 + 2 |v(t_0)| M (t-t_0)$$
Thus,
$$\sqrt{u(t)^2 + v(t)^2} \le \sqrt{u(t_0)^2 + v(t_0)^2 + 2(M(t-t_0))^2 + 2\left(|u(t_0)| + |v(t_0)|\right) M (t-t_0)}$$
and therefore, if $\alpha$ is finite,
$$\lim_{t \to \alpha} \sqrt{u(t)^2 + v(t)^2} \le \sqrt{u(t_0)^2 + v(t_0)^2 + 2(M(\alpha-t_0))^2 + 2\left(|u(t_0)| + |v(t_0)|\right) M (\alpha-t_0)} < \infty$$
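This kind of a-priori bound can also be checked numerically. Below is a hedged sketch (the test system $u' = \sin v$, $v' = \cos u$ and the forward-Euler integrator are my own choices for illustration): both right-hand sides are bounded by $M = 1$, and since each Euler step moves $u$ and $v$ by at most $hM$, the discrete iterates satisfy $|u(t)| \le |u(t_0)| + M(t - t_0)$ exactly, so the norm stays finite on any bounded interval.

```python
import math

# Sketch only: forward Euler for the planar system u' = f(u,v), v' = g(u,v).
def euler2(f, g, u0, v0, t0, t1, n):
    """Integrate the system from t0 to t1 with n forward-Euler steps."""
    u, v, h = u0, v0, (t1 - t0) / n
    for _ in range(n):
        u, v = u + h * f(u, v), v + h * g(u, v)
    return u, v

# Bounded right-hand sides: |sin v| <= 1 and |cos u| <= 1, so M = 1.
M, t0, t1 = 1.0, 0.0, 25.0
u0, v0 = 1.0, -2.0
u, v = euler2(lambda u, v: math.sin(v),
              lambda u, v: math.cos(u),
              u0, v0, t0, t1, 100_000)

# The a-priori bounds coming from the integral equations:
ok_u = abs(u) <= abs(u0) + M * (t1 - t0)
ok_v = abs(v) <= abs(v0) + M * (t1 - t0)
print(ok_u, ok_v)  # both True: the norm stays finite on [t0, t1]
```

A crude Euler scheme suffices here precisely because the discrete step bound mirrors the continuous estimate; no adaptive integrator is needed to see that nothing blows up.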
I guess this shows that $\sqrt{u(t)^2 + v(t)^2}$ doesn't blow up as $t \to \alpha$, but how does that imply that $(u(t),v(t))$ is defined for all $t \in \mathbb R$?
Might the proof of the theorem be helpful? The theorem is apparently proven in a book by Coddington, but I can't find anything like it in the Coddington books I've found online.
For reference, Julián Aguirre's answer:

> Let $M$ be a bound of $f$ and $g$ and $t_0\in(L,\alpha)$. Then $$ |u(t)|=\Bigl|u(t_0)+\int_{t_0}^tf(u(s),v(s))\,ds\Bigr|\le|u(t_0)|+M\,(t-t_0), \quad t_0\le t<\alpha. $$ Similarly for $v$. This implies that $\sqrt{u(t)^2+v(t)^2}$ cannot go to $\infty$ as $t\to\alpha$. In view of the Theorem, $\alpha$ must be $\infty$.