If $y' + p(x) y = q(x)$, $q \to 0$, and $p(x) \geq a > 0$, then $y \to 0$


Given the first order equation

$$ y' + p(x)y=q(x) $$

where $p,q: \mathbb{R} \longrightarrow \mathbb{R} $ are continuous functions such that $p(x) \geq \alpha > 0$ for all $x \in \mathbb{R}$ and $q(x) \longrightarrow 0$ as $x \longrightarrow +\infty$. Show that every solution of the equation satisfies

$$ \lim_{x \longrightarrow +\infty} y(x) = 0$$
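Before the proofs, a quick numerical sanity check may help build intuition. The sketch below is illustrative only: the choices $p(x) = 2 + \sin x \geq 1$ and $q(x) = 1/(1+x)$ are not from the problem, just one pair satisfying its hypotheses, integrated with the explicit Euler method.

```python
import math

# Illustrative choices (not from the problem): p(x) = 2 + sin(x) >= 1 > 0,
# and q(x) = 1/(1+x) -> 0 as x -> infinity.
def p(x):
    return 2.0 + math.sin(x)

def q(x):
    return 1.0 / (1.0 + x)

def solve(y0, x_end, h=1e-3):
    """Integrate y' = -p(x) y + q(x) from x = 0 with explicit Euler steps."""
    x, y = 0.0, y0
    while x < x_end:
        y += h * (-p(x) * y + q(x))
        x += h
    return y

# Regardless of the initial condition, y(x) should be small once x is large.
for y0 in (-5.0, 0.0, 5.0):
    print(y0, solve(y0, 50.0))
```

Whatever initial value we start from, the computed solution is driven toward zero, which is exactly the claim to be proved.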


BEST ANSWER

There are probably several ways to approach this question. I believe that there is a very simple way, but unfortunately I didn't immediately see it. I did see the following way, which has a simple idea but which is annoying to actually get right. The idea is that one can describe $y$ through the standard integrating factor, so that the behavior of the solution $y$ is determined by the behavior of $q(x)$ weighted by the integrating factor. Now $q(x)$ is guaranteed to be small when $x$ is large, and the contribution from small $t$ (where $q$ need not be small) is damped by the integrating factor once $x$ is large. Together, these should imply that $y$ is small.

Let us now actually give the solution.

Multiply through by $e^{\int_0^x p(t) dt}$. Then the differential equation is $$ y' e^{\int_0^x p(t) dt} + p(x) y e^{\int_0^x p(t) dt} = \Big( y e^{\int_0^x p(t) dt} \Big)' = q(x) e^{\int_0^x p(t) dt}.$$ Integrating from $0$ to $x$ and simplifying, you have that $$ y(x) = y(0)\, e^{-\int_0^x p(t) dt} + \int_0^x q(t)\, e^{-\int_t^x p(u) du}\, dt.$$ The first term has absolute value at most $|y(0)| e^{-\alpha x}$, which certainly tends to $0$, so it suffices to show that the integral tends to $0$.

To do this, choose some small $\epsilon > 0$; we will show the integral is eventually less than a constant multiple of $\epsilon$. As $q(x) \to 0$, there exists some $X$ such that for all $x > X$, we have $|q(x)| < \epsilon$. Further, as $q \to 0$ and $q$ is continuous, $|q|$ attains an absolute maximum $M$ on $\mathbb{R}$.

Split the integral at $X$: $$ \int_0^x q(t)\, e^{-\int_t^x p(u) du}\, dt = \int_0^X q(t)\, e^{-\int_t^x p(u) du}\, dt + \int_X^x q(t)\, e^{-\int_t^x p(u) du}\, dt.$$ In the first piece $t \leq X$, so $e^{-\int_t^x p(u) du} \leq e^{-\alpha(x - X)}$, and hence $$ \Big| \int_0^X q(t)\, e^{-\int_t^x p(u) du}\, dt \Big| \leq X M e^{-\alpha(x-X)},$$ which is less than $\epsilon$ once $x$ exceeds some $Y$, since $p(x) \geq \alpha > 0$. In the second piece $t > X$, so $|q(t)| < \epsilon$, and $$ \Big| \int_X^x q(t)\, e^{-\int_t^x p(u) du}\, dt \Big| \leq \epsilon \int_X^x e^{-\alpha(x-t)}\, dt \leq \frac{\epsilon}{\alpha}.$$

Let $Z$ be the maximum of $X$ and $Y$. Then for $x > Z$ we have $$ |y(x)| \leq |y(0)| e^{-\alpha x} + \epsilon + \frac{\epsilon}{\alpha}.$$ As this is true for any $\epsilon$, we must have that $y \to 0$.
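As a sanity check on the solution formula, here is the constant-coefficient special case worked out explicitly (with the illustrative choices $p \equiv \alpha$ and $q(t) = e^{-t}$, $\alpha \neq 1$, not from the problem):

$$ y(x) = y(0)\, e^{-\alpha x} + e^{-\alpha x} \int_0^x e^{-t} e^{\alpha t}\, dt = y(0)\, e^{-\alpha x} + \frac{e^{-x} - e^{-\alpha x}}{\alpha - 1} \longrightarrow 0 \quad \text{as } x \to +\infty,$$

since $\alpha > 0$ makes every exponential on the right decay.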


I'm not sure if it's simpler, but here's a proof using a comparison principle, a technique I always prefer to estimating integrals. Let's rewrite the ODE as

$$ y' = -py + q =: f(y,x). $$

We want to show that $y$ limits to zero, and so we're interested in upper bounds on $y'$ when $y$ is positive, and lower bounds on $y'$ when $y$ is negative. We also want to take advantage of the fact that $q \to 0$, so let's start by letting $\varepsilon > 0$ and picking $x_0$ large enough that $|q(x)| < \varepsilon$ for all $x \ge x_0$. Then we have the bounds

$$ y' = f(y,x) \begin{cases} < -\alpha y + \varepsilon & y \ge 0 \\ > -\alpha y - \varepsilon & y < 0 \end{cases} $$ for $x \ge x_0$. It seems intuitive (and we will prove) that $y$ is bounded by solutions to the equations where these inequalities are replaced by equalities, that is, by solutions $z_\pm$ to the ODEs $$ z_\pm' = -\alpha z_\pm \pm \varepsilon =: g_\pm(z_\pm,x). $$ We can solve these ODEs very explicitly (for instance using the integrating factor method in David's solution), and find that, regardless of initial conditions, $z_\pm \to \pm \varepsilon/\alpha$. So if our intuition is right, the values of $y$ must eventually be trapped between $\pm\varepsilon/\alpha$. More precisely, $\liminf y \ge -\varepsilon/\alpha$ and $\limsup y \le \varepsilon/\alpha$. Since $\varepsilon > 0$ was arbitrary, sending $\varepsilon \to 0$ yields $\liminf y \ge 0$ and $\limsup y \le 0$, and hence $y \to 0$, and we are done!
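For completeness, solving the comparison ODEs with the integrating factor $e^{\alpha (x - x_0)}$ gives

$$ z_\pm(x) = \pm\frac{\varepsilon}{\alpha} + \Big( z_\pm(x_0) \mp \frac{\varepsilon}{\alpha} \Big) e^{-\alpha (x - x_0)} \longrightarrow \pm\frac{\varepsilon}{\alpha} \quad \text{as } x \to +\infty,$$

which exhibits the claimed limits $\pm\varepsilon/\alpha$ directly.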

So all we need to show is that the inequality $z_- < y < z_+$ really does hold for all $x \ge x_0$. Pick initial conditions $z_\pm(x_0)$ so that $$ z_-(x_0) < y(x_0) < z_+(x_0).$$ It's no trouble to also pick $z_-(x_0) < 0 < z_+(x_0)$, in which case we easily check (for instance from the explicit formulas) that $z_- < 0 < z_+$ for all $x \ge x_0$. Now for the fun part. Suppose for the sake of contradiction that $x_1$ is the smallest $x > x_0$ for which $y(x) = z_+(x) > 0$. Then we have $$ y'(x_1) = f(y(x_1),x_1) < g_+(y(x_1),x_1) = g_+(z_+(x_1),x_1) = z_+'(x_1). $$ By continuity, $y' < z_+'$ on some interval just to the left of $x_1$, so $z_+ - y$ is strictly increasing there; since $z_+ - y$ vanishes at $x_1$, it must be negative just before $x_1$, i.e. $y > z_+$ there, which contradicts the definition of $x_1$. The argument with $z_-$ is similar.

This is a general principle: whenever $y' = f(y,x)$ and $z' = g(z,x)$ where $f(y,x) < g(y,x)$, then $y(x_0) \le z(x_0)$ implies that $y(x) < z(x)$ for all $x > x_0$. Provided you know enough about $z$, this lets you get an awful lot of information about $y$, even when you cannot solve the ODE for $y$ in terms of integrals.
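The general principle is easy to test numerically. The sketch below is illustrative only: the pair $f(y,x) = -y + \cos x$ and $g(z,x) = f(z,x) + \tfrac12$ (so $f < g$ pointwise) is my own choice, not from the text. It integrates both ODEs side by side with Euler steps and checks that $y$ stays below $z$.

```python
import math

# Illustrative comparison pair (not from the text): g = f + 1/2, so that
# f(w, x) < g(w, x) for every w and x.
def f(y, x):
    return -y + math.cos(x)

def g(z, x):
    return -z + math.cos(x) + 0.5

def euler_pair(y0, z0, x_end, h=1e-3):
    """Integrate y' = f(y, x) and z' = g(z, x) together with Euler steps,
    checking the comparison y(x) < z(x) along the way."""
    x, y, z = 0.0, y0, z0
    while x < x_end:
        y += h * f(y, x)
        z += h * g(z, x)
        x += h
        assert y < z, "comparison principle violated"
    return y, z

# Even with equal initial data, the strict inequality f < g forces y < z
# immediately after x_0, and the gap persists.
y_end, z_end = euler_pair(0.0, 0.0, 20.0)
print(y_end, z_end)
```

Here the gap $z - y$ satisfies $(z - y)' = -(z - y) + \tfrac12$, so it relaxes toward $\tfrac12$, matching what the run shows.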