I am having a difficult time verifying the following theorem, and hope that someone can lend me a hand.
Holmes, in his book *Introduction to Perturbation Methods* (Second Edition), states:
"Theorem 1.4: Assume $f(x,\epsilon)$, $\phi(x,\epsilon)$ and $\phi_0(\epsilon)$ are continuous for $ a \leq x \leq b$ and $0 < \epsilon < \epsilon_1$.
(a) If $f \sim \phi$ for $a \leq x \leq b$, and if $|\phi(x,\epsilon)|$ is monotonically decreasing in $\epsilon$, then this asymptotic approximation is uniformly valid for $a \leq x \leq b$."
(Part (b) is not relevant to this question.)
When he writes $f\sim\phi$, he implicitly means as $\epsilon \downarrow 0$. Furthermore, this means that (at any fixed $x_0 \in [a,b]$), given a $\delta >0$, one can find an $\epsilon_0 > 0$ (generically dependent on $x_0$) such that
$$ 0 < \epsilon < \epsilon_0 \Rightarrow \ |f(x_0,\epsilon) - \phi(x_0,\epsilon) | < \delta |\phi(x_0,\epsilon)|$$
Part (a) of the theorem says that if $|\phi(x,\epsilon)|$ is monotonically decreasing with $\epsilon$ (for all $x \in [a,b]$), then one can find an $\epsilon_*$, independent of $x$, such that the above inequality holds whenever $0<\epsilon<\epsilon_*$ (i.e. the asymptotic approximation is uniform on $[a,b]$).
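Equivalently (assuming $\phi$ is nonvanishing on $[a,b]$ for small $\epsilon$, so the relative error makes sense), uniform validity says the worst case over $x$ is controlled by a single $\epsilon_*$:

```latex
% Uniform validity as a single sup-condition:
% for every \delta > 0 there is an \epsilon_* > 0, independent of x, with
\[
  \sup_{a \le x \le b}
  \frac{|f(x,\epsilon) - \phi(x,\epsilon)|}{|\phi(x,\epsilon)|} \le \delta
  \qquad \text{whenever } 0 < \epsilon < \epsilon_* .
\]
```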
I've tried showing that $\inf_{x_0 \in [a,b]} \epsilon_0(x_0) > 0$, but I keep having a difficult time relating what happens at different points $x$ in a useful way. Any ideas?
Take $x_1, x_2 \in [a,b]$. Then, given $\delta>0$, you know that there exist $\epsilon_0(x_1), \epsilon_0(x_2) > 0$ such that \begin{eqnarray} |f(x_1,\epsilon) - \phi(x_1,\epsilon)| \leq \delta |\phi(x_1,\epsilon)| \quad \text{for} \quad 0 < \epsilon < \epsilon_0(x_1), \\ |f(x_2,\epsilon) - \phi(x_2,\epsilon)| \leq \delta |\phi(x_2,\epsilon)| \quad \text{for} \quad 0 < \epsilon < \epsilon_0(x_2). \end{eqnarray}

Edited: I understand the monotonicity of $\phi$ to be as follows: \begin{equation} \phi(x,\epsilon_1) \leq \phi(x,\epsilon_2) \quad\text{if}\quad \epsilon_1 < \epsilon_2, \end{equation} so $\phi$ decreases as $\epsilon$ decreases -- which is rather the opposite of what 'monotonically decreasing in $\epsilon$' usually means.
The monotonicity of $\phi$ in $\epsilon$ can now be used in two ways. First, for every $x_i$, you can estimate $|\phi(x_i,\epsilon)|$ by its value at the rightmost point of the $\epsilon$-interval: \begin{equation} |\phi(x_i,\epsilon)| \leq |\phi(x_i,\epsilon_0(x_i))|. \end{equation} Second, without loss of generality, you can assume that $\epsilon_0(x_2) > \epsilon_0(x_1)$, so that $(0,\epsilon_0(x_1)) \subset (0,\epsilon_0(x_2))$. Since $|\phi|$ is monotone in $\epsilon$ in the sense above, you can therefore estimate \begin{equation} |\phi(x_1,\epsilon_0(x_1))| \leq |\phi(x_1,\epsilon_0(x_2))|. \end{equation}
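Chaining the displayed estimates (under the ordering $\epsilon_0(x_1) \le \epsilon_0(x_2)$ assumed without loss of generality) then gives a bound at $x_1$ expressed through the larger threshold $\epsilon_0(x_2)$:

```latex
% For 0 < \epsilon < \epsilon_0(x_1): first the pointwise estimate at x_1,
% then monotonicity of |\phi| in \epsilon, then \epsilon_0(x_1) \le \epsilon_0(x_2).
\begin{align*}
  |f(x_1,\epsilon) - \phi(x_1,\epsilon)|
    &\le \delta\,|\phi(x_1,\epsilon)| \\
    &\le \delta\,|\phi(x_1,\epsilon_0(x_1))| \\
    &\le \delta\,|\phi(x_1,\epsilon_0(x_2))|.
\end{align*}
```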
Edited: Therefore, to find a uniform upper bound for the $\epsilon$-interval, you should look at \begin{equation} \inf_{a\leq x \leq b} \epsilon_0(x), \end{equation} and show that this infimum is positive.