"Theorem 12" in these notes states the following (verbatim):
Let $f:[a,b]\to\mathbb{R}$ be continuous and let $\epsilon>0$. For $x\in[a,b]$, let $$\Delta(x)=\sup\left\{\delta\,\,|\,\,\text{for all}\,y\in[a,b]\,\text{with}\,|x-y|<\delta,\,|f(x)-f(y)|<\epsilon\right\}$$ Then $\Delta$ is a continuous function of $x$.
In other words (this is my paraphrase of the imprecise statement about the supremum), $\Delta(x)$ is the largest $\delta$ we can pick at $x$ that keeps the variation of $f$ within $\epsilon$. (Of course, for a specified $x$, the set of $\delta$'s is nonempty because $f$ is continuous.)
Following the statement of the theorem, we're invited to use it to prove that continuous functions on closed and bounded intervals are uniformly continuous.
But I think this theorem is false. Take $f(x)=\sqrt{x}$ on $[0,1]$ and $\epsilon=0.5$. I claim
$$\Delta(x)=\begin{cases}\sqrt{x}-0.25&x>0.25\\\sqrt{x}+0.25&0\leq x\leq0.25\end{cases}$$
which has a jump discontinuity at $x=0.25$. This follows from a straightforward calculation I carried out for arbitrary $\epsilon$, which gives
$$\Delta(x)=\begin{cases}2\epsilon\sqrt{x}-\epsilon^2&x>\epsilon^2\\2\epsilon\sqrt{x}+\epsilon^2&0\leq x\leq\epsilon^2\end{cases}$$
with a jump discontinuity at $x=\epsilon^2$. This contradicts the quoted theorem.
(As a side note, this computation gives a very nice brute-force way to discover that choosing $\delta=\epsilon^2$ suffices for the uniform continuity of $\sqrt{x}$ on $[0,\infty)$. Indeed it shows more: this choice of $\delta$ is the largest we can make, because it is the infimum of $\Delta$. I wish I'd done this computation in college.)
I have two questions:
- Am I missing something, or does my counterexample show this claim is false? I've reworked the calculation and don't believe I'm wrong, but I could spell out the details if someone asks.
- If the claim is false, can it be repaired to do what the instructor wanted to do with it?
(Please note that with question 2 I'm not asking for any old proof of the special case of the Heine-Cantor theorem that says continuous functions on closed and bounded intervals are uniformly continuous. I know the proof of the general result in arbitrary metric spaces. I'm asking: what theorem could the instructor possibly have had in mind instead of Theorem 12, as stated?)
ADDED: a sketch of the calculation of $\Delta(x)$ for $\sqrt{x}$ on $[0,\infty)$ for arbitrary $\epsilon$.
Break the work into cases.
Case one: $\sqrt{x}>\epsilon$, i.e. $x>\epsilon^2$. The inverse image of the interval $(\sqrt{x}-\epsilon,\sqrt{x}+\epsilon)$ is $\big((\sqrt{x}-\epsilon)^2,(\sqrt{x}+\epsilon)^2\big)$. Therefore
$$\Delta(x)=\min\big(x-(\sqrt{x}-\epsilon)^2,(\sqrt{x}+\epsilon)^2-x\big)=x-(\sqrt{x}-\epsilon)^2=2\epsilon\sqrt{x}-\epsilon^2$$
Case two: $\sqrt{x}\leq\epsilon$, i.e. $x\leq\epsilon^2$. Now we just look at the inverse image of $[0,\sqrt{x}+\epsilon)$, so it's only the right-hand boundary of the inverse image that matters for controlling the variation of $f$. The right-hand boundary is still $(\sqrt{x}+\epsilon)^2$, so $\Delta(x)=(\sqrt{x}+\epsilon)^2-x=2\epsilon\sqrt{x}+\epsilon^2$.
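Since this is a concrete computation, it can be sanity-checked numerically. The sketch below (a sketch only: `brute_delta` and `closed_form` are hypothetical helper names, and the bisection cap `hi=10.0` is an assumption) approximates $\Delta(x)$ by bisection and compares it with the piecewise formula at a few points away from the boundary $x=\epsilon^2$:

```python
import math

def brute_delta(x, eps, hi=10.0, iters=60):
    """Approximate Delta(x) = sup{ d : |y - x| < d implies |sqrt(y) - sqrt(x)| < eps }
    for f = sqrt on [0, infinity), by bisection on d."""
    def ok(d):
        # To the right of x, y ranges over [x, x + d); the sup of sqrt(y) - sqrt(x)
        # is approached but never attained, so "<= eps" is the correct test.
        if math.sqrt(x + d) - math.sqrt(x) > eps:
            return False
        # To the left: if d <= x, y ranges over (x - d, x] and the sup is again
        # not attained; if d > x, the domain endpoint y = 0 IS attained, so the
        # test there must be strict.
        if d <= x:
            return math.sqrt(x) - math.sqrt(x - d) <= eps
        return math.sqrt(x) < eps
    lo = 0.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if ok(mid):
            lo = mid
        else:
            hi = mid
    return lo

def closed_form(x, eps):
    # The piecewise formula from the two cases above (interior points only).
    if x > eps ** 2:
        return 2 * eps * math.sqrt(x) - eps ** 2
    return 2 * eps * math.sqrt(x) + eps ** 2

eps = 0.5
for x in (0.04, 0.2, 0.3, 0.5, 1.0):  # sample points away from x = eps^2
    assert abs(brute_delta(x, eps) - closed_form(x, eps)) < 1e-9
```

The asymmetry inside `ok` mirrors the two cases: to the right the supremum of $\sqrt{y}-\sqrt{x}$ is never attained, while for $d>x$ the endpoint $y=0$ of the domain is attained.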
CORRECTION (1/27/16): John Ma correctly points out in the comments that the correct formula for $\sqrt{x}$ is the lower semicontinuous function
$$\Delta(x)=\begin{cases}2\epsilon\sqrt{x}-\epsilon^2&x\color{red}{\geq}\epsilon^2\\2\epsilon\sqrt{x}+\epsilon^2&0\leq x\color{red}{<}\epsilon^2\end{cases}$$
rather than my original upper semicontinuous
$$\Delta(x)=\begin{cases}2\epsilon\sqrt{x}-\epsilon^2&x\color{blue}{>}\epsilon^2\\2\epsilon\sqrt{x}+\epsilon^2&0\leq x\color{blue}{\leq}\epsilon^2\end{cases}$$
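The boundary point itself can be checked directly by bisection (a sketch; `brute_delta` is a hypothetical helper, and it must treat the attained endpoint $y=0$ with a strict inequality, which is precisely what forces the lower semicontinuous version):

```python
import math

def brute_delta(x, eps, hi=10.0, iters=60):
    """Approximate Delta(x) for f = sqrt on [0, infinity) by bisection."""
    def ok(d):
        if math.sqrt(x + d) - math.sqrt(x) > eps:  # sup to the right, not attained
            return False
        if d <= x:
            return math.sqrt(x) - math.sqrt(x - d) <= eps  # sup to the left, not attained
        return math.sqrt(x) < eps  # y = 0 is attained, so the test is strict
    lo = 0.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if ok(mid):
            lo = mid
        else:
            hi = mid
    return lo

eps = 0.5
# At x = eps^2 the point y = 0 gives |sqrt(x) - sqrt(0)| = eps exactly, which is
# NOT < eps, so no delta larger than eps^2 works: the boundary belongs to case one.
assert abs(brute_delta(eps ** 2, eps) - eps ** 2) < 1e-9      # = 2*eps*sqrt(x) - eps^2
assert abs(brute_delta(eps ** 2, eps) - 3 * eps ** 2) > 0.4   # not 2*eps*sqrt(x) + eps^2
```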
Avoiding the trivial case, assume that $f$ is nonconstant and $\epsilon <\frac{1}{2}(\sup f - \inf f)$. Then $\Delta$ is finite valued: picking $y_1,y_2$ with $f(y_1)-f(y_2)>2\epsilon$, for every $x$ at least one of $|f(y_1)-f(x)|,|f(y_2)-f(x)|$ exceeds $\epsilon$, so $\Delta(x)\le b-a$. One can show that $\Delta$ is lower semicontinuous, i.e. $$\liminf_{z\to x}\Delta(z)\ge\Delta(x)\quad\text{for all }x\in[a,b].$$
To see this, let $x\in [a, b]$ and $\delta <\Delta(x)$ be arbitrary. By definition, whenever $z\in [a,b]$ and $|z-x|< \delta$, we have $|f(z)-f(x)|< \epsilon$. Moreover, for all $z\in [a,b]$ with $|z-x| <\delta$, $$\big(z - (\delta-|z-x|),\, z + (\delta - |z-x|)\big)\subset [x-\delta, x+\delta].$$
Note that there is $\epsilon' <\epsilon$ so that $|y-x| \le \delta $ implies $$\tag{1} |f(y) - f(x)|<\epsilon'.$$ (To see this, use $\delta<\Delta(x)$ and compactness: the continuous function $y\mapsto|f(y)-f(x)|$ attains its maximum on the compact set $[x-\delta, x+\delta]\cap[a,b]$, and that maximum is less than $\epsilon$.)
Now as $f$ is continuous at $x$, there is $\eta >0$ so that if $|z-x|<\eta$, then $$\tag{2} |f(z) - f(x)|<\epsilon - \epsilon '.$$ Thus if $|z-x|<\eta$, $y\in [a, b]$, and $|y-z|< \delta - |z-x|$, then $|y-x|\le|y-z|+|z-x|<\delta$, and by $(1)$ and $(2)$,
$$|f(z) - f(y)| \le |f(z) - f(x)| + |f(x) - f(y)| < \epsilon -\epsilon' +\epsilon' = \epsilon.$$
Thus if $|z-x|<\eta$, then $\Delta (z) \ge \delta - |z-x|$, and hence
$$\liminf_{z\to x} \Delta (z) \ge \delta.$$ As $\delta <\Delta(x)$ was arbitrary, we are done.
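This claim can also be watched numerically in the $\sqrt{x}$ example (a sketch, assuming the corrected closed form from the question; `delta` is a hypothetical helper): the infimum of $\Delta$ over small punctured neighborhoods of the jump point $x_0=\epsilon^2$ never drops below $\Delta(x_0)$, even though the left-hand limit exceeds it.

```python
import math

def delta(x, eps):
    """Corrected (lower semicontinuous) closed form of Delta for f = sqrt on [0, infinity)."""
    if x >= eps ** 2:
        return 2 * eps * math.sqrt(x) - eps ** 2
    return 2 * eps * math.sqrt(x) + eps ** 2

eps = 0.5
x0 = eps ** 2  # the jump point
# Lower semicontinuity at x0: inf over punctured neighborhoods stays >= Delta(x0).
for h in (1e-2, 1e-4, 1e-6):
    zs = [x0 - h + k * (2 * h) / 1000 for k in range(1001) if k != 500]
    assert min(delta(z, eps) for z in zs) >= delta(x0, eps) - 1e-12
# Upper semicontinuity fails: the limit from the left is 3*eps^2 > Delta(x0) = eps^2.
assert delta(x0 - 1e-9, eps) > delta(x0, eps) + eps ** 2
```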
Note now that this lower semicontinuity of $\Delta$ implies the result we were after: every continuous $f:[a,b]\to\mathbb R$ is uniformly continuous.
Proof: If $f$ is constant, the claim is obvious. Assume now that $f$ is not constant. Let $\epsilon >0$ be arbitrary; by shrinking it we may assume $\epsilon <\frac{1}{2}(\sup f - \inf f)$, so that $\Delta$ is finite valued. By the result above, $\Delta :[a,b] \to \mathbb R$ is lower semicontinuous. Let $\delta'$ be the minimum value of $\Delta$, attained at some $x_0\in [a,b]$ (this is where we use lower semicontinuity and the compactness of $[a,b]$: a lower semicontinuous function on a compact set attains its minimum). Note $\delta'=\Delta(x_0) >0$ as $f$ is continuous. Then letting $\delta =\frac 12 \delta'$ we have that
$$|f(y) - f(x)| <\epsilon$$
whenever $x, y\in [a,b]$ and $|y-x|<\delta$. Thus $f$ is uniformly continuous.
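For the $\sqrt{x}$ example this can be checked end to end (a sketch; the interval $[0,1]$ and the grid size are assumptions): the minimum of $\Delta$ is $\delta'=\epsilon^2$, attained at $x_0=0$ and $x_0=\epsilon^2$, and the single choice $\delta=\frac12\delta'$ indeed controls the variation everywhere.

```python
import math

eps = 0.5
delta_min = eps ** 2  # min of Delta for sqrt on [0, 1], attained at x = 0 and x = eps^2
d = delta_min / 2     # the proof's choice: half the minimum of Delta

# Grid check of uniform continuity on [0, 1] with this single delta.
n = 500
pts = [k / n for k in range(n + 1)]
worst = max(abs(math.sqrt(x) - math.sqrt(y))
            for x in pts for y in pts if abs(x - y) < d)
assert worst < eps
```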