counterexample to a "theorem" on continuity of largest deltas for continuous functions $f:[a,b]\to\mathbb{R}$


"Theorem 12" in these notes states the following (verbatim):

Let $f:[a,b]\to\mathbb{R}$ be continuous and let $\epsilon>0$. For $x\in[a,b]$, let $$\Delta(x)=\sup\left\{\delta\,\,|\,\,\text{for all}\,y\in[a,b]\,\text{with}\,|x-y|<\delta,\,|f(x)-f(y)|<\epsilon\right\}$$ Then $\Delta$ is a continuous function of $x$.

In other words (this is my paraphrase of the imprecise statement about the supremum), $\Delta(x)$ is the largest $\delta$ we can pick at $x$ that keeps the variation of $f$ within $\epsilon$. (Of course, for a specified $x$, the set of $\delta$'s is nonempty because $f$ is continuous.)

Following the statement of the theorem, we're invited to use it to prove that continuous functions on closed and bounded intervals are uniformly continuous.

But I think this theorem is false. Take $f(x)=\sqrt{x}$ on $[0,1]$ and $\epsilon=0.5$. I claim

$$\Delta(x)=\begin{cases}\sqrt{x}-0.25&x>0.25\\\sqrt{x}+0.25&0\leq x\leq0.25\end{cases}$$

which has a jump discontinuity at $x=0.25$. This follows from a straightforward calculation I carried out for arbitrary $\epsilon$, which gives

$$\Delta(x)=\begin{cases}2\epsilon\sqrt{x}-\epsilon^2&x>\epsilon^2\\2\epsilon\sqrt{x}+\epsilon^2&0\leq x\leq\epsilon^2\end{cases}$$

with a jump discontinuity at $x=\epsilon^2$. This contradicts the quoted theorem.

(As a side note, this computation gives a very nice brute-force way to discover that choosing $\delta=\epsilon^2$ suffices for the uniform continuity of $\sqrt{x}$ on $[0,\infty)$. Indeed it shows more: this choice of $\delta$ is the largest we can make, because it is the infimum of $\Delta$. I wish I'd done this computation in college.)

I have two questions:

  1. Am I missing something, or does my counterexample show this claim is false? I've reworked the calculation and don't believe I'm wrong, but I could spell out the details if someone asks.
  2. If the claim is false, can it be repaired to do what the instructor wanted to do with it?

(Please note that with question 2 I'm not asking for any old proof of the special case of the Heine-Cantor theorem that says continuous functions on closed and bounded intervals are uniformly continuous. I know the proof of the general result in arbitrary metric spaces. I'm asking: what theorem could the instructor possibly have had in mind instead of Theorem 12, as stated?)

ADDED: a sketch of the calculation of $\Delta(x)$ for $\sqrt{x}$ on $[0,\infty)$ for arbitrary $\epsilon$.

Break the work into cases.

Case one: $\sqrt{x}>\epsilon$, i.e. $x>\epsilon^2$. The inverse image of the interval $(\sqrt{x}-\epsilon,\sqrt{x}+\epsilon)$ is $\big((\sqrt{x}-\epsilon)^2,(\sqrt{x}+\epsilon)^2\big)$. Therefore

$$\Delta(x)=\min\big(x-(\sqrt{x}-\epsilon)^2,(\sqrt{x}+\epsilon)^2-x\big)=x-(\sqrt{x}-\epsilon)^2=2\epsilon\sqrt{x}-\epsilon^2$$

Case two: $\sqrt{x}\leq\epsilon$, i.e. $x\leq\epsilon^2$. Now we just look at the inverse image of $[0,\sqrt{x}+\epsilon)$, so it's only the right-hand boundary of the inverse image that matters for controlling the variation of $f$. The right-hand boundary is still $(\sqrt{x}+\epsilon)^2$, so $\Delta(x)=(\sqrt{x}+\epsilon)^2-x=2\epsilon\sqrt{x}+\epsilon^2$.

CORRECTION (1/27/16): John Ma points out in the comments that the correct calculation for $\sqrt{x}$ yields the lower semicontinuous function

$$\Delta(x)=\begin{cases}2\epsilon\sqrt{x}-\epsilon^2&x\color{red}{\geq}\epsilon^2\\2\epsilon\sqrt{x}+\epsilon^2&0\leq x\color{red}{<}\epsilon^2\end{cases}$$

rather than my original upper semicontinuous

$$\Delta(x)=\begin{cases}2\epsilon\sqrt{x}-\epsilon^2&x\color{blue}{>}\epsilon^2\\2\epsilon\sqrt{x}+\epsilon^2&0\leq x\color{blue}{\leq}\epsilon^2\end{cases}$$
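The corrected formula is easy to confirm numerically. Below is a brute-force sketch (the function names are mine): since $\Delta(x)$ is exactly the distance from $x$ to the nearest "bad" point $y$ with $|f(x)-f(y)|\geq\epsilon$, we can approximate it on a fine grid and compare against the closed form.

```python
import numpy as np

def delta(x, eps=0.5, a=0.0, b=1.0, n=400001):
    """Brute-force Delta(x) for f(x) = sqrt(x) on [a, b]: the distance
    from x to the nearest 'bad' y with |sqrt(x) - sqrt(y)| >= eps."""
    ys = np.linspace(a, b, n)
    bad = ys[np.abs(np.sqrt(x) - np.sqrt(ys)) >= eps]
    if bad.size == 0:                 # no point of [a, b] violates the bound
        return max(x - a, b - x)
    return float(np.min(np.abs(bad - x)))

def delta_closed(x, eps=0.5):
    """Corrected closed form: the boundary x = eps**2 belongs to the first case."""
    if x >= eps**2:
        return 2 * eps * np.sqrt(x) - eps**2
    return 2 * eps * np.sqrt(x) + eps**2
```

With $\epsilon=0.5$ the two functions agree to grid accuracy across $[0,1]$. In particular `delta(0.25)` comes out near $0.25$ (the point $y=0$ already violates the bound, since $|f(0.25)-f(0)|=\epsilon$ exactly), while `delta(0.2499)` comes out near $0.7499$, exhibiting the jump and the corrected boundary.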


On BEST ANSWER

To avoid the trivial case, assume that $f$ is nonconstant and that $\epsilon <\sup f - \inf f$, so that $\Delta$ is finite-valued. One can show that

Theorem !@: The function $\Delta$ is lower-semicontinuous. That is, $$\liminf_{z\to x} \Delta (z) \ge \Delta (x).$$

To see this, let $x\in [a, b]$ and $\delta <\Delta(x)$ be arbitrary. By definition, whenever $z\in [a,b]$ and $|z-x|< \delta$, we have $|f(z)-f(x)|< \epsilon$. Moreover, for all $z\in [a,b]$ with $|z-x| <\delta$, $$(z - (\delta-|z-x|),\, z + (\delta - |z-x|))\subset [x-\delta, x+\delta].$$

Note that there is $\epsilon' <\epsilon$ so that $|y-x| \le \delta $ implies $$\tag{1} |f(y) - f(x)|<\epsilon'.$$ (To see this, use the compactness of $[x-\delta, x+\delta]\cap[a,b]$ and the fact that $\delta <\Delta(x)$.)

Now as $f$ is continuous at $x$, there is $\eta >0$ so that if $|z-x|<\eta$, then $$\tag{2} |f(z) - f(x)|<\epsilon - \epsilon '.$$ Thus if $|z-x|<\eta$ and $y\in [a, b]$ and $|y-z|< \delta - |z-x|$, we have by $(1)$ and $(2)$,

$$|f(z) - f(y)| \le |f(z) - f(x)| + |f(x) - f(y)| < \epsilon -\epsilon' +\epsilon' = \epsilon.$$

Thus if $|z-x|<\eta$, then

$$\Delta (z) \ge \delta - |z-x|\Rightarrow \liminf_{z\to x} \Delta (z) \ge \delta.$$ As $\delta <\Delta(x)$ is arbitrary, we are done.
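For the running example $f(x)=\sqrt{x}$ on $[0,1]$ with $\epsilon=0.5$, this liminf inequality can be checked numerically at the jump point $x=\epsilon^2$. A brute-force sketch (the helper name is mine):

```python
import numpy as np

def delta(x, eps=0.5, a=0.0, b=1.0, n=400001):
    """Brute-force Delta(x) for f(x) = sqrt(x): distance from x to the
    nearest y in [a, b] with |sqrt(x) - sqrt(y)| >= eps."""
    ys = np.linspace(a, b, n)
    bad = ys[np.abs(np.sqrt(x) - np.sqrt(ys)) >= eps]
    return float(np.min(np.abs(bad - x))) if bad.size else max(x - a, b - x)

x0 = 0.25                                   # the jump point x = eps**2
probes = [x0 + s * h for h in (1e-3, 1e-4, 1e-5) for s in (-1, 1)]
# lower semicontinuity: Delta(z) never dips below Delta(x0) (up to grid error)
ok = all(delta(z) >= delta(x0) - 1e-3 for z in probes)
```

From the left, $\Delta(z)$ approaches $0.75$; from the right, $0.25=\Delta(x_0)$. So $\liminf_{z\to x_0}\Delta(z)=\Delta(x_0)$ and the inequality holds with equality, even though $\Delta$ jumps there.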

Note now that Theorem !@ implies

Theorem 11 : Every continuous function $f:[a,b] \to \mathbb R$ is uniformly continuous.

Proof: If $f$ is constant, the claim is trivially true, so assume $f$ is not constant. Let $\epsilon >0$ be arbitrary; by shrinking it if necessary, we may assume $\epsilon <\sup f - \inf f$, so that $\Delta$ is finite-valued. By Theorem !@, $\Delta :[a,b] \to \mathbb R$ is lower-semicontinuous. Let $\delta'$ be the minimum value of $\Delta$, attained at some $x_0\in [a,b]$ (this is where we use Theorem !@ and the compactness of $[a,b]$: a lower-semicontinuous function on a compact set attains its minimum). Note that $\delta'=\Delta(x_0) >0$ since $f$ is continuous at $x_0$. Then letting $\delta =\frac 12 \delta'$, we have that

$$|f(y) - f(x)| <\epsilon$$

whenever $x, y\in [a,b]$ and $|y-x|<\delta$. Thus $f$ is uniformly continuous.
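For the running example $f(x)=\sqrt{x}$ on $[0,1]$, the infimum of $\Delta$ is $\epsilon^2$, attained at $x=\epsilon^2$, and the proof above says any $\delta\le\frac12\epsilon^2$ works uniformly. A quick randomized sketch (names are mine) confirms that for this particular $f$ even $\delta=\epsilon^2$ itself works, and that nothing larger does:

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 0.5
d = eps**2                      # inf of Delta over [0, 1] for f = sqrt

# sample many pairs with |x - y| < d and verify the eps-bound
x = rng.uniform(0.0, 1.0, 100_000)
y = np.clip(x + rng.uniform(-d, d, x.size), 0.0, 1.0)
m = np.abs(x - y) < d           # keep only strictly closer-than-d pairs
uniform_ok = bool(np.all(np.abs(np.sqrt(x[m]) - np.sqrt(y[m])) < eps))

# d is also optimal: x = 0 and y = d + h violate the bound for any h > 0
h = 1e-4
violates = bool(abs(np.sqrt(0.0) - np.sqrt(d + h)) >= eps)
```

The uniform check succeeds because $|\sqrt{x}-\sqrt{y}|\le\sqrt{|x-y|}<\sqrt{\epsilon^2}=\epsilon$ whenever $|x-y|<\epsilon^2$, which is the brute-force observation from the side note above.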

On

Provided $\varepsilon < \frac{\max{f}-\min{f}}{2}$, the function is continuous: let $x\in[a,b]$. Assume, for a contradiction, that there is a sequence $x_n \rightarrow x$ such that $\Delta(x_n)\geq \Delta(x)+ \alpha$ for some $\alpha>0$. Then for large enough $n$, every $y\in (x-\Delta(x)-\alpha, x + \Delta(x) + \alpha)$ satisfies $y \in (x_n -\Delta(x_n), x_n+ \Delta(x_n))$. Thus $|f(y)- f(x)| \leq |f(y)-f(x_n)| + |f(x_n)- f(x)| < \varepsilon +|f(x_n)-f(x)|$. The last term tends to $\varepsilon$, so $|f(x)-f(y)|<\varepsilon$, and so $\Delta(x)\geq \Delta(x) + \alpha$, a contradiction. A similar calculation bounds the limit of $\Delta(x_n)$ from below, showing that $\Delta$ is continuous.

Edit As John Ma pointed out, this answer is incorrect. $\Delta$ is only lower semi-continuous, and the OP's example shows that it is not necessarily continuous.

On

John Ma's answer, together with my calculation for $\sqrt{x}$, settles the issue: the notes are wrong (on any reasonable interpretation of the supremum that makes the function $\Delta(x)$ well-defined), and lower semicontinuity is the best that can be concluded in general. But even this weaker result can be used to prove the fact—call it Heine's theorem—that continuous functions $f:[a,b]\to\mathbb{R}$ are uniformly continuous.

I've done some digging in the literature, though, and found some fascinating history on this question. A paper by Lennes from 1905 gives a counterexample similar to my computation for $\sqrt{x}$, but with $\sin x$ on $[0,\pi]$. More recent results in the American Mathematical Monthly show that while picking the largest delta gives only a lower semicontinuous function in general, we can nevertheless always choose the deltas continuously for any continuous function $f$. (In fact, if $f$ is uniformly continuous, we can choose the deltas uniformly continuously, too.) This opens onto the subject of "continuous selections."

I leave my findings here for those who might be interested, since these questions are not treated systematically in any standard analysis reference.


To begin with, the notes I quoted are hinting at the idea of Lüroth's alternative proof of Heine's theorem, given in

  • Lüroth, "Bemerkung über gleichmässige Stetigkeit," Mathematische Annalen, September 1873, Vol. 6 No. 3. Available here.

The idea is to deduce Heine's theorem from the continuity of a positive function that is very similar to $\Delta(x)$. The difference is that $\Delta(x)$ measures the variation $f(y)$ from a fixed value $f(x)$ for all $y$ within $\delta$ of $x$, while Lüroth's function, call it $\Delta_L(x)$, measures the variation of $f$ over all $y$ within $\delta$ of $x$:

$$\Delta_L(x):=\sup\left\{\delta\,\,\middle|\,\,|f(x_1)-f(x_2)|<\epsilon\ \text{for all}\ x_1,x_2\in(x-\delta,x+\delta)\cap[a,b]\right\}$$

This function really is continuous; as a result, it attains its infimum on $[a,b]$. And because the function is positive, this infimum is a delta that proves uniform continuity. (It is the best delta for the oscillation criterion, though for $\sqrt{x}$ it turns out to be smaller than $\inf\Delta$.) The trick for getting continuity of the deltas is to use the delta supplied by the oscillation definition of continuity rather than the delta in the more familiar definition. As Lüroth says, "Diese Definition der Stetigkeit stimmt offenbar überein mit der gewöhnlichen" ("this definition of continuity evidently agrees with the usual one"). But despite the equivalence of these two notions of continuity at a point, the largest-delta functions they produce are not equal.
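The contrast with $\Delta$ is easy to see numerically for the running example $f(x)=\sqrt{x}$ on $[0,1]$ with $\epsilon=0.5$. A sketch (the bisection helper and its names are mine; I read the supremum with a non-strict oscillation bound): since $\sqrt{x}$ is increasing, the oscillation over an interval is just the spread of the endpoint values, so $\Delta_L(x)$ can be found by bisection on $\delta$.

```python
import numpy as np

def osc(x, d, a=0.0, b=1.0):
    """Oscillation of sqrt over (x - d, x + d) intersected with [a, b]
    (sqrt is increasing, so this is the spread of the endpoint values)."""
    lo, hi = max(a, x - d), min(b, x + d)
    return np.sqrt(hi) - np.sqrt(lo)

def delta_L(x, eps=0.5, a=0.0, b=1.0):
    """Luroth's Delta_L(x): the largest d keeping the oscillation of sqrt
    on the d-neighborhood of x within eps, found by bisection on d."""
    lo, hi = 0.0, b - a
    if osc(x, hi, a, b) <= eps:
        return hi
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if osc(x, mid, a, b) <= eps:
            lo = mid
        else:
            hi = mid
    return lo
```

Unlike $\Delta$, which jumps at $x=\epsilon^2=0.25$, `delta_L` varies continuously there: `delta_L(0.25)` is about $0.2165$, with nearby values matching to within the probe distance. Its minimum over $[0,1]$ is $\epsilon^2/2=0.125$, attained at $x=\epsilon^2/2$, so for this $f$ the oscillation-based delta is indeed a valid but smaller uniform delta than $\inf\Delta=\epsilon^2$.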

A more contemporary version of this argument is given in

  • Teodora-Liliana Radulescu, Vicentiu D. Radulescu, and Titu Andreescu, Problems in Real Analysis: Advanced Calculus on the Real Axis, page 146. Available here.

As John Ma shows, though, we can prove Heine's theorem by taking the infimum of the largest deltas even if we use the deltas from the standard definition of continuity. In this case we don't have continuity, but we don't need it. Lower semicontinuity is enough. One wonders whether Lüroth realized this.

Veblen and Lennes give both Heine's proof and Lüroth's proof (pp. 88-90) in their influential book

  • Introduction to Infinitesimal Analysis, 1907. Available here.

They highlight the difference between $\Delta_L(x)$ and $\Delta(x)$, and a footnote on page 90 refers to a short note by Lennes that answers my question explicitly with several counterexamples: $\Delta(x)$ needn't be continuous. (He does not show lower semicontinuity.)

  • Lennes, "Remarks on a proof that a continuous function is uniformly continuous," Annals of Mathematics, 1905, Vol. 6 No. 2. Available here.

For the first of his four counterexamples, Lennes shows $\Delta(x)$ for $\sin x$ on $[0,\pi]$ and $\epsilon=\sqrt{3}/2$ has a jump discontinuity at $x=\pi/3$. He makes a minor error in claiming that $\Delta(\pi/3)=2\pi/3$ while the limit from the right is $\pi/3$. Actually, $\Delta(\pi/3)=\pi/3$ and the limit from the left is $2\pi/3$. I made the same error in my computation for $\sqrt{x}$.

This paper settles my original question: $\Delta(x)$ isn't continuous. But can we pick the deltas continuously across $x$ if we give up the requirement that we pick the best deltas? The answer is yes; in fact, we can also get a continuous function as both $x$ and $\epsilon$ vary. Several more recent papers discuss these results:

  • Seidman and Childress, "A Continuous Modulus of Continuity," American Mathematical Monthly, Vol. 82, No. 3 (Mar., 1975), pp. 253-254.
  • Guthrie, "A Continuous Modulus of Continuity," American Mathematical Monthly, Vol. 90, No. 2 (Feb., 1983), pp. 126-127.
  • Enayat, "$\delta$ as a Continuous Function of $x$ and $\epsilon$," American Mathematical Monthly, Vol. 107, No. 2 (Feb., 2000), pp. 151-155.
  • De Marco, "For Every $\epsilon$ There Continuously Exists a $\delta$," American Mathematical Monthly, Vol. 108, No. 5 (May, 2001), pp. 443-444.

See also

  • Reem, "New proofs and improvements of theorems in classical analysis," arXiv:0709.4492 [math.CA]. Available here.

which gives an excellent discussion of the optimal delta.