Spivak Rising Sun Lemma


Suppose that all points of $(a,b)$ are shadow points, that is $\forall x \in (a,b) \space \exists y\space(y>x \mbox{ and } f(y)>f(x))$.

For all $x\in(a,b)$, prove that $f(x)\leq f(b)$.
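To make the definition concrete, here is a small numerical sanity check of the shadow-point condition on a finite grid (the helper `is_shadow_point` and the sample functions are my own illustration, not part of the problem):

```python
# Finite-grid sanity check of the shadow-point definition (my own
# illustration; the actual lemma is about real intervals, not grids).

def is_shadow_point(f, x, xs):
    """x is a shadow point of f (relative to the sample xs) if some
    sample point y > x satisfies f(y) > f(x)."""
    return any(f(y) > f(x) for y in xs if y > x)

xs = [i / 100 for i in range(101)]  # grid on [0, 1]

# For a strictly increasing function, every point except the right
# endpoint is a shadow point (any y just to the right is a witness).
assert all(is_shadow_point(lambda t: t, x, xs) for x in xs[:-1])
assert not is_shadow_point(lambda t: t, xs[-1], xs)

# For a strictly decreasing function, no point is a shadow point.
assert not any(is_shadow_point(lambda t: -t, x, xs) for x in xs)
```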

In the solutions manual, Spivak lets $A=\{y: x\leq y \leq b \mbox{ and } f(x)\leq f(y)\}$ and proceeds:

Notice that we must have $f(x)\leq f(\sup A)$ because $f$ is continuous at $\sup A$ and there are points $y$ arbitrarily close to $\sup A$ with $f(x)\leq f(y)$. (A simple $\varepsilon - \delta$ argument is being suppressed).

How does $f(x)\leq f(\sup A)$ follow from the continuity of $f$? It would've been very helpful if he had not suppressed the argument.

BEST ANSWER

Edit: I think I better understand the book solution. It's a bit confusing because of Spivak's choice of $x$ as some arbitrary fixed point in $(a,b)$. Up to this point we've grown accustomed to looking at how things change as we change $x$, but here $x$ is a fixed point, at least for this stage of the argument.

The part you're asking about boils down to showing that $\alpha = \sup A$ is in $A$, i.e. $f(\alpha) \geq f(x)$.

Suppose instead that $f(\alpha) < f(x)$, and take $\varepsilon = f(x) - f(\alpha) > 0$.

Because $f$ is continuous, the values $f(y)$ can be made arbitrarily close to $f(\alpha)$ by taking $y$ sufficiently close to $\alpha$:

$\lim_{y \to \alpha-}f(y) = f(\alpha)$

For our $\varepsilon$ there exists some $\delta > 0$ such that for all $y$ if $$\alpha - \delta < y \leq \alpha \text{, then } |f(y) - f(\alpha)| < \varepsilon$$

In particular,

$$f(y) - f(\alpha) < \varepsilon $$ $$f(y) - f(\alpha) < f(x) - f(\alpha)$$ $$f(y) < f(x)$$

So there would be some $y_0$ in $(\alpha - \delta, \alpha)$ such that $$f(y) < f(x) \text{ for all } y \text{ in } [y_0, \alpha].$$

But this contradicts $\alpha$ being the least upper bound of $A$. ($y_0$ would be an upper bound less than $\alpha$).

Therefore $f(\alpha) \geq f(x)$, i.e. $\alpha \in A$.

You could show the same result by building the continuous function $g(y) = f(y) - f(x)$ and using Theorem 6-3 to show $g(y) < 0$ near $\alpha$...
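As a numerical sanity check that $\alpha = \sup A$ really lands in $A$, here is a sketch with concrete numbers of my own choosing ($f = \sin$, $x = 0.5$, $b = 3$): in this instance $\sup A = \pi - 0.5$, where $f(\sup A) = \sin(\pi - 0.5) = \sin(0.5) = f(x)$ exactly, so the inequality $f(\sup A) \geq f(x)$ holds with equality.

```python
import math

# Illustration of the step "alpha = sup A belongs to A" (my own numbers,
# not from Spivak).  Take f = sin, the fixed point x = 0.5, and b = 3.
f = math.sin
x, b = 0.5, 3.0

# A = {y : x <= y <= b and f(y) >= f(x)}, sampled on a fine grid.
ys = [x + (b - x) * k / 100000 for k in range(100001)]
A = [y for y in ys if f(y) >= f(x)]
alpha = max(A)  # grid approximation of sup A

# Here sup A = pi - 0.5, since sin(pi - 0.5) = sin(0.5) = f(x):
assert abs(alpha - (math.pi - 0.5)) < 1e-3
# And f(sup A) >= f(x), in this example with equality:
assert abs(f(alpha) - f(x)) < 1e-3
```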

Original post: I think I found a proof that is a bit simpler. Unfortunately, it doesn't directly use any of the supremum/infimum arguments that are the whole point of the chapter, so it may not be helpful.

I haven't been back to check, but it's likely this is identical to Spivak's solution, only phrased differently...

First, we know that $f(b) \geq f(y)$ for all $y > b$. (If this were not the case, $b$ would be a shadow point.)

Next, let's look at the interval $[a,b]$.

Suppose there exists some $x_0$ in $(a,b)$ with $f(x_0) > f(b)$.

In this case, consider the interval $[x_0,b]$. Since $f$ is continuous, it takes on its maximum value on this interval, i.e. there exists some $x_1$ in $[x_0,b]$ such that $f(x_1) \geq f(x_0) > f(b)$.

$f(x_1)$ is the maximum of $f$ on $[x_0, b]$, so $f(y) \leq f(x_1)$ for all $y$ in $[x_1, b]$; and since $f(x_1) > f(b) \geq f(y)$ for all $y > b$, it also exceeds every value of $f$ to the right of $b$. Therefore $x_1$ is not a shadow point. (Note that $x_1 \neq b$ because $f(x_1) > f(b)$, so $x_1$ lies in $(a,b)$.)

This contradicts the hypothesis that every point of $(a,b)$ is a shadow point, so we must have $f(x) \leq f(b)$ for all $x$ in $(a,b)$.

Given this, if $f(x) = f(b)$ for some $x$ in $(a,b)$, then again $x$ would not be a shadow point: every $y > x$ satisfies $f(y) \leq f(b) = f(x)$. Therefore, $f(x) < f(b)$ for all $x$ in $(a,b)$.

Finally we look at $a$. We must have $f(a) \geq f(b)$, otherwise $a$ would be a shadow point.

If $f(a) > f(b)$ then we have

$$f(a) > \frac{f(a) + f(b)}{2} > f(b)$$

and the IVT tells us there exists some $x$ with $a < x < b$ such that

$$f(x) = \frac{f(a) + f(b)}{2} > f(b)$$

but this is a contradiction because we know $f(x) < f(b)$ for all $x$ in $(a,b)$.

Therefore, we must have $f(a) = f(b)$.
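The whole statement can be checked numerically on a concrete instance (my own example, not from Spivak): take $f = \sin$ with $(a,b) = (\pi/2, 5\pi/2)$. Every interior point $x$ is a shadow point, with $y = b$ as witness since $\sin(b) = 1 > \sin(x)$, and both conclusions hold: $f(x) < f(b)$ on the interior and $f(a) = f(b)$.

```python
import math

# Concrete instance of the lemma (my own example): f = sin on the
# shadow interval (a, b) = (pi/2, 5*pi/2).
f = math.sin
a, b = math.pi / 2, 5 * math.pi / 2
xs = [a + (b - a) * k / 1000 for k in range(1, 1000)]  # interior grid

assert all(f(b) > f(x) for x in xs)  # y = b witnesses each shadow point
assert all(f(x) < f(b) for x in xs)  # the conclusion f(x) < f(b)
assert abs(f(a) - f(b)) < 1e-9       # and f(a) = f(b)
```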


This is really not difficult, and it is better to understand it without any $\epsilon$-$\delta$ argument.

You should note that if $y\in A$ then $f(y) \geq f(x)$. Suppose on the contrary that (P) $f(\sup A) < f(x)$. Then by continuity, (Q) there is a neighborhood of $\sup A$ on which the values of $f$ are less than $f(x)$, so this entire neighborhood contains no points of $A$. This clearly contradicts the definition of $\sup A$ (every neighborhood of the supremum must contain some point of the set).

You should be able to grasp why the continuity of $f$ lets you deduce conclusion (Q) from assumption (P). In simple language: by continuity we can find a neighborhood of $\sup A$ on which the values of $f$ are much nearer to $f(\sup A)$ than to $f(x)$, and hence these values are all less than $f(x)$.
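This step, (P) implies (Q), can be illustrated numerically (the function and the numbers below are my own choices for illustration): if $f(\alpha) < c$, then a whole neighborhood of $\alpha$ stays below $c$.

```python
import math

# Numerical illustration of (P) => (Q) (my own numbers): if
# f(alpha) < c, continuity gives a whole neighborhood of alpha
# on which f stays below c.
f = math.cos
alpha, c = 2.0, 0.0  # f(alpha) = cos(2) ~ -0.416 < 0
delta = 0.2          # a neighborhood radius that works here

ys = [alpha - delta + 2 * delta * k / 1000 for k in range(1001)]
assert all(f(y) < c for y in ys)  # f < c throughout (alpha-delta, alpha+delta)
```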

To simplify it further, consider the obvious: if we take numbers which are much closer to $1$ than to $2$, they will be less than $2$. And the statement remains true if $2$ is replaced by $1.5$ or $1.1$ or $1.001$ or any specific number greater than $1$.

The $\epsilon, \delta $ stuff is just a translation of the obvious things explained above in symbols.