Question on the proof that a continuous function on a closed interval attains a maximum - Proof inspiration


I'm trying to get stronger at constructing mathematical arguments, so as part of that process I attempt to prove as many as possible of the theorems presented in the textbook I'm reading, in this case Spivak's Calculus. When I attempted the following theorem and didn't succeed, I looked at the proof and found that Spivak applied the following trick:

[Images omitted: Spivak's statement of the theorem that a function continuous on $[a,b]$ attains a maximum value there, and his proof, which sets $\alpha = \sup\big\{f(x):x\in[a,b]\big\}$.]

The function $$g(x) = \frac{1}{\alpha - f(x)}$$

does seem esoteric, yet it had to come from somewhere. It had to come from some line of thinking that allowed Spivak to introduce this function and know the consequences of introducing it. My question is: what line of thinking was Spivak applying to this problem? What kinds of questions did he ask himself while working through it?

As an example of what I mean, I approached the question in this way:

I KNOW that $f$ is continuous on the closed interval $[a,b]$. This means that the function is bounded there. I would then probably write out the $\delta$-$\epsilon$ definition of continuity. I would also ask myself what I WANT: in this case, we are trying to show the existence of a value $y$ in our closed interval at which $f$ attains its maximum. I would most likely eventually conclude that it suffices to show $\alpha = f(y)$, where $\alpha$ is the supremum of the values of $f$. But then I would ask myself, "what or how can we show such a thing on an abstract set?" ... and I would be stuck. How did Spivak proceed from here? Even if I had stayed with it for a day or a few days, I probably would never have thought of introducing a new function. So what line of reasoning would bring about such a "moment of brilliance"?
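To see numerically what $g$ is doing, here is a small sketch with an example of my own (not from Spivak): $f(x) = x(2-x)$ on $[0,2]$ has supremum $\alpha = 1$, attained at $x = 1$. Sampling $g(x) = \frac{1}{\alpha - f(x)}$ at points approaching the maximizer shows $g$ growing without bound, which is exactly the behavior the proof exploits:

```python
# Illustrative example (mine, not Spivak's): f(x) = x*(2 - x) on [0, 2],
# with supremum alpha = 1 attained at x = 1.

def f(x):
    return x * (2 - x)

alpha = 1.0  # sup of f on [0, 2]

def g(x):
    return 1.0 / (alpha - f(x))

# Sample points approaching (but never equal to) the maximizer x = 1.
xs = [1 - 10 ** (-k) for k in range(1, 6)]
values = [g(x) for x in xs]

for x, v in zip(xs, values):
    print(f"x = {x:.5f}   g(x) = {v:.1f}")

# As f(x) approaches its supremum, g(x) grows without bound:
assert all(a < b for a, b in zip(values, values[1:]))
```

Since $f(1-\epsilon) = 1 - \epsilon^2$, each step multiplies $g$ by $100$, so the values blow up as $x \to 1$.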

On BEST ANSWER

In this case it comes from looking at what it means for $\alpha$ to be the least upper bound of $\big\{f(x):x\in[a,b]\big\}$.

Since $\alpha=\sup\big\{f(x):x\in[a,b]\big\}$, we know that for each $\epsilon>0$ there is an $x_\epsilon\in[a,b]$ such that $\alpha-f(x_\epsilon)<\epsilon$. Thus, we can make $\alpha-f(x)$ as small as we like by choosing a suitable $x\in[a,b]$. But that immediately tells us that we can make $\frac1{\alpha-f(x)}$ as big as we like by choosing a suitable $x\in[a,b]$. Oops!
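Written out, the contradiction behind that "Oops!" runs as follows (a sketch of the standard argument):

```latex
% Completing the contradiction: g cannot be both continuous and unbounded.
\begin{proof}[Sketch]
Suppose, for contradiction, that $f(x) < \alpha$ for every $x \in [a,b]$.
Then $g(x) = \dfrac{1}{\alpha - f(x)}$ is defined and continuous on $[a,b]$,
hence bounded there: $g(x) \le M$ for some $M > 0$.
But taking $\epsilon = 1/M$ in the least-upper-bound property gives an
$x_\epsilon \in [a,b]$ with $\alpha - f(x_\epsilon) < 1/M$, and therefore
$g(x_\epsilon) > M$, a contradiction.
Hence $f(y) = \alpha$ for some $y \in [a,b]$.
\end{proof}
```

So the function $g$ is not pulled out of thin air: it is the device that converts "$f$ gets arbitrarily close to $\alpha$ without reaching it" into "a continuous function on $[a,b]$ is unbounded", which is known to be impossible.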