Let $f: \mathbb R \to \mathbb R$ be a continuous function such that $\lim_{h \to 0^+} \frac{f(x+2h)-f(x+h)}{h}=0$ for every $x \in \mathbb R$. Prove that $f$ is constant.
This is question 389 from the book *Putnam and Beyond*. I looked at the solution, which goes roughly as follows. Suppose on the contrary that there are points $a < b$ with $f(a) \neq f(b)$, say $f(a) > f(b)$. Define $g(x) = f(x) + \lambda x$, where $\lambda > 0$ is sufficiently small that $g(a) > g(b)$. Then $\lim_{h \to 0^+} \frac{g(x+2h)-g(x+h)}{h}=\lambda$. Since $g$ is continuous on the closed and bounded interval $[a,b]$, it attains a maximum there, say at $c$; because $g(a) > g(b)$, we have $c \neq b$. Fix $\epsilon$ with $0 < \epsilon < \lambda$. Then, by the limit condition at $c$, there is $\delta > 0$ such that $0 < \lambda - \epsilon < \frac{g(c+2h)-g(c+h)}{h} < \lambda + \epsilon$ for all $0 < h < \delta$. Fix $0 < h_0 < \min\{\delta, (b-c)/2\}$. Then $g(c+2h_0) > g(c+h_0) > g\left(c+\frac{h_0}{2}\right) > \cdots > g\left(c+\frac{h_0}{2^m}\right)$ for every natural number $m$, so, letting $m$ go to infinity and using continuity, $g(c+2h_0) > g(c)$, contradicting the maximality of $g$ at $c$. Thus our initial assumption was false, and the conclusion follows.
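To spell out the telescoping step: taking $x = c$ and $h = h_0/2^m$ (which is less than $\delta$) in the two-sided bound gives

$$g\left(c+\frac{h_0}{2^{m-1}}\right) - g\left(c+\frac{h_0}{2^{m}}\right) > (\lambda - \epsilon)\,\frac{h_0}{2^{m}} > 0,$$

which is exactly the chain of strict inequalities. Since $g$ is continuous, $g\left(c+\frac{h_0}{2^m}\right) \to g(c)$ as $m \to \infty$, so $g(c+2h_0) > g(c+h_0) \geq g(c)$, and $c + 2h_0 < b$ keeps the point inside $[a,b]$.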
I get the proof, but I was wondering what the intuition behind this solution is. How would a problem solver come up with the idea for it? Please be more specific than "it takes practice to recognize patterns".
We want to show this by contradiction. So we assume there are points $a < b$ at which $f$ takes different values. All we know is that $f$ is continuous and $[a,b]$ is compact: this has to be used suitably. Of course, this gives the existence of maximum and minimum values of $f$ attained on $[a,b]$.
What we do is this: since we are assuming $f(a) > f(b)$, what we ideally want is to show that the maximum, which is certainly not attained at $b$, is not attained anywhere.
What if we were to experiment with $f$?
If $c$ were such a maximum point, then near $c$ we have, by interpreting the limit being zero, that for $d$ just above $c$, the values $f(d)$ and $f(\frac{c+d}{2})$ are close. A rather naive idea would then be to iterate: $f(\frac{c+d}{2})$ and $f(\frac{\frac{c+d}{2} + c}{2})$ are also close, then take another arithmetic mean, and so on; a sum of "close differences" should still be close, and in some such manner we hope to reach the conclusion. There are two roadblocks:
First, the iteration-and-limit process does not let us conclude that $f(d)$ is close to $f(c)$, because an infinite sum of very small quantities need not even be finite, let alone small.
Second, and more importantly, even if $f(d)$ were close to $f(c)$, it could lie on either side of $f(c)$. We would prefer to somehow get $f(d) > f(c)$, because that is the contradiction we seek, and that cannot happen here.
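The first roadblock is a completely general phenomenon, not special to this problem: the terms of a sum can all tend to zero while the sum diverges, as in

$$\sum_{m=1}^{\infty} \frac{1}{m} = \infty \qquad \text{even though} \qquad \frac{1}{m} \to 0,$$

so the smallness of the individual telescoping differences, by itself, controls nothing about their total.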
That is where the need for a "contrived" function comes in. What this contrived function must do is make the given limit strictly positive for all $x$ in $[a,b]$. This small change works magic: it suitably modifies $f$ on the interval $[a,b]$, so that $g(d)$ ends up not just close to $g(\frac{c+d}{2})$, but strictly larger. Now you can throw both roadblocks out of the window: there is no sum of small quantities to estimate, and inequalities are preserved in the limit!
In other words, the given condition is hard to exploit directly: the fact that the limit is zero at every point, rather than some positive quantity, prevents us from contradicting the existence of a maximum on a compact interval like $[a,b]$. So, by creating $g$, we have manufactured a similar situation with the crucial difference I have emphasised: the limit is positive, and that is what lets us derive the contradiction.
Now, the question you may ask is this: could we have chosen some other $g$? Well, we might have: all we needed to ensure was that $g$ preserved $g(a) > g(b)$, and that the limit, which is given to be zero for $f$, becomes strictly positive on $[a,b]$. A linear addition does this most simply; I am sure other candidates would have done the same.
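Both requirements are easy to check for $g(x) = f(x) + \lambda x$. For the limit,

$$\frac{g(x+2h)-g(x+h)}{h} = \frac{f(x+2h)-f(x+h)}{h} + \frac{\lambda(x+2h)-\lambda(x+h)}{h} \;\longrightarrow\; 0 + \lambda \quad (h \to 0^+),$$

while $g(a) > g(b)$ amounts to $f(a) + \lambda a > f(b) + \lambda b$, i.e.

$$0 < \lambda < \frac{f(a)-f(b)}{b-a},$$

and such a $\lambda$ exists precisely because $f(a) > f(b)$.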
With this, I hope you understand the following:
At first, we sat in anguish at seeing the limit being zero: the inability to pin down the sign of the difference quotient was the trouble. This is sorted out rather elegantly by $g$.
As for an alternate approach, I am sure there must be some theorem, built from lots of little lemmas, that will show $f$ is differentiable. When I think of it, I will get back to you.