Lipschitz function with right derivative =0


$f:\mathbb{R}\to\mathbb{R}$ is a Lipschitz function satisfying $$\forall x\in\mathbb{R}, \quad \lim_{n\to\infty} n\left(f\left(x+\frac1n\right)-f(x)\right)=0.$$ I want to show that $f$ is constant. I proved $$\lim_{y\to x^+} \frac{|f(y)-f(x)|}{|y-x|}=0$$ by choosing $y\in[x+\frac1{n+1},x+\frac1n]$ and using the Lipschitz condition. How can I finish the rest?
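Not part of the question, but a quick numerical sanity check (a Python sketch with example functions chosen here purely for illustration) shows why the hypothesis is so restrictive: a non-constant Lipschitz function such as $f(x)=|x|$ must violate it somewhere, here at $x=0$.

```python
# Illustration only: the quantity n*(f(x + 1/n) - f(x)) from the hypothesis,
# evaluated for a non-constant Lipschitz function and for a constant one.

def seq_quotient(f, x, n):
    """n * (f(x + 1/n) - f(x)), the quantity in the hypothesis."""
    return n * (f(x + 1.0 / n) - f(x))

f = abs                       # Lipschitz with constant 1, but not constant
vals = [seq_quotient(f, 0.0, n) for n in (10, 100, 1000)]
print(vals)                   # stays at 1.0: the hypothesis fails at x = 0

g = lambda x: 3.0             # a constant function satisfies the hypothesis
print([seq_quotient(g, 0.0, n) for n in (10, 100, 1000)])  # all 0.0
```

This is of course only a spot check at one point, not a substitute for the proof below.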


There are 2 best solutions below

BEST ANSWER

We can prove a more general result. Suppose $f:\mathbb R\to \mathbb R$ and $x\in \mathbb R.$ Let's say $Df(x) =0$ if there is a sequence $x_n\to x^+$ such that

$$\frac{f(x_n)-f(x)}{x_n-x} \to 0.$$

Thm: If $f:\mathbb R\to \mathbb R$ is continuous and $Df(x) = 0$ for all $x\in \mathbb R,$ then $f$ is constant.

Proof: Suppose $a<b;$ we want to show $f(b) = f(a).$ Let $\epsilon>0.$ Since $\epsilon$ is arbitrary, it suffices to show $|f(b)-f(a)|\le \epsilon(b-a).$

Set $E = \{x\in [a,b]: |f(x)-f(a)|\le \epsilon(x-a)\}.$ Clearly $a\in E,$ so $E$ is nonempty, and the continuity of $f$ shows $E$ is closed. Suppose, to reach a contradiction, that $b\not\in E.$ Being closed and bounded, $E$ contains a largest element $x_0,$ and $x_0 < b.$ Because $x_0\in E,$ we have $|f(x_0) - f(a)|\le \epsilon(x_0-a).$ And since $Df(x_0) = 0,$ there is $x_1$ with $x_0<x_1 < b$ such that

$$|f(x_1)-f(x_0)| \le \epsilon(x_1-x_0).$$

We then have

$$|f(x_1)-f(a)| \le |f(x_1)-f(x_0)| +|f(x_0)-f(a)|$$ $$ \le \epsilon(x_1-x_0)+\epsilon(x_0-a) = \epsilon(x_1-a).$$

Thus $x_1\in E,$ and $x_1>x_0,$ contradicting the definition of $x_0.$ This shows $b\in E,$ proving the theorem.
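The "last point" argument above can be sketched discretely (a Python grid approximation, with example functions and parameters of my own choosing; this illustrates the set $E$, it does not replace the proof):

```python
# Grid sketch of E = {x in [a,b] : |f(x) - f(a)| <= eps*(x - a)} and of its
# largest element x0.  For a constant f, E fills [a,b]; for f(x) = x with
# eps < 1, only x = a qualifies.

def last_point(f, a, b, eps, steps=10_000):
    h = (b - a) / steps
    xs = [a + i * h for i in range(steps + 1)]
    in_E = [x for x in xs if abs(f(x) - f(a)) <= eps * (x - a)]
    return max(in_E)          # a is always in E, so in_E is nonempty

# Constant function: x0 reaches b.
print(last_point(lambda x: 5.0, 0.0, 1.0, eps=0.01))   # 1.0

# f(x) = x with eps = 0.5: the inequality x <= 0.5*x holds only at x = a.
print(last_point(lambda x: x, 0.0, 1.0, eps=0.5))      # 0.0
```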


This kind of question has been asked and answered here before, and a search would doubtless find the necessary ideas. It asks whether a function with a very weak type of derivative equal to zero at every point must be constant.

Unfortunately the assumption that $f$ is Lipschitz is misleading and would foul up the search. As @zhw points out, continuity is enough; moreover, all one needs is that at every point there is a right-hand derived number equal to zero. (The sequence need not be the same at every point, as the sequence $\frac1n$ is here.)

The correct historical reference for this problem is a monotonicity theorem that is implicit here. It is due to Dini and is quite ancient. His proof is essentially just the same "last point" argument that @zhw has just presented.

Theorem (Dini 1878). If $f:[a,b]\to\mathbb R$ is continuous and $D^+ f(x) \geq 0$ for all $a\leq x < b$ then $f$ is nondecreasing. [Fondamenti per la teorica delle funzioni di variabili reali (Pisa, T. Nistri, 1878)]

Note the use of the upper Dini derivative here, which seems a very weak condition. The assumption in the given problem ensures that $D^+ f(x) \geq 0$ everywhere: the difference quotients tend to zero along the sequence $\frac1n,$ so their limsup over all $h\to 0^+$ is at least zero. The assumption that $f$ is Lipschitz is overkill; we need only continuity. Thus $f$ is nondecreasing. But $-f$ satisfies the same hypotheses, so $-f$ is also nondecreasing, and one concludes that $f$ is constant. We realize here that this is about monotonicity and not about constancy after all: generally, when someone asks you to prove a function is constant, there is a pair of monotonicity problems lurking.
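That closing remark can be illustrated numerically (a rough Python check only, with a small-$h$ difference quotient standing in for $D^+$ and example functions of my choosing): Dini's theorem applied to $f$ gives "nondecreasing", applied to $-f$ it gives "nonincreasing", and only a constant passes both checks.

```python
# Approximate D^+ f(x) by a right difference quotient with small h.
# This is a heuristic stand-in for the Dini derivative, not its definition.

def approx_right_quotient(f, x, h=1e-6):
    return (f(x + h) - f(x)) / h

grid = [i / 10.0 for i in range(10)]

f = lambda x: x                         # nondecreasing, not constant
print(all(approx_right_quotient(f, x) >= 0 for x in grid))                 # True
print(all(approx_right_quotient(lambda x: -f(x), x) >= 0 for x in grid))   # False

c = lambda x: 2.0                       # constant: both checks pass
print(all(approx_right_quotient(c, x) >= 0 for x in grid))                 # True
print(all(approx_right_quotient(lambda x: -c(x), x) >= 0 for x in grid))   # True
```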

The standard modern reference is Saks, Theory of the Integral, pp. 203-204 where he gives, in addition to the original, a useful refinement due to Zygmund.