Prove that if $f$ is continuous and $f$ is equal to its integral from 0 to $x$, then $f(x) = 0$ for all $x$


The question: we have a continuous function $f:\mathbb{R}\to \mathbb{R}$ such that $$f(x)=\int_0^x f$$ for all real $x$, and we are asked to show that $f(x) = 0$ for all real $x$.

The major difficulty for me is the limited toolbox: I only have theorems about derivatives and Taylor/power series (without the use of integrals), some integral properties (such as the ones relating absolute values, sums, and subtractions; I have no theorems about change of variables or integration by parts, for example), and the first statement of the Fundamental Theorem of Calculus (no mention of differential equations, nor of the second statement of the FTC):

If $f:[a,b]\to \mathbb{R}$ is a continuous function and $F:[a,b]\to \mathbb{R}$ is defined by $$F(x) = \int_a^xf$$ for all $x\in[a,b]$, then $F'(x) = f(x)$ for all $x\in (a,b)$.

All those integrals are defined with the Darboux definition, which takes the $\sup$ and $\inf$ of $f$ on each subinterval of a partition and then the $\sup$ of the lower sums and the $\inf$ of the upper sums to define the integral. (So we are not using the Riemann definition.)

I thought I could use the fact that $f$ is infinitely differentiable: since $f'(x) = f(x)$ by the first part of the FTC, we get $f^{(n)}(x) = f(x)$ for all $n$, and since $f(0) = 0$ (the integral over a degenerate interval), we have $f^{(n)}(0) = 0$ for all natural $n$. Then the Taylor series of $f$ around $x=0$ would simply be $T(x) = 0$ for all real $x$. But then I would need to prove that $f$ is analytic on all of $\mathbb{R}$, which I don't know how to do... The only theorem my book has about this idea of analyticity is the following:

Let $f:I\to \mathbb{R}$ be an infinitely differentiable function on an interval $I$ which contains $0$ in its interior. Let $(-r,r)$ be the largest open interval of that form contained in $I$ such that for each $c$ with $0<c<r$ we have $$\lim_{n\to\infty}\left[\frac{1}{n!}M_n(c)c^n \right] = 0,$$ where $M_n(c)$ is the maximum of $|f^{(n)}(x)|$ on the interval $[-c,c]$. Then the Taylor series of $f(x)$ at $x=0$ converges to $f(x)$ on the interval $(-r,r)$.

But honestly, I don't exactly know how to proceed with my idea of proof using that theorem... any idea would be very welcome and appreciated!
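For what it's worth, the idea does seem to go through with that theorem. A sketch of the missing step, assuming the theorem's $M_n(c)$ bounds $|f^{(n)}|$ on $[-c,c]$:

Since $f^{(n)} = f$ for every $n$, the quantity $M_n(c) = \max_{[-c,c]} |f^{(n)}|$ does not depend on $n$: $M_n(c) = M_0(c)$. Hence for every fixed $c > 0$,
$$\lim_{n\to\infty} \frac{1}{n!} M_n(c)\, c^n = M_0(c) \lim_{n\to\infty} \frac{c^n}{n!} = 0,$$
because $c^n/n! \to 0$ for any fixed $c$. The hypothesis therefore holds for every $c > 0$, so with $I = \mathbb{R}$ we may take $r = \infty$, and the Taylor series of $f$ at $0$ converges to $f$ on all of $\mathbb{R}$. Since $f^{(n)}(0) = f(0) = 0$ for all $n$, that series is identically $0$, so $f \equiv 0$.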


There are 4 answers below.

Answer 1:

Hint: First show that it is zero on any interval $[0,b]$.

We have $f(0)=0$ and $f'(x)=f(x)$ so $|f'(x)|\leq |f(x)|$. Now this is an exercise in Rudin's Analysis book.

Let $M_0=\sup|f(x)|$ on $[0,b]$. Fix $x_0\in[0,b]$. For any $x\in[0,x_0]$, the MVT gives some $c\in(0,x)$ with \begin{align} |f(x)|=|f(x)-f(0)|=|f'(c)|x=|f(c)|x\leq x_0M_0. \end{align} Taking the supremum on the left, we get $M_0 \leq x_0 M_0$, which implies $M_0=0$ provided we ensure $x_0<1$. You can then repeat the argument, starting at $x_0$, until you fill up the whole interval. Since $b$ is arbitrary, the claim holds on $[0,\infty)$. A similar argument works in the other direction.

EDIT: This assumes you don't know the function $e^x$. Otherwise, the general solution of $f'(x)=f(x)$ is $f(x)=Ce^x$, and $f(0)=0$ forces $C=0$.
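The contraction behind this argument can also be checked numerically. A minimal sketch, assuming trapezoidal quadrature is accurate enough here: if $f = Tf$ with $(Tf)(x)=\int_0^x f$, then on $[0,b]$ we have $\sup|Tf| \le b\,\sup|f|$, so repeatedly applying $T$ to any starting function drives its sup-norm to $0$ when $b<1$.

```python
# Iterate f <- T f, where (T f)(x) = integral of f from 0 to x, on [0, 1/2].
# Each step multiplies the sup-norm by at most b = 1/2, so it shrinks to 0.
import numpy as np

b = 0.5                                  # interval [0, b] with b < 1
x = np.linspace(0.0, b, 2001)
dx = x[1] - x[0]
f = np.cos(3 * x) + 1.0                  # arbitrary continuous starting function

sups = []
for _ in range(10):
    # cumulative trapezoidal integral: f <- T f
    f = np.concatenate(([0.0], np.cumsum((f[1:] + f[:-1]) * dx / 2)))
    sups.append(np.max(np.abs(f)))

# each application multiplies the sup-norm by at most b = 0.5
ratios = [sups[i + 1] / sups[i] for i in range(len(sups) - 1)]
print(sups[-1], max(ratios))
```

The shrinking ratio mirrors the inequality $M_0 \le x_0 M_0$ in the answer: a fixed point of $T$ must have sup-norm $0$ on $[0,b]$.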

Answer 2:

We have $f'(x)=f(x)$. Consider the interval $[0,n]$ for some fixed but arbitrary $n\in \mathbb N$. Set $M=\max_{0\le x\le 1/2}|f(x)|$. Then, since $f(0)=0$, for each $x\in[0,1/2]$ the MVT gives some $0<c<x$ with $|f(x)|=|f'(c)|x=|f(c)|x\le \frac{M}{2}$. Taking the supremum of the left-hand side over $x\in [0,1/2]$, we get $M\le M/2$, which forces $M=0$. Now repeat the argument on the interval $[1/2,1]$, then on $[1,3/2]$, etc. After $2n$ steps we cover the entire interval $[0,n]$, and since $n$ is arbitrary, the result follows.

For another, more intuitive proof, using the fact that we are dealing with the initial value problem $y'=y,\ y(0)=0$: suppose there is an interval $(a,b)\subseteq [0,x]$ such that $f(a)=0$ but $f(y)\neq 0$ for $a<y<b.$ It is enough to show that this is impossible.

Without loss of generality, $f(y)>0$. This means that we can define $g(y)=\ln f(y)$ on this interval. Now note that $g'(y)=\frac{f'(y)}{f(y)}=1$ and that $g$ is increasing on this interval since $f'(y)=f(y)>0$ and $\ln $ is increasing.

Using this, together with the Mean Value Theorem, we have for $a<y<\frac{a+b}{2},$

$g\left(\frac{a+b}{2}\right)-g(y)=g'(c)\left(\frac{a+b}{2}-y\right)=\frac{a+b}{2}-y<b-a$, so $g(y)\ge g\left(\frac{a+b}{2}\right)-(b-a).$ But as $y\to a^+$ we have $f(y)\to f(a)=0$, so $g(y)=\ln f(y)\to-\infty$, contradicting this lower bound.

An identical argument works for $\frac{a+b}{2}<y<b.$

Answer 3:

But isn't this Gronwall...?
Letting $G(x)= e^{-x} \int_{0}^x f(s)\,ds$, the product rule gives
$$G'(x)= e^{-x}\left( f(x)- \int_{0}^x f(s)\,ds\right)=0,$$
so $G$ is constant. Since $G(0)=0$, we get $\int_0^x f = 0$ for all $x$, and hence $f(x)=\int_0^x f=0$. Done.
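The product-rule step can be sanity-checked numerically. A minimal sketch, with $g=\sin$ as a stand-in for an arbitrary continuous function (for which $\int_0^x \sin = 1-\cos x$ gives a closed form): for $G(x)=e^{-x}\int_0^x g$, the product rule claims $G'(x)=e^{-x}\left(g(x)-\int_0^x g\right)$.

```python
# Compare a central finite difference of G against the product-rule formula.
import numpy as np

def G(x):
    # with g = sin, the integral of g from 0 to x is 1 - cos(x)
    return np.exp(-x) * (1.0 - np.cos(x))

x = np.linspace(0.1, 3.0, 50)
h = 1e-6                                            # finite-difference step
numeric = (G(x + h) - G(x - h)) / (2 * h)           # central difference for G'
claimed = np.exp(-x) * (np.sin(x) - (1.0 - np.cos(x)))
err = np.max(np.abs(numeric - claimed))
print(err)
```

For a solution of the original problem, the bracket $f(x)-\int_0^x f$ vanishes identically, which is exactly why $G'\equiv 0$ there.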

Answer 4:

No MVT, no derivatives, no exponential function, etc. This proof uses only basic properties of continuous functions.

Let us first show by induction that $f \equiv 0$ on $[0,n]$ for each positive integer $n$.

Let $M$ be the maximum of $f$ on $[0,1]$. There exists $x_0 \in [0,1]$ such that $f(x_0)=M$ (note $M \ge f(0)=0$). Now $$M=f(x_0)=\int_0^{x_0} f(t)\,dt \leq \int_0^{x_0} M \,dt=M x_0 \leq M.$$ Thus equality holds throughout, and by continuity this forces $f$ to have the constant value $M$ on $[0,x_0]$. Since $f(0)=0$ it follows that $M=0$, so $f \leq 0$ on $[0,1]$; and since $-f$ also satisfies the hypothesis, we get $f \geq 0$ too. Hence $f \equiv 0$ on $[0,1]$.

Suppose we know that $f \equiv 0$ on $[0,n]$. Consider the maximum value $M_n$ of $f$ on $[n,n+1]$, attained at some $x_n$, and repeat the argument above. Using $f\equiv 0$ on $[0,n]$, we get $$M_n=f(x_n)=\int_0^{x_n} f(t)\,dt=\int_n^{x_n} f(t)\,dt \leq \int_n^{x_n} M_n \,dt=M_n (x_n-n) \leq M_n,$$ and we conclude as before that $f\equiv 0$ on $[n, n+1]$. The induction is now complete.

Now consider $[-1,0]$. Let $M$ be the maximum of $|f(x)|$ on this interval. Then there exists $y \in [-1,0]$ with $|f(y)|=M$. Now $$M=|f(y)|=\left|\int_0^{y} f(t)\,dt\right| \leq M|y| \leq M.$$ This implies that $|f(t)|$ has the constant value $M$ on $[y,0]$. Since $f(0)=0$, this constant value must be $0$, so $M=0$. This implies that $f\equiv 0$ on $[-1,0]$. I will leave the induction argument to you.
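A discretized version of the uniqueness claim can be sketched numerically. Assumptions: a uniform grid on $[0,1]$ and a trapezoid-rule matrix $A$ as a stand-in for the operator $f \mapsto \int_0^x f$. The fixed-point equation $f = \int_0^x f$ then becomes $(I-A)f = 0$; if $I-A$ is far from singular, the only discrete solution is $f = 0$.

```python
# Build the cumulative trapezoid-rule matrix A and show I - A is well-conditioned,
# so the discretized equation (I - A) f = 0 has only the zero solution.
import numpy as np

n = 400
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]

# A[i, j] = trapezoid weight of f(x_j) in the approximation of the integral
# of f from 0 to x_i (row 0 stays zero: the integral over [0, 0] vanishes)
A = np.zeros((n, n))
for i in range(1, n):
    A[i, :i + 1] = dx        # interior weights
    A[i, 0] = dx / 2         # endpoint weights
    A[i, i] = dx / 2

# smallest singular value of I - A: bounded away from 0, hence ker(I - A) = {0}
smin = np.linalg.svd(np.eye(n) - A, compute_uv=False).min()
print(smin)
```

This reflects the fact that the Volterra integration operator has spectral radius $0$, so $I - A$ stays invertible as the grid is refined.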