Suppose $f\in C^1([0,1])$ and $f'(0)\neq 0$. For $x\in(0,1]$, let $\theta(x)$ be such that $$\int_0^x f(t)dt = f(\theta(x))x$$ Find $$\lim_{x\to 0^{+}} \frac{\theta(x)}{x}$$
I am thinking about a Taylor series expansion of $$F(x)=\int_0^x f(t)\,dt$$ but the sticking point is the remainder term: I can't figure out which remainder term I should use. Also, what is the point of taking the limit only as $x\to 0^+$? Any help is appreciated.
Note that $f$ is continuously differentiable on $[0,1]$ and $f'(0)\neq 0$, so $f'$ maintains a constant sign on some interval $[0,h]$ with $h>0$. Hence $f$ is strictly monotone, and therefore invertible, on $[0,h]$ with inverse $g$ (say).
Next let $$F(x) =\int_{0}^{x}f(t)\,dt$$ so that, by definition, $$\theta(x) =g\left(\frac{F(x)} {x} \right)$$ By the Fundamental Theorem of Calculus we have $F(x)/x\to f(0)$ as $x\to 0^+$, and by continuity of $g$ this means that $\theta(x) \to g(f(0))=0$. Defining $\theta(0)=0$, our job is now to find $\theta'(0)$, which is precisely the limit of $\theta(x)/x$ as $x\to 0^+$.
We have $$\theta'(x) =g'\left(\frac{F(x)} {x} \right) \cdot\frac{xf(x) - F(x)} {x^2}$$ Taking limits as $x\to 0^{+}$, and noting that $g'$ is continuous near $f(0)$ (since $f\in C^1$ and $f'\neq 0$ near $0$), we have $$\lim_{x\to 0^+}\theta'(x)=g'(f(0))\cdot\lim_{x\to 0^{+}}\frac{xf(x)-F(x)}{x^2}=\frac{1}{f'(0)}\cdot\lim_{x\to 0^+}\frac{f(x)+xf'(x)-f(x)}{2x}=\frac{1}{2}$$ (the last step uses L'Hospital's Rule). Since this one-sided limit of $\theta'$ exists, the mean value theorem gives $\theta'(0)=\lim_{x\to 0^+}\theta(x)/x=\frac{1}{2}$ as well.
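As a quick sanity check (not part of the proof), the limit can be tested numerically. For the illustrative choice $f(t)=e^t$ we have $F(x)=e^x-1$, and solving $f(\theta(x))=F(x)/x$ gives the closed form $\theta(x)=\ln\bigl((e^x-1)/x\bigr)$:

```python
import math

def theta(x):
    # For f(t) = exp(t): F(x) = exp(x) - 1, and solving
    # exp(theta) = F(x)/x gives theta = log((exp(x) - 1)/x).
    return math.log((math.exp(x) - 1.0) / x)

for x in (1e-1, 1e-2, 1e-3):
    print(x, theta(x) / x)  # ratios approach 1/2 as x -> 0+
```

For this $f$ one can even expand: $\theta(x)/x = \tfrac12 + \tfrac{x}{24} + O(x^2)$, so the convergence to $1/2$ is visible already at moderate $x$.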
This question reminds me of a famous result in differential calculus concerning the limiting behavior of the parameter $\theta$ that appears in Taylor's Theorem.
Let's first state it:

**Theorem.** Suppose $f^{(n+1)}$ exists and is continuous in a neighborhood of $a$ with $f^{(n+1)}(a)\neq 0$, and let $\theta_n\in(0,1)$ denote the parameter in the Lagrange remainder of the $n$-th order Taylor expansion of $f$ about $a$. Then $\theta_n\to \dfrac{1}{n+1}$ as $h\to 0$.
And now to the proof of the above result. By Taylor's theorem we have $$f(a+h) =f(a) +hf'(a) +\dots+\frac{h^{n-1}}{(n-1)!}f^{(n-1)}(a)+\frac{h^n}{n!}f^{(n)} (a+\theta_n h) \tag{1}$$ and $$f(a+h) =f(a) +hf'(a) +\dots+\frac{h^{n-1}}{(n-1)!}f^{(n-1)}(a)+\frac{h^n}{n!}f^{(n)} (a)+\frac{h^{n+1}}{(n+1)!}f^{(n+1)}(a+\theta_{n+1}h) \tag{2}$$ where both $\theta_n, \theta_{n+1}$ lie in $(0,1)$. The subscript notation is used to distinguish the thetas appearing in the Taylor expansions above and the theorem mentioned above deals with $\theta_n$.
Comparing the two Taylor expansions above we get $$f^{(n)} (a+\theta_n h) =f^{(n)} (a) +\frac{hf^{(n+1)}(a+\theta_{n+1}h)}{n+1}\tag{3}$$ But using mean value theorem we have $$f^{(n)} (a+\theta_n h) =f^{(n)} (a) +\theta_n hf^{(n+1)}(a+\theta\theta_n h) \tag{4}$$ for some $\theta\in(0,1)$.
Again comparing $(3)$ and $(4)$ we get $$\theta_n=\frac{f^{(n+1)}(a+\theta_{n+1}h)}{(n+1) f^{(n+1)}(a+\theta\theta_n h)}$$ Letting $h\to 0$ and using the continuity of $f^{(n+1)}$ together with $f^{(n+1)}(a)\neq 0$, we get $$\theta_n\to\frac{f^{(n+1)}(a)}{(n+1)f^{(n+1)}(a)}=\frac{1}{n+1}$$
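The theorem can be illustrated numerically. Under the assumed choices $f(x)=e^x$, $a=0$, $n=2$, solving $e^h = 1 + h + \frac{h^2}{2}e^{\theta_2 h}$ for $\theta_2$ gives $\theta_2 = \frac{1}{h}\ln\bigl(2(e^h-1-h)/h^2\bigr)$, which should tend to $1/(n+1)=1/3$:

```python
import math

def theta2(h):
    # f(x) = exp(x), a = 0, n = 2: solve
    # exp(h) = 1 + h + (h**2 / 2) * exp(theta2 * h) for theta2.
    return math.log(2.0 * (math.exp(h) - 1.0 - h) / h**2) / h

for h in (1e-1, 1e-2, 1e-3):
    print(h, theta2(h))  # values approach 1/3 as h -> 0
```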
For your question, apply the theorem to the anti-derivative $F$ with $a=0$, $n=1$, and use the symbol $x$ in place of $h$. We have $$F(x) =F(0)+xF'(\theta_1 x)$$ i.e. $$\int_{0}^{x}f(t)\,dt=xf(\theta_1 x)$$ so that $\theta_1=\theta(x) /x$, and by the theorem this tends to $1/(n+1)=1/2$ as $x\to 0$ (provided $F''=f'$ is continuous in a neighborhood of $0$ and $F''(0)=f'(0)\neq 0$).
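For a function whose $\theta$ has no convenient closed form, the same limit can be checked generically: compute $F(x)/x$ by numerical quadrature and solve $f(\theta)=F(x)/x$ by bisection on $[0,x]$. A sketch, using the illustrative choice $f(t)=1/(1+t)$ (which has $f'(0)=-1\neq 0$ and is decreasing, so bisection applies directly):

```python
def f(t):
    return 1.0 / (1.0 + t)   # sample integrand with f'(0) = -1 != 0

def integral(a, b, n=1000):
    # composite Simpson's rule (n must be even)
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3.0

def theta(x):
    # solve f(theta) = F(x)/x by bisection; f is decreasing on [0, x]
    target = integral(0.0, x) / x
    lo, hi = 0.0, x
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if f(mid) > target:   # mid too small: f is still above the mean value
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

for x in (1e-1, 1e-2):
    print(x, theta(x) / x)  # ratios approach 1/2 as x -> 0+
```

The same scaffold works for any $C^1$ integrand monotone near $0$; only `f` changes.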