I'm working through Gelfand and Fomin's book on calculus of variations. One of the book's exercises is to prove the uniqueness portion of a result called "Bernstein's theorem" on solutions to equations of the form $y'' = F(x, y, y')$. The book states the theorem thus:
If the functions $F$, $F_y$, and $F_{y'}$ are continuous at every finite point $(x, y)$ for every finite $y'$, and if a constant $k > 0$ and functions $$\alpha = \alpha(x, y) \geq 0, \qquad \beta = \beta(x, y) \geq 0$$ (which are bounded in every finite region of the plane) can be found such that $$F_y(x, y, y') > k, \quad |F(x, y, y')| < \alpha y'^2 + \beta,$$ then one and only one integral curve of the equation $y'' = F(x, y, y')$ passes through any two points $(a, A)$ and $(b, B)$ with different abscissas ($a \neq b$).
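To make the hypotheses concrete, here is a small example I checked them against (mine, not the book's): $F(x, y, y') = y + y'^2$. Then $$\frac{\partial F}{\partial y} = 1 > \tfrac12 = k, \qquad |F(x,y,y')| \le y'^2 + |y|,$$ so the hypotheses hold with $\alpha(x,y)\equiv 1$ and $\beta(x,y)=|y|$, both bounded in every finite region, and the theorem asserts that exactly one solution of $y''=y+y'^2$ joins any two points with different abscissas.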
(Subscripts on $F$ mean partial derivatives.) The hint for the exercise is:
Let $\Delta(x) = \varphi_2(x) - \varphi_1(x)$, where $\varphi_1(x)$ and $\varphi_2(x)$ are two solutions of $y'' = F(x, y, y')$, write an expression for $\Delta''$ and use the condition $F_y(x, y, y') > k$.
Following the hint, I got the expression $$\Delta''(x) = F(x, \varphi_2(x), \varphi'_2(x)) - F(x, \varphi_1(x), \varphi_1'(x)).$$
I thought that I could use the condition on $F_y$ to get some sort of lower bound on the magnitude of the RHS of this equation, and then try to turn that into a proof that, for distinct solutions, $\Delta(a)$ and $\Delta(b)$ cannot both be zero. But since in general $\varphi_1'(x) \neq \varphi_2'(x)$, I don't see what I can conclude about $F(x, \varphi_2(x), \varphi'_2(x)) - F(x, \varphi_1(x), \varphi_1'(x))$ unless I also know something about $F_{y'}$, and the theorem imposes only a very weak hypothesis on $F_{y'}$, namely continuity.
Fix $x$ and apply the mean value theorem to the function $$g(t):=F\bigl(x,\; t\varphi_2(x)+(1-t)\varphi_1(x),\; t\varphi'_2(x)+(1-t)\varphi_1'(x)\bigr)$$ to find \begin{align}\Delta''(x)&=g(1)-g(0)=g'(c)\\ &=F_y(x,f_c(x),f'_c(x))\,(\varphi_2(x)-\varphi_1(x))+F_{y'}(x,f_c(x),f'_c(x))\,(\varphi_2'(x)-\varphi_1'(x)) \\ &=-G(x)\Delta(x)-H(x)\Delta'(x),\end{align} where $c=c(x)\in(0,1)$, $f_c(x):=c\varphi_2(x)+(1-c)\varphi_1(x)$, $f'_c(x):=c\varphi_2'(x)+(1-c)\varphi_1'(x)$, $G(x):=-F_y(x,f_c(x),f'_c(x))$, and $H(x):=-F_{y'}(x,f_c(x),f'_c(x))$.

So now you have the linear equation $$\Delta''(x)+H(x)\Delta'(x)+G(x)\Delta(x)=0,$$ where you know that $\Delta(a)=\Delta(b)=0$ and $G(x)\le -k<0$. Note also that $G$ and $H$ are bounded on $[a,b]$: the points $(x,f_c(x),f'_c(x))$ stay in a bounded set because $\varphi_1,\varphi_2,\varphi_1',\varphi_2'$ are continuous on $[a,b]$, and $F_y$, $F_{y'}$ are continuous.

Now you have to apply the maximum principle, which says that if $H$ and $G$ are bounded with $G\le0$, and $\Delta$ achieves a nonnegative maximum value $M$ at an interior point $d$, then $\Delta(x)\equiv M$. Assume by contradiction that $\Delta>0$ somewhere in $(a,b)$; then by continuity it attains a maximum value $M>0$ at some $d\in (a,b)$, so $\Delta(x)\equiv M$ by the maximum principle, which contradicts $\Delta(b)=0$. This shows that $\Delta\le 0$. By interchanging $\varphi_1$ with $\varphi_2$ you get $\Delta\ge 0$ as well, hence $\Delta\equiv 0$.
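As an aside, this is where the hypothesis $F_y > k > 0$, i.e. $G < 0$, really enters: without it the maximum principle, and uniqueness itself, can fail. A standard example is $$y'' = -y, \qquad F(x,y,y') = -y,\quad F_y = -1 < 0:$$ both $y \equiv 0$ and $y = \sin x$ pass through $(0,0)$ and $(\pi,0)$, and indeed $\Delta(x)=\sin x$ satisfies $\Delta'' + \Delta = 0$ (so $G \equiv 1 > 0$, $H \equiv 0$) and has a positive interior maximum without being constant.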
Are you familiar with the maximum principle? You can find it in Protter and Weinberger's book *Maximum Principles in Differential Equations* (Theorem 3). The trick is to take the function $$z(x):=\Delta (x)+\varepsilon (e^{\alpha (x-d)}-1),$$ where $\varepsilon>0$ is small and $\alpha>0$ is very large (this $\alpha$ is just a constant, unrelated to the $\alpha(x,y)$ in the theorem). Let me know if you want more details.
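In case more detail is useful right away, here is roughly how the standard argument behind that trick goes, in the notation above (just a sketch; the names $L$, $v$, $x_0$, $x_1$, $\xi$, $C$ are introduced here for convenience). Write $L[u] := u'' + Hu' + Gu$ and $v(x) := e^{\alpha(x-d)} - 1$, so that $z = \Delta + \varepsilon v$. Suppose $\Delta$ attains a nonnegative maximum $M$ at the interior point $d$ but $\Delta(x_1) < M$ for some $x_1 > d$ (the case $x_1 < d$ is symmetric). Using $G \le 0$, $$L[v] = \bigl(\alpha^2 + H\alpha\bigr)e^{\alpha(x-d)} + G\bigl(e^{\alpha(x-d)} - 1\bigr) \;\ge\; e^{\alpha(x-d)}\bigl(\alpha^2 + H\alpha + G\bigr),$$ and if $|H|, |G| \le C$ on the interval, taking $\alpha$ so large that $\alpha^2 - C\alpha - C > 0$ gives $L[v] > 0$, hence $L[z] = L[\Delta] + \varepsilon L[v] = \varepsilon L[v] > 0$. Now fix any $x_0 \in (a, d)$ and take $\varepsilon > 0$ so small that $z(x_1) < M$; since $v < 0$ to the left of $d$, also $z(x_0) < M$, while $z(d) = \Delta(d) = M$. So $z$ attains a maximum $\ge M \ge 0$ at some interior point $\xi \in (x_0, x_1)$, where $z'(\xi) = 0$, $z''(\xi) \le 0$ and $G(\xi)z(\xi) \le 0$, forcing $L[z](\xi) \le 0$, a contradiction. Hence $\Delta \equiv M$.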