Proving that the extrema of a cubic with 3 distinct roots always fall between the roots


By Rolle's Theorem, if a differentiable function $f$ satisfies $f(a)=f(b)$, then there is a point $c$ between $a$ and $b$ at which $f'(c)=0$.

Now, consider a cubic polynomial function with 3 distinct real roots,

$f(x)=A(x-a)(x-b)(x-c)$

It is now necessary to prove that the $x$-coordinates of the extrema of $f(x)$ fall between $a$, $b$ and $c$. Stated otherwise, $a<x_1<b<x_2<c$ (assuming $a<b<c$), where $x_1$ and $x_2$ are the $x$-coordinates of the extrema.

The problem, however, is that this should be proven without using Rolle's Theorem. I can prove this fact for a quadratic: I get $x=(a+b)/2$, which shows that the extremum is halfway between the roots. With a cubic, however, it is much more difficult. Any ideas?
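Before proving anything, the claim can be sanity-checked numerically. The sketch below (an illustration, not a proof) computes the critical points of $f(x)=(x-a)(x-b)(x-c)$ via the quadratic formula on $f'$ and checks the claimed interlacing; the sample roots are arbitrary values chosen for demonstration.

```python
import math

def critical_points(a, b, c):
    """Critical points of f(x) = (x-a)(x-b)(x-c), found by solving
    f'(x) = 3x^2 - 2(a+b+c)x + (ab+bc+ca) = 0 with the quadratic formula."""
    s1 = a + b + c
    s2 = a*b + b*c + c*a
    d = math.sqrt(s1*s1 - 3*s2)   # strictly positive when the roots are distinct
    return (s1 - d) / 3, (s1 + d) / 3

# Arbitrary sample roots a < b < c, for illustration only.
a, b, c = -2.0, 0.5, 3.0
x1, x2 = critical_points(a, b, c)
assert a < x1 < b < x2 < c   # the claimed interlacing
```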


There are 3 best solutions below

BEST ANSWER

I'm taking $A=1$ and $a < b < c$ as the roots. We get the cubic $$ x^3 - \sigma_1 x^2 + \sigma_2 x - \sigma_3 \; , $$ where $$ \sigma_1 = a + b + c, $$ $$ \sigma_2 = bc + ca + ab, $$ $$ \sigma_3 = abc. $$ The first derivative is $3 x^2 - 2 \sigma_1 x + \sigma_2,$ with roots $$ \frac{\sigma_1 \pm \sqrt{\sigma_1^2 - 3 \sigma_2}}{3} \; .$$
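The expansion and the derivative-root formula can be spot-checked numerically (an illustration, not a proof); the values of $a, b, c$ below are arbitrary samples.

```python
import math

# Check that (x-a)(x-b)(x-c) expands to x^3 - s1 x^2 + s2 x - s3,
# and that the roots of f'(x) = 3x^2 - 2 s1 x + s2 are (s1 +/- sqrt(s1^2 - 3 s2)) / 3.
a, b, c = 1.0, 2.0, 4.0   # arbitrary sample roots
s1, s2, s3 = a + b + c, b*c + c*a + a*b, a*b*c
for x in (-1.5, 0.0, 0.7, 3.2):
    assert abs((x-a)*(x-b)*(x-c) - (x**3 - s1*x**2 + s2*x - s3)) < 1e-9
d = math.sqrt(s1*s1 - 3*s2)
for r in ((s1 - d)/3, (s1 + d)/3):
    assert abs(3*r*r - 2*s1*r + s2) < 1e-9   # f'(r) = 0
```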

Worth emphasizing that, with $a,b,c$ distinct, we get $$ \sigma_1^2 - 3 \sigma_2 = a^2 + b^2 + c^2 - bc - ca - ab = \frac{1}{2} \left( (b-c)^2 + (c-a)^2 + (a-b)^2 \right) $$ being strictly positive, so the two roots of $3 x^2 - 2 \sigma_1 x + \sigma_2$ are real and distinct.
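The sum-of-squares identity above admits a quick numerical check over a few arbitrary sample triples (the last one has a repeated value, where the expression degenerates to a single squared difference):

```python
# Check s1^2 - 3 s2 = (1/2)[(b-c)^2 + (c-a)^2 + (a-b)^2] on sample triples.
for a, b, c in ((1.0, 2.0, 4.0), (-3.0, 0.5, 0.6), (2.0, 2.0, 5.0)):
    s1, s2 = a + b + c, b*c + c*a + a*b
    rhs = 0.5 * ((b - c)**2 + (c - a)**2 + (a - b)**2)
    assert abs(s1*s1 - 3*s2 - rhs) < 1e-9
```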

The claim that $c > \frac{\sigma_1 + \sqrt{\sigma_1^2 - 3 \sigma_2}}{3}$ comes from combining the observation that $c > \frac{\sigma_1}{3}$ (since $c$ is the largest root, $3c > a+b+c$) with $$ \left( c - \frac{\sigma_1}{3} \right)^2 -\left( \frac{\sigma_1^2 - 3 \sigma_2}{9} \right) = \frac{1}{3} (c-a)(c-b) > 0 $$

The claim that $a < \frac{\sigma_1 - \sqrt{\sigma_1^2 - 3 \sigma_2}}{3}$ comes from combining the observation that $a < \frac{\sigma_1}{3}$ (since $a$ is the smallest root, $3a < a+b+c$) with $$ \left( a - \frac{\sigma_1}{3} \right)^2 -\left( \frac{\sigma_1^2 - 3 \sigma_2}{9} \right) = \frac{1}{3} (c-a)(b-a) > 0 $$

The final claim is that $b$ lies between the critical points, in that the distance between $b$ and $\sigma_1/3$ is smaller than $\sqrt{\sigma_1^2 - 3 \sigma_2}/3.$ Indeed, $$ \left( b - \frac{\sigma_1}{3} \right)^2 -\left( \frac{\sigma_1^2 - 3 \sigma_2}{9} \right) = \frac{1}{3} (c-b)(a-b) < 0 $$
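The three factorization identities driving this argument can be spot-checked together on an arbitrary sample triple (illustration only; the symbolic expansions are straightforward but tedious by hand):

```python
# For each root r, check (r - s1/3)^2 - (s1^2 - 3 s2)/9 equals the
# stated product of root differences divided by 3.
a, b, c = -1.0, 0.4, 2.5   # arbitrary sample distinct roots
s1, s2 = a + b + c, b*c + c*a + a*b
D = (s1*s1 - 3*s2) / 9
for r, prod in ((c, (c-a)*(c-b)),    # positive: c outside the critical points
                (a, (c-a)*(b-a)),    # positive: a outside the critical points
                (b, (c-b)*(a-b))):   # negative: b between the critical points
    assert abs((r - s1/3)**2 - D - prod/3) < 1e-9
```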

ANSWER

Translating and scaling, you can assume that $a=0$ and $b=1$. Then $$f(x)=x^3-(c+1)x^2+cx$$ $$f'(x)=3x^2-2(c+1)x+c$$

The roots of $f'$ are

$$\frac{2c+2\pm\sqrt{4c^2-4c+4}}6=\frac{c+1\pm\sqrt{c^2-c+1}}3$$

Since $c>b=1$ and $\sqrt{c^2-c+1}$ is increasing for $c>1/2$, we have $$\frac{c+1+\sqrt{c^2-c+1}}3>\frac{1+1+1}3=1$$ and $$\frac{c+1-\sqrt{c^2-c+1}}3=\frac{c+1-\sqrt{(c-\frac12)^2+\frac34}}3<\frac{c+1-(c-\frac12)}3=\frac12<1$$ The remaining bounds follow similarly: the smaller root is positive because $(c+1)^2>c^2-c+1$ reduces to $3c>-1$, and the larger root is less than $c$ because $\sqrt{c^2-c+1}<2c-1$ for $c>1$ (squaring gives $3c(c-1)>0$).
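For the normalized cubic with roots $0, 1, c$, all four bounds $0 < x_1 < 1 < x_2 < c$ can be checked numerically across a range of sample values of $c$ (an illustration, not a proof):

```python
import math

# Critical points of f(x) = x(x-1)(x-c) are (c+1 +/- sqrt(c^2 - c + 1)) / 3.
for c in (1.1, 2.0, 5.0, 100.0):   # sample values with c > 1
    d = math.sqrt(c*c - c + 1)
    x1, x2 = (c + 1 - d) / 3, (c + 1 + d) / 3
    assert 0 < x1 < 1 < x2 < c     # extrema interlace the roots 0, 1, c
```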

ANSWER

Suppose $a, b, c$ with $a<b<c$ are the roots of the cubic $f(x)$. Then by Rolle's theorem the derivative $f'(x)$ vanishes at least once in $(a, b)$ and at least once in $(b, c)$. Since $f'(x)$ is a quadratic, these are its only roots. This means that $f'(x) \neq 0$ if $x\notin[a, c]$. For an extremum of $f$ the derivative must vanish, and hence any extremum of $f$ must lie in $(a, c)$. More precisely, there is one extremum in $(a, b)$ and another in $(b, c)$, and these are of opposite nature. To understand why, look back at the roots of the derivative $f'(x)$. As noted, there are two roots $p, q$ with $p\in(a, b)$ and $q\in(b, c)$, and $f'$ changes sign as $x$ moves past each of $p, q$; the sign change at $p$ is opposite to that at $q$.
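The opposite sign changes of $f'$ at $p$ and $q$ can be illustrated numerically for a sample cubic (taking leading coefficient $1$, so $f'$ is positive outside $[p,q]$ and negative inside):

```python
import math

a, b, c = 0.0, 1.0, 3.0                 # arbitrary sample roots a < b < c
s1, s2 = a + b + c, b*c + c*a + a*b
fp = lambda x: 3*x*x - 2*s1*x + s2      # f'(x) for f = (x-a)(x-b)(x-c)
d = math.sqrt(s1*s1 - 3*s2)
p, q = (s1 - d)/3, (s1 + d)/3
eps = 1e-3
assert fp(p - eps) > 0 > fp(p + eps)    # f' flips + to - at p: local max of f
assert fp(q - eps) < 0 < fp(q + eps)    # f' flips - to + at q: local min of f
```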


If you want to avoid calculus altogether, then you can assume without loss of generality that the cubic polynomial is $f(x)=x(x-h)(x+k)$ where $h, k$ are positive (this amounts to scaling and translating a given cubic).

Then we can analyze the behavior of $f$ on the intervals $$A=(-\infty, - k),\; B=(-k, 0),\; C=(0,h),\; D=(h,\infty)$$ The sign of $f$ on these intervals is $-, +, -, +$ respectively. We will prove that $f$ is increasing on $A$ and $D$. Take the interval $D$ and consider two points $a, b\in D$ (here $a, b$ denote generic points, not the roots above) with $h<a<b$. Then we have $$\frac{f(b) - f(a)}{b-a} = a^2+b^2+ab+(k-h)(a+b)-hk=(a-h)(a+b) +b^2+kb+k(a-h)>0$$ and this proves that $f$ is increasing on $D$; the interval $A$ is treated the same way.
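The difference-quotient algebra above can be spot-checked numerically (an illustration, not a proof); the parameters $h, k$ and the point pairs are arbitrary samples with $h < a < b$:

```python
# For f(x) = x(x-h)(x+k), check that (f(b)-f(a))/(b-a) equals
# a^2 + b^2 + ab + (k-h)(a+b) - hk, and that it is positive on D = (h, inf).
h, k = 1.5, 2.0                          # sample positive parameters
f = lambda x: x * (x - h) * (x + k)
for a, b in ((1.6, 2.0), (2.0, 7.0), (10.0, 10.5)):
    q = (f(b) - f(a)) / (b - a)
    closed = a*a + b*b + a*b + (k - h)*(a + b) - h*k
    assert abs(q - closed) < 1e-8
    assert q > 0                         # f is increasing on D
```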

Thus any extremum of $f$ must lie in either $B$ or $C$, i.e., between the roots. Further analysis shows that $f$ has a maximum in $B$ and a minimum in $C$.
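That final claim can also be illustrated numerically: for sample $h, k > 0$, the critical points of $f(x)=x(x-h)(x+k)$ land one in $B$ with $f''<0$ (a local maximum) and one in $C$ with $f''>0$ (a local minimum).

```python
import math

# f(x) = x^3 + (k-h) x^2 - hk x, so f'(x) = 3x^2 + 2(k-h)x - hk
# and f''(x) = 6x + 2(k-h).
h, k = 2.0, 3.0                       # sample positive values, illustration only
disc = math.sqrt((k - h)**2 + 3*h*k)
x1 = (-(k - h) - disc) / 3            # smaller critical point
x2 = (-(k - h) + disc) / 3            # larger critical point
fpp = lambda x: 6*x + 2*(k - h)
assert -k < x1 < 0 and fpp(x1) < 0    # local maximum in B = (-k, 0)
assert 0 < x2 < h and fpp(x2) > 0     # local minimum in C = (0, h)
```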