Interval in which roots lie.


For a quadratic equation, we have several conditions from which we can determine the interval in which the roots lie.

e.g., if exactly one root of a quadratic equation lies in the interval $(k_1, k_2)$, it can be shown that $f(k_1) \cdot f(k_2) < 0$.

My question is: Can these same conditions be applied for, say, a cubic equation? Or a biquadratic equation?

On BEST ANSWER

You have to be careful about saying that $f(k_1) \cdot f(k_2) < 0$ whenever a root of a quadratic equation lies in $(k_1, k_2)$. For instance, the quadratic $y = (x-2)^2$ has a root in $(1, 3)$, but $f(1) \cdot f(3) > 0$.
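A quick numerical check of this counterexample, as a small Python sketch (the function name `f` is just for illustration):

```python
# Counterexample: f(x) = (x - 2)^2 has a root at x = 2 inside (1, 3),
# yet the endpoint product is positive, so the sign test misses it.
def f(x):
    return (x - 2) ** 2

print(f(1) * f(3))  # 1 -- positive, even though a root lies in (1, 3)
print(f(2))         # 0 -- the double root the sign test fails to detect
```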

The converse direction does hold: if $f(k_1) < 0 < f(k_2)$ or $f(k_2) < 0 < f(k_1)$, then the Intermediate Value Theorem guarantees at least one root in the interval. However, you cannot assume that if the signs are the same there is no root in that interval.

We can often use the IVT to locate zeros of higher-degree polynomials: make a table of values, then narrow down the intervals in which the roots lie according to where the sign changes. However, this does not help you find roots of even multiplicity (such as the double root in the quadratic example above), since the polynomial does not change sign there.
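The table-of-values idea can be sketched as follows. The cubic `p`, the scan range, and the step count are illustrative choices, not part of the original question; the grid is deliberately offset so no sample lands exactly on a root.

```python
def p(x):
    # Example cubic with simple roots 1, 2, 3: (x - 1)(x - 2)(x - 3)
    return x**3 - 6*x**2 + 11*x - 6

def sign_change_intervals(f, lo, hi, steps):
    """Tabulate f on an evenly spaced grid and return the
    subintervals (a, b) on which f changes sign (IVT: each
    such interval contains at least one root)."""
    h = (hi - lo) / steps
    intervals = []
    for i in range(steps):
        a, b = lo + i * h, lo + (i + 1) * h
        if f(a) * f(b) < 0:
            intervals.append((a, b))
    return intervals

def bisect(f, a, b, tol=1e-9):
    """Narrow a sign-change interval down to a root by bisection."""
    while b - a > tol:
        m = (a + b) / 2
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return (a + b) / 2

# Scan [0, 4] in 5 steps (grid 0, 0.8, 1.6, ...), then refine each
# bracketing interval; the three simple roots are recovered.
brackets = sign_change_intervals(p, 0, 4, 5)
print(brackets)
print([round(bisect(p, a, b), 6) for a, b in brackets])
```

Note that a root of even multiplicity, like $x = 2$ for $(x-2)^2$, never produces a sign change, so no grid resolution will make this scan find it; that is exactly the limitation described above.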