How can I determine whether two ellipses (given using their symmetric matrices, their quadratic forms, or some similar representation) have any inner points in common? Can I determine this fact without computing any radicals (square roots, cubic roots, quartic roots)?
I just wrote a StackOverflow answer about how to test whether two ellipses (axis-aligned in that case) intersect. The approach I took was formulating the two conics as symmetric matrices $M_1$ and $M_2$, then using $\det(M_1+\lambda M_2)=0$ as a condition describing a degenerate element of the pencil of conics, i.e. a pair of lines which shares all four points of intersection with the two given conics. I then argued that the touching situation corresponds to a point of intersection with algebraic multiplicity two, which in turn corresponds to the discriminant of the cubic polynomial in $\lambda$ being zero. So far so good.
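For concreteness, the discriminant test can be sketched as follows (the helper names are my own; SymPy is used so the arithmetic stays exact for rational inputs):

```python
import sympy as sp

def conic_matrix(A, B, C, D, E, F):
    """Symmetric matrix of the conic A x^2 + B x y + C y^2 + D x + E y + F = 0
    (integer or rational coefficients keep the computation exact)."""
    return sp.Matrix([[A, sp.Rational(B, 2), sp.Rational(D, 2)],
                      [sp.Rational(B, 2), C, sp.Rational(E, 2)],
                      [sp.Rational(D, 2), sp.Rational(E, 2), F]])

def pencil_discriminant(M1, M2):
    """Discriminant in lam of the cubic det(M1 + lam*M2); as argued above,
    it vanishes in a touching configuration."""
    lam = sp.symbols('lam')
    return sp.discriminant((M1 + lam * M2).det(), lam)

circle   = conic_matrix(1, 0, 1,  0, 0, -1)  # unit circle
tangent  = conic_matrix(1, 0, 1, -4, 0,  3)  # circle centred (2,0), touching
disjoint = conic_matrix(1, 0, 1, -6, 0,  8)  # circle centred (3,0), separate

print(pencil_discriminant(circle, tangent))   # 0 in the touching case
print(pencil_discriminant(circle, disjoint))  # nonzero when fully disjoint
```

Note that, as described below, the sign alone cannot tell disjoint ellipses from four transversal intersections or from containment.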
But this approach left me somewhat unsatisfied. Having the touching condition expressed as a zero of some polynomial, I expect the two sides of the touching situation to have different signs. So I might start from two fully disjoint ellipses, then move them closer together, and when they touch I get a zero and then a sign change of the discriminant. But when I move them further, then depending on their shapes and orientations I might get from two distinct real points of intersection to four real points of intersection, or I might get to one ellipse fully enveloping the other. In both cases the transition would be via another touching configuration, and thus entail another sign change.
So my approach of looking at the sign of the discriminant can't distinguish between two disjoint ellipses, two ellipses with four real and distinct points of intersection, and one ellipse fully contained in the inside of another.
Are there any predicates that I can formulate in terms of the original coefficients to distinguish these situations? Can I do this without solving a cubic equation (which I would have to do for computing the points of intersection)? Can I perhaps avoid all radicals, and just look at some more signs of some more polynomials in the original coefficients to make my decision?
Also, does having the ellipses described as center, radii and rotation make this any easier? Personally I prefer the matrix representation when dealing with conics, but since the transformation from there to center and radii entails some square roots, it is conceivable that starting with this form might allow avoiding any additional roots that can't be avoided if starting from the matrix or quadratic form. And for the original StackOverflow question, a solution in terms of center and radii might even have been preferable.
Here is an answer in terms of quadratic forms.
Say we have $F$, $G$ quadratic forms of signature $(n,1)$ (that is, $n$ positive squares and one negative square) on an $(n+1)$-dimensional space (in our case $n=2$).
Q: When do the closed convex cones $\{F\le 0\}$, $\{G\le 0\}$ *not* intersect except at the origin?
A: If and only if there exist $a$, $b>0$ such that
$$a F + b G \succ 0$$ (that is, the form $aF + b G$ is positive definite).
One implication is clear; the other uses the separation of disjoint closed convex sets by hyperplanes.
Note that we may assume $a\in (0,1)$ and $b= 1-a$. Now this condition can be checked readily in concrete cases: writing $M$ and $N$ for the symmetric matrices of $F$ and $G$, we look at the coefficients of the polynomial in $x$
$$\det (x I_3 + a M +(1-a) N)$$ and we want them all positive for some $a \in (0,1)$ (equivalently, $aM + (1-a)N$ is positive definite). If that is possible, the cones are separated. If that is not possible, the cones are not separated.
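A numeric sketch of this separation test (helper names are mine), using Sylvester's criterion for positive definiteness and a grid scan over $a$; a scan suffices because the positive definite matrices form a convex cone, so the admissible $a$ form an interval:

```python
import numpy as np

def conic_matrix(A, B, C, D, E, F):
    """Symmetric matrix of the solid region A x^2 + B x y + C y^2 + D x + E y + F <= 0."""
    return np.array([[A, B / 2, D / 2],
                     [B / 2, C, E / 2],
                     [D / 2, E / 2, F]])

def ellipses_disjoint(M, N, samples=1000):
    """The closed elliptic regions of M and N are disjoint iff some convex
    combination a*M + (1-a)*N is positive definite; test this on a grid."""
    for a in np.linspace(0, 1, samples + 1)[1:-1]:
        P = a * M + (1 - a) * N
        # Sylvester's criterion: all leading principal minors positive.
        if (P[0, 0] > 0
                and np.linalg.det(P[:2, :2]) > 0
                and np.linalg.det(P) > 0):
            return True
    return False
```

For example, the unit circle against the unit circle centred at $(3,0)$ gives `True`, while against the overlapping unit circle centred at $(1,0)$ it gives `False`. An exact, radical-free predicate would instead examine the roots of the coefficient polynomials in $a$.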
The case of open cones:
The sets $\{F<0\}$ and $\{G<0\}$ do not intersect if and only if there exist $a$, $b\ge 0$, $a+b=1$, with $a F + b G \succeq 0$ (positive semidefinite).
$\bf{Added:}$ Let's sketch a proof for $n=2$. Consider an ellipse $F=0$ in the plane (the equation is dehomogenized); the interior is $F\le 0$. In what case is the interior contained in a half-plane $l\le 0$? The condition $l\le 0$ has to be a consequence of $F \le 0$. Therefore (hand waving here...)
$$l = \alpha F - \Sigma S_1$$
where $\alpha > 0$, and by $\Sigma S_1$ we denote a sum of squares of affine forms.
Now consider the case where $F\le 0$ and $G\le 0$ are disjoint. Then, being convex, they are separated by a line, say $l=0$, with $F\le 0$ contained in $l\le 0$ and $G\le 0$ in $-l\le 0$. Therefore
$$l = \alpha F - \Sigma S_1 \\ -l = \beta G - \Sigma S_2 $$ and summing up we get $$\alpha F + \beta G =\Sigma S_1 + \Sigma S_2 $$ which, being a sum of squares, is positive semidefinite.
Note: We did not prove the part about an ellipse contained in a half-plane, so that could be just another exercise.
$\bf{Added:}$ The case $n=1$ is simpler and easier to check. This is recommended as an exercise.
Let $F(x) = x^2 + \cdots$ and $G(x) = x^2 + \cdots$ be quadratic polynomials such that the sets $\{F\le 0\}$ and $\{G\le 0\}$ do not intersect. Then there exist $a$, $b> 0$ such that the polynomial $a F + b G$ is $> 0$ on all of $\mathbb{R}$.
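This one-dimensional statement is easy to check numerically; here is a small sketch (helper name and examples are mine), where a quadratic $p x^2 + q x + r$ with $p>0$ is positive on all of $\mathbb{R}$ exactly when its discriminant $q^2 - 4pr$ is negative:

```python
def positive_combination_exists(F, G, samples=1000):
    """F, G are coefficient triples (p, q, r) of quadratics p x^2 + q x + r
    with p > 0.  Check on a grid whether some convex combination
    a*F + (1-a)*G is positive everywhere, i.e. has negative discriminant."""
    for k in range(1, samples):
        a = k / samples
        p = a * F[0] + (1 - a) * G[0]
        q = a * F[1] + (1 - a) * G[1]
        r = a * F[2] + (1 - a) * G[2]
        if p > 0 and q * q - 4 * p * r < 0:
            return True
    return False

# x^2 - x <= 0 on [0,1] and x^2 - 5x + 6 <= 0 on [2,3]: disjoint.
print(positive_combination_exists((1, -1, 0), (1, -5, 6)))   # True
# x^2 - 3x + 2 <= 0 on [1,2] shares the point x = 1: not disjoint.
print(positive_combination_exists((1, -1, 0), (1, -3, 2)))   # False
```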
$\bf{Added:}$ Searching for "Farkas lemma for quadratic polynomials" turns up many results, especially the so-called S-lemma, which is more or less what was written above (perhaps for arbitrary forms $F$, $G$). It would be tempting to generalize to other functions, or to several of them, but that does not seem to work this way.