I have a vector quadratic equation of the form
$\boldsymbol{x}^{T} \boldsymbol{A} \boldsymbol{x} + \boldsymbol{x}^{T} \boldsymbol{b} + c = 0$
where $\boldsymbol{A}$ is symmetric and for my particular case, $\boldsymbol{x} \in \mathbb{R}^{2}$. I know that the solution for this system (if it exists) can be found using numerical methods like Newton's method. But I am interested in evaluating only if a real solution exists or not; I don't need to actually find the roots. Is there an analytical way of determining the existence of roots?
Existence of real roots for vector quadratic equations
Asked by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail). There are 2 answers below.
Let's do this for $x \in \mathbb R^n$, where $A$ is a real symmetric $n \times n$ matrix.
If $A$ is positive definite, the function $f(x) = x^T A x + x^T b + c$ is bounded below, with minimum value $x^T b/2 + c$ occurring at the solution of $A x = -b/2$, but unbounded above. Thus a solution of $f(x) = 0$ exists iff the minimum value $\le 0$.
Similarly, if $A$ is negative definite, $f(x)$ is bounded above, with maximum value $x^T b/2 + c$ occurring at the solution of $A x = -b/2$, and a solution exists iff the maximum value $\ge 0$.
If $A$ has both positive and negative eigenvalues, $f(x)$ is unbounded both above and below, and a solution always exists.
If $A$ is positive semidefinite but not positive definite, $f(x)$ is bounded below iff $b$ is in the column space of $A$, which is the orthogonal complement of the null space of $A$, and if so, again the minimum value is $x^T b/2 + c$ occurring at a solution of $A x = -b/2$ (by this orthogonality, it doesn't matter which solution we take).
Similarly if $A$ is negative semidefinite but not negative definite.
Of course, if $A = 0$ (which is both positive semidefinite and negative semidefinite), the function is constant if $b = 0$; otherwise it is linear and unbounded both above and below.
The classification of definiteness of $A$ does not require computing the eigenvalues, but can be done by looking at principal minors: see Wikipedia.
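The case analysis above can be sketched in code. Here is a minimal numpy version for general $n$; the function name, the tolerance handling, and the use of a least-squares solve to test whether $b$ lies in the column space of $A$ are my own choices, not part of the answer itself:

```python
import numpy as np

def real_solution_exists(A, b, c, tol=1e-12):
    """Decide whether x^T A x + x^T b + c = 0 has a real solution,
    for real symmetric A, by the boundedness case analysis above."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    w = np.linalg.eigvalsh(A)                  # real eigenvalues (A symmetric)
    has_pos, has_neg = (w > tol).any(), (w < -tol).any()
    if has_pos and has_neg:                    # indefinite: unbounded both ways
        return True
    if not has_pos and not has_neg:            # A == 0: equation is linear
        return np.linalg.norm(b) > tol or abs(c) <= tol
    if has_neg:                                # negative (semi)definite: negate
        return real_solution_exists(-A, -b, -c, tol)
    # A positive (semi)definite and nonzero: find a minimizer of f if one exists.
    x_min = np.linalg.lstsq(A, -b / 2, rcond=None)[0]
    if np.linalg.norm(A @ x_min + b / 2) > tol * (1 + np.linalg.norm(b)):
        return True                            # b not in col(A): unbounded below
    f_min = x_min @ b / 2 + c                  # minimum value x_min^T b / 2 + c
    return f_min <= tol
```

For example, the unit circle $x^T x - 1 = 0$ yields `True`, while $x^T x + 1 = 0$ yields `False`.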
Since $A$ is real symmetric, it has real eigenvalues $\lambda_1$ and $\lambda_2$, which can be found analytically (the characteristic polynomial is a quadratic, so the quadratic formula applies). Therefore a real change of variables (a change of basis to eigenvector components) can be made to put the equation into the form of a standard conic section:
$$ \lambda_1 u^2 + \lambda_2 v^2 + d_1 u + d_2 v + e_0 = 0 $$
Completing the square in $u$ and $v$ then tells us whether the equation demands that a positively weighted sum of squares be negative (an imaginary ellipse), which has no solutions in the real plane. Otherwise some real solution (a point on the conic section) will exist.
Since the OP is motivated by saving computational effort, let's suggest how best to exploit the analytic identification of solutions.
My first step would be computing $\det A = \lambda_1 \lambda_2$, at a cost in the $2\times 2$ case discussed here of two multiplications and one subtraction.
If $\det A \lt 0$, then the curve is a hyperbola and (even if degenerate) will contain real solutions (points in the plane). Consider, for illustration, this curve, which is nonempty no matter what value $c\in \mathbb{R}$ is given:
$$ u^2 - v^2 = c $$
If $\det A \gt 0$, then the matrix $A$ is either positive-definite or negative-definite (again making use of the restriction to the $2\times 2$ case, so that $\lambda_1,\lambda_2$ are both positive or both negative). We can distinguish which of these is the case by checking the sign of the entry $a_{1,1}$ of $A$. If $A$ were negative-definite, then we could easily reduce to the positive-definite case by multiplying the equation by $-1$ and treating $-A$ instead.
Now if $A$ is positive-definite, we may need to do some computation to see whether a real solution exists (because potentially none does). First, though, we would check the sign of $c$. If $c = 0$, then $x = 0$ is itself a solution. If $c \lt 0$, then the left-hand side is negative at $x = 0$, while as $||x||$ grows arbitrarily large, $x^T A x$ becomes arbitrarily large and positive and drives the left-hand side positive (being quadratic, it grows faster than the linear term $x^T b$). By continuity (the intermediate value theorem), the left-hand side must then be exactly zero at some real $x$.
That leaves (for positive-definite $A$) the cases where $c \gt 0$, for which we need to do a smidge of computation to be sure a real solution exists. In an ideal implementation that work would be useful toward actually finding a real solution if one is found to exist, but for the sake of the present discussion I will suggest locating the minimum value of the left hand side:
$$ f(x) \equiv x^T A x + x^T b + c $$
It is well-known that the minimum value of this quadratic function occurs at the solution of the linear system:
$$ Ax_{\min} = -\frac{1}{2}b $$
Since we already have computed $\det A$, the formula for the inverse of $2\times 2$ matrix $A$ is simple enough to use to compute $x_{\min}$:
$$ x_{\min} = -\frac{1}{2} A^{-1} b $$
Now if $c \gt 0$ and $f(x_{\min}) \le 0$, then a real solution of $f(x) = 0$ must exist by the same IVT argument as before. Moreover the converse holds: if $f(x_{\min}) \gt 0$, then the minimum of $f$ is positive and there is no solution of $f(x) = 0$.
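As a sketch, this positive-definite branch (with the explicit $2\times 2$ inverse) might look like the following; the function name is mine, and the caller is assumed to have already verified that $A$ is positive-definite:

```python
import numpy as np

def posdef_has_real_solution(A, b, c):
    """Existence test for f(x) = x^T A x + x^T b + c = 0 when the 2x2
    matrix A is positive-definite, per the steps above."""
    if c <= 0:
        return True                     # IVT: f(0) = c <= 0 and f -> +infinity
    (a11, a12), (a21, a22) = A
    b = np.asarray(b, dtype=float)
    det = a11 * a22 - a12 * a21         # already computed earlier, > 0 here
    # explicit 2x2 inverse: A^{-1} = (1/det) [[a22, -a12], [-a21, a11]]
    Ainv = np.array([[a22, -a12], [-a21, a11]]) / det
    x_min = -0.5 * (Ainv @ b)           # minimizer of f
    return x_min @ b / 2 + c <= 0       # f(x_min) = x_min^T b / 2 + c
```

For instance, $x^2 + y^2 - 4x + 1 = 0$ has minimum $-3$ at $(2, 0)$, so a real solution exists.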
That finishes the cases where $A$ is invertible, and we turn to the possibility that $\det A = 0$. The "trivial" case $A=0$ is easy (real solutions exist unless $b=0$ and $c\neq 0$), so we will focus on the case rank$(A)=1$.
Then one of the eigenvalues is zero and the other is not. In fact trace$(A) = \lambda_1 \neq 0$ and $\lambda_2=0$, so the conic section takes the form of a parabola:
$$ \lambda_1 u^2 + d_1 u + d_2 v + e_0 = 0 $$
There will be real solutions unless $d_2 = 0$ and the discriminant of the remaining quadratic equation is negative:
$$ d_1^2 - 4 \lambda_1 e_0 < 0 $$
We can address this possibility without explicitly making the change of matrix representation implicit in the conic section formulation.
This involves a new numerical difficulty, beyond the one already present in deciding whether $\det A$ is exactly zero or merely very small. A rank-one symmetric $A$ can always be written in the form:
$$ A = \alpha yy^T \text{ where } ||y|| = 1 $$
and we must now check whether $b$ is also a scalar multiple of this same $y$:
$$ b = \beta y $$
If not, there will be a real solution (the conic section is non-degenerately parabolic), and nothing further needs to be checked.
If so, then notice the original equation becomes:
$$ \alpha x^T yy^T x + \beta x^T y + c = 0 $$
Letting $r = x^T y$, this becomes the ordinary quadratic equation:
$$ \alpha r^2 + \beta r + c = 0 $$
and the existence of a real solution is equivalent to:
$$ \beta^2 - 4\alpha c \ge 0 $$
This check can be simplified: when $b \neq 0$, the requirement $b = \beta y$ already determines (up to sign) which unit vector $y$ must be. So in practice I would implement this as two cases:
(a) If $b = 0$, check whether trace$(A)$ is opposite in sign to $c$ (or whether $c = 0$); if so, a real solution exists, and otherwise not.
(b) If $b \neq 0$, just check whether $A$ is a scalar multiple $\alpha bb^T$ of $bb^T$, and proceed using $r = x^T b$ as the variable in the quadratic $\alpha r^2 + r + c = 0$; the linear coefficient is now exactly $1$, which gives the slightly simpler discriminant test for a real solution:
$$ 1 - 4 \alpha c \ge 0 $$
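Case (b) might be sketched as follows (the function name and tolerance are mine, and the caller is assumed to have already established $b \neq 0$, $\det A = 0$, and $A \neq 0$):

```python
def rank_one_degenerate_test(A, b, c, tol=1e-12):
    """Case (b) above: b != 0 and A has rank one.  Check whether
    A = alpha * b b^T for some scalar alpha; if so, apply the
    simplified discriminant test 1 - 4*alpha*c >= 0."""
    b1, b2 = b
    # fit the candidate scalar alpha from the larger component of b
    alpha = A[0][0] / (b1 * b1) if abs(b1) >= abs(b2) else A[1][1] / (b2 * b2)
    bbT = [[alpha * b1 * b1, alpha * b1 * b2],
           [alpha * b2 * b1, alpha * b2 * b2]]
    if any(abs(A[i][j] - bbT[i][j]) > tol for i in range(2) for j in range(2)):
        return True          # A is not alpha*b b^T: non-degenerate parabola
    return 1 - 4 * alpha * c >= 0
```

For example, with $A = \operatorname{diag}(1,0)$ and $b = (1,0)$, the equation reduces to $r^2 + r + c = 0$, which has a real solution exactly when $c \le 1/4$.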
This covers all the possibilities for a $2\times 2$ matrix $A$. Much of this can be generalized to larger symmetric matrices, although of course such steps as computing $\det A$ will be more difficult because of the larger size.
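To tie the cases together, here is one possible end-to-end sketch of the whole $2\times 2$ decision procedure. The names and tolerance handling are my own, and a library solve stands in for the hand-written inverse:

```python
import numpy as np

def conic_has_real_point(A, b, c, tol=1e-12):
    """Full 2x2 decision procedure: does x^T A x + x^T b + c = 0
    have a real solution, for symmetric 2x2 A?"""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    det = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]
    if det < -tol:                        # hyperbola (even degenerate): always
        return True
    if det > tol:                         # definite; reduce to positive-definite
        if A[0, 0] < 0:
            A, b, c = -A, -b, -c
        if c <= 0:
            return True                   # IVT argument
        x_min = -0.5 * np.linalg.solve(A, b)
        return x_min @ b / 2 + c <= 0     # f(x_min) <= 0
    # det ~ 0
    if np.abs(A).max() <= tol:            # A == 0: linear equation
        return np.linalg.norm(b) > tol or abs(c) <= tol
    # rank(A) == 1: A = alpha * y y^T with ||y|| = 1 and alpha = trace(A);
    # a nonzero column of A is proportional to y.
    alpha = A[0, 0] + A[1, 1]
    col = A[:, 0] if abs(A[0, 0]) > abs(A[1, 1]) else A[:, 1]
    y = col / np.linalg.norm(col)
    beta = y @ b
    if np.linalg.norm(b - beta * y) > tol * (1 + np.linalg.norm(b)):
        return True                       # non-degenerate parabola
    return beta**2 - 4 * alpha * c >= 0   # ordinary quadratic discriminant
```

This follows the order suggested above: the cheap $\det A$ test first, then the sign of $c$, and the linear solve only in the one branch that truly needs it.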