Does there exist a second degree curve with exactly two solutions?


I was reading a book called "Geometry of conics" where they said:

A curve of second degree is called degenerate iff it is a product of two linear factors or if it represents a single point (e.g. $x^2+3y^2=0$).

Here a curve of second degree means a set of points whose coordinates satisfy an equation of the form $ax^2+by^2+cxy+dx+ey+f=0$ in some (and hence, every) Cartesian coordinate system.

So I started to wonder: is there such a second-degree curve which contains exactly two points?

The importance is that any non-degenerate second-degree curve can be written in the standard form of an ellipse, parabola, or hyperbola.

There are certainly non-trivial fourth-degree curves which contain exactly four points, for example $(x-1)^2x^2 + (y-1)^2y^2=0$.

I have been thinking for a while, but I cannot think of either an example or a way to prove it is impossible.

So my question is basically: does there exist an equation of the form $ax^2+by^2+cxy+dx+ey+f=0$ which has exactly two solutions $(x,y) \in \mathbb{R}^2$?

There are 6 answers below.

BEST ANSWER

We can do a change of coordinates to simplify the equation. This follows the development in Friedberg, Insel, and Spence's Linear Algebra, 4th Edition, Section 6.5.

Consider the quadratic equation $$ax^2 + 2bxy + cy^2 + dx + ey + f = 0\tag{1}$$ with at least one of $a$, $b$, and $c$ nonzero. The associated quadratic form of $(1)$ is $$ax^2 +2bxy+cy^2.\tag{2}$$ Let $$A = \left(\begin{array}{cc} a&b\\b&c\end{array}\right),\qquad \text{and}\qquad X = \left(\begin{array}{c}x\\y\end{array}\right).$$ Then $(2)$ can be rewritten as $X^tAX$.

Because $A$ is symmetric, it is orthogonally diagonalizable, so there exists an orthogonal matrix $P$ and a diagonal matrix $D$ with real diagonal entries $\lambda_1$ and $\lambda_2$ such that $P^tAP=D$. Let $$X' = \left(\begin{array}{c}x'\\y'\end{array}\right) = P^tX.$$ Then $X=PX'$ (since $P$ is orthogonal, $P^t=P^{-1}$).

The transformation $(x,y)\mapsto (x',y')$ eliminates the $xy$ term, and gives $$X^tAX = (PX')^tA(PX') = (X')^t(P^tAP)X' = (X')^tDX' = \lambda_1(x')^2 + \lambda_2(y')^2.$$ Performing this transformation, which amounts to a rotation about the origin, we end up with a quadratic equation of the form $$\lambda_1 (x')^2 + \lambda_2(y')^2 + Dx' + Cy' + F = 0\tag{3}$$ and because $A$ is not the zero matrix, at least one of $\lambda_1$ and $\lambda_2$ is nonzero.

If neither one is zero, then we can perform a translation and get an equation of the form $$rx^2 + sy^2 = t, \qquad r,s\neq 0.$$ Such an equation has either no solutions (if $t\lt 0$), exactly one solution (if $t=0$), or infinitely many solutions (if $t\gt 0$).

If exactly one of $\lambda_1$ and $\lambda_2$ is zero, then exchanging the roles of $x$ and $y$ if necessary and performing a translation we may reduce to the form $$rx^2 + sy = t,\qquad r\neq 0.$$ If $s\neq 0$, this has infinitely many solutions. If $s=0$ and $t\geq 0$, this has infinitely many solutions. If $s=0$ and $t\lt 0$, then this has no solutions.
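The diagonalization step can be sanity-checked numerically with a small numpy sketch (the example form below is my own, not from the book):

```python
import numpy as np

# Quadratic form a x^2 + 2b xy + c y^2 for the (assumed) example a=1, b=2, c=1.
a, b, c = 1.0, 2.0, 1.0
A = np.array([[a, b], [b, c]])

# Symmetric matrices are orthogonally diagonalizable; eigh returns an
# orthogonal P with P^t A P = D diagonal.
lams, P = np.linalg.eigh(A)
D = P.T @ A @ P

print(lams)          # lambda_1, lambda_2 of the rotated form
print(abs(D[0, 1]))  # the off-diagonal (cross) term after rotation: ~0
```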

ANSWER

No, no such curve exists: no second-degree polynomial vanishes at exactly two points.

The only proof I know runs via algebraic geometry.

Call your conic $$P(x,y)=ax^2+bxy+cy^2+dx+ey+f\text{;}$$ let $V$ be any two-point set; and consider the space $\mathscr{O}(V)$ of polynomial functions on $V$. The space $\mathscr{O}(V)$ is a $2$-dimensional vector space: if $V=\{\vec{u},\vec{v}\}$, $f_1=\{\vec{u}\mapsto1,\vec{v}\mapsto0\}$, and $f_2=\{\vec{u}\mapsto0,\vec{v}\mapsto1\}$ then any $g\in\mathscr{O}(V)$ has a unique representation $g=\alpha_1f_1+\alpha_2f_2$.

On the other hand, there is a linear map $\phi$ from $\mathbb{R}[x,y]$ (the space of $2$-variable polynomials with real coefficients) to $\mathscr{O}(V)$. Namely, $\phi$ is just evaluating at $V$.

The kernel of $\phi$ is $\ker{\phi}=\{f:\phi(f)=0\}$, the vector space of functions that vanish on $V$. That space is not quite what you're looking for, because you want functions that only vanish on $V$. Nevertheless, we might hope that $$\ker{\phi}=\{p(x,y)P(x,y):p(x,y)\in\mathbb{R}[x,y]\}$$ Unfortunately, this turns out not to be the case.

Note that $1$, $x$, $x^2$, $y$, and $y^2$ are all linearly independent in $\mathbb{R}[x,y]$. Since $\mathscr{O}(V)$ is $2$-dimensional, any three elements of $\{\phi(1),\phi(x),\phi(x)^2,\phi(y),\phi(y)^2\}$ must satisfy a dependence relation. In particular, suppose \begin{gather} 0=\alpha_1\phi(1)+\alpha_2\phi(x)+\alpha_3\phi(x)^2 \tag{1} \\ 0=\beta_1\phi(1)+\beta_2\phi(y)+\beta_3\phi(y)^2 \tag{2} \\ 0=\gamma_1\phi(1)+\gamma_2\phi(x)+\gamma_3\phi(y) \tag{3} \end{gather} where not all of $\{\alpha_j\}_j$ are $0$, and likewise for $\beta$ and $\gamma$.

It turns out that every dependence relation (with $\phi(1)$ replaced by $1$ and $\phi(y)$ replaced by $y$) must have a nontrivial GCD with $P(x,y)$. For example, suppose that $\gamma_2=0$ but $\gamma_3\neq0$. Then (3) determines $\phi(y)$; in particular all points of $V$ must have $y$-coordinate $-\frac{\gamma_1}{\gamma_3}$. By polynomial long division, $$P(x,y)=(\gamma_1+\gamma_3y)q(x,y)$$ for some $q(x,y)\in\mathbb{R}[x,y]$; and $\gamma_1+\gamma_3y$ is the nontrivial GCD.
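The division step can be checked with a short sympy sketch; the conic and the set $V$ below are an assumed example, chosen so that both points share a $y$-coordinate:

```python
import sympy as sp

x, y = sp.symbols('x y')
# A second-degree P vanishing on V = {(0, 2), (1, 2)}: both points have y = 2,
# i.e. gamma_1 = -2, gamma_3 = 1 in the notation above.
P = sp.expand((y - 2) * (x + 3))
q, r = sp.div(P, y - 2, y)   # polynomial long division by the linear factor
print(q, r)                  # quotient q(x, y) and remainder 0
```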

But $P(x,y)$ is second-order. First, suppose (1) is genuinely second-order: $\alpha_3\neq0$, and the resulting quadratic does not factor. Then $$P(x,y)=c\cdot(\alpha_1\phi(1)+\alpha_2\phi(x)+\alpha_3\phi(x)^2)$$ for some constant $c$; since (2) does not divide (1), (2) must in fact divide $c$. Thus $\beta_2=\beta_3=0$; since not all $\{\beta_j\}_j$ are $0$, we can conclude that $\phi(1)=0$. But then $\mathscr{O}(V)$ would have dimension $0$, which is impossible.

Likewise, (2) cannot be genuinely second-order.

In other words, (1-2) must split into linear factors. But any two linear factors are either equal or coprime, and so we have too many linear factors; if neither (1) nor (2) has a double root, then $P(x,y)$ must (impossibly) have at least four linear factors.

If only one of (1-2) has a double root, then $P(x,y)$ has at least three linear factors, which is still impossible.

Thus (1-2) are of the form \begin{gather*} 0=(\tilde{\alpha}_1\phi(1)+\tilde{\alpha}_2\phi(x))^2 \\ 0=(\tilde{\beta}_1\phi(1)+\tilde{\beta}_2\phi(y))^2 \\ \end{gather*} These equations determine the values of $x$ and $y$ on $V$; but there is only one point with a specific pair of $x$- and $y$-coordinates.

ANSWER

I think I found a simple way to prove this; it seems intuitive, but I am not sure whether it is rigorous enough.

It is quite clear that we can reduce the general case to an equation of the form:

$x^2+ay^2=b$.

The way to do this is:

Step $1$: Rotate the coordinate system by $\theta$. In the new coordinate system, say $(X,Y)$, we have $(X,Y)=(x \cos\theta + y \sin\theta, -x \sin\theta + y \cos\theta)$ (or something very similar). Then by solving $(a-b)\sin2\theta + c\cos2\theta = 0$ we can remove the $xy$ term.

Step $2$: Shift the origin to get rid of the $x$ and $y$ terms

Step $3$: Scale the whole equation to make the coefficient of $x^2=1$.
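Step 1 can be spot-checked symbolically; here is a minimal sympy sketch with assumed coefficients $a=1$, $b=3$, $c=2$ (where $a, b$ are the $x^2, y^2$ coefficients and $c$ the $xy$ coefficient), using one of the possible rotation sign conventions:

```python
import sympy as sp

x, y, X, Y = sp.symbols('x y X Y')
a, b, c = 1, 3, 2                    # assumed example: x^2 + 3y^2 + 2xy
theta = sp.atan2(-c, a - b) / 2      # solves (a-b) sin 2θ + c cos 2θ = 0

# Rotate coordinates and expand the quadratic part:
rotated = sp.expand((a*x**2 + b*y**2 + c*x*y).subs(
    {x: X*sp.cos(theta) + Y*sp.sin(theta),
     y: -X*sp.sin(theta) + Y*sp.cos(theta)}))
cross = rotated.coeff(X*Y)           # coefficient of the XY cross term
print(float(cross))                  # should be (numerically) zero
```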

So we want to prove that the equation $x^2+ay^2=b$ cannot have exactly $2$ solutions.

Now it seems quite easy to prove this. In fact, in any non-trivial case (i.e., excluding $b$ negative or zero) this equation has infinitely many solutions.

Indeed, if $y_1$ and $y_2$ are two positive real numbers occurring as $y$-coordinates of solutions (i.e., they each give a valid solution $(x,y)$), then any number $y_0 \in (y_1,y_2)$ also gives a solution.

And then you do some annoying trivial cases, like what if $y_1=y_2$: well, then just take any positive $y_0 < y_1$ for a solution, as $b-ay_0^2> b-ay_1^2=x^2\geq 0$, and so on...

EDIT: As has been pointed out below by Arturo (thanks!), we cannot always perform steps 1 and 2. For example, if $a=b$ we cannot perform step 1, and if $b=0$ we cannot perform step 2. But it seems to me that there are only $3$ such cases (right now I can think only of $a=b$, or either of them equal to $0$), as otherwise it seems we can always perform these steps. I wonder if we can solve these cases independently, as that would then give a complete elementary solution. I will update this answer again if I manage to do so.

ANSWER

Zero-radius degenerate circles, when separated:

$$(x-1)^2+y^2 =0,\;(x+1)^2+y^2 =0$$

become a single fourth-degree polynomial:

$$((x-1)^2+y^2) \cdot ((x+1)^2+y^2 ) =0$$
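A quick sympy check of the degree and of the two zeros of this product:

```python
import sympy as sp

x, y = sp.symbols('x y')
quartic = sp.expand(((x - 1)**2 + y**2) * ((x + 1)**2 + y**2))

print(sp.total_degree(quartic))   # the product has total degree 4
# the two point-circle centers (1, 0) and (-1, 0) are zeros:
print(quartic.subs({x: 1, y: 0}), quartic.subs({x: -1, y: 0}))
```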

So a second-degree curve and exactly two solutions cannot go together.

In the Cassinian ovals with $e=b=0$: when the product of distances is zero, the disjoint point-circles are of the fourth degree.

ANSWER

Here's a proof of impossibility, using the fact that given a conic with one point, the remaining points can be parametrized using lines through the first point.

We may assume that $(0,0)$ is on our conic (in other words, it has equation $ax^2+by^2+cxy+dx+ey=0$). For every real number $m$, we consider the line $y=mx$ through $(0,0)$; this intersects the conic where $$ ax^2+bm^2x^2+cmx^2+dx+emx=x\bigl( (a + c m + b m^2) x + d+em \bigr) = 0. $$ We thus detect a second point on this line in all cases except when $a+cm+bm^2=0$ or $d+em=0$. But given $a,b,c,d,e$, each of these two expressions vanishes either for all $m$ or for only finitely many $m$; these two cases correspond to $(0,0)$ being the only point on the conic or there being infinitely many points on the conic.

(As stated this argument ignores the vertical line $x=0$, but it's easy to check that detail by hand, or to work projectively in the first place.)
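The line sweep is easy to run numerically; here is a minimal sketch on the assumed example conic $x^2 + y^2 - x = 0$ (so $a=b=1$, $c=e=0$, $d=-1$):

```python
# Sweep lines y = m*x through (0,0) and read off the second intersection
# from the factorization x*((a + c*m + b*m^2)*x + d + e*m) = 0.
a, b, c, d, e = 1.0, 1.0, 0.0, -1.0, 0.0  # assumed example: x^2 + y^2 - x = 0

def second_point(m):
    """Second intersection of y = m*x with the conic (non-degenerate slopes only)."""
    x = -(d + e * m) / (a + c * m + b * m * m)
    return x, m * x

for m in [0.0, 0.5, 1.0, 2.0]:
    px, py = second_point(m)
    # each slope m yields a genuine point of the conic:
    assert abs(a*px*px + b*py*py + c*px*py + d*px + e*py) < 1e-12
```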

ANSWER

I am not entirely sure what elementary solution you are looking for, but here is one very "elementary" way to look at it:

ASSUME: $P(x,y)=0$ has exactly 2 solutions, say $(u,w)$ and $(v,w)$ [two $x$-values with one $y$-value; a similar argument will work in the other case too].

When we plug in $x=u$, we should get a linear equation whose solution is $y=w$.
This means that the coefficient of $y^2$ must be $0$.

When we plug in $x=v$, we should get the same linear equation, again solving to the same $y=w$.
This means that the coefficient of $xy$ must be $0$.

Then we are left with $P(x,y)=Ax^2+Bx+Cy+D$, where the 2 terms $y^2$ and $xy$ are missing.
When we plug in the value $y=w$, we get a quadratic equation having the 2 solutions $x=u$ and $x=v$.

This is the only way to make $P(x,y)$ have at least 2 solutions.

Now, when we plug in some third value of $x$, we will still get a linear equation with a new constant term, which means we will get a new value of $y$, giving a third solution of $P(x,y)=0$.

Similarly, we can take more values of $x$ to get new linear equations in $y$ with new constant terms, yielding new values of $y$ that solve $P(x,y)=0$.

We have proved that no such $P(x,y)$ can have exactly 2 solutions $(u,w)$ and $(v,w)$.

The case left out is 2 solutions like $(U,V)$ and $(W,Z)$ with distinct coordinates: similar analysis shows that a $P(x,y)=0$ satisfying these two solutions will satisfy even more solutions.
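The argument can be spot-checked with a small sympy sketch; the coefficients below are an assumed example with the $y^2$ and $xy$ terms already absent:

```python
import sympy as sp

x, y = sp.symbols('x y')
A, B, C, D = 1, -3, 1, 2     # assumed example: P vanishes at (1, 0) and (2, 0)
P = A*x**2 + B*x + C*y + D

# the two promised solutions (u, w) = (1, 0) and (v, w) = (2, 0):
print(P.subs({x: 1, y: 0}), P.subs({x: 2, y: 0}))
# a third x-value forces a third solution, with a new y:
y3 = sp.solve(P.subs(x, 3), y)[0]
print(y3)
```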