Necessary and sufficient conditions for the existence of a bitangent


For a 2D curve defined over a limited range, is the existence of a region over which the sign of the curvature is opposite to that on the rest of the range a necessary and sufficient condition for the existence of a bitangent (a line tangent to the curve at two distinct points)? If so, could someone direct me to a method of proof (or an existing proof) that someone who is well-versed in math but deficient in elementary analysis could understand?

For context, I have a computation that is implicitly defined and so is solved iteratively; it resembles a quartic curve and so can possess a bitangent. For convenience, call the computation f(x); it also depends on a parameter (call it s), but for a given dataset the parameter is fixed. For some values of s, f(x) has a bitangent, while for others it does not. I need to find the threshold value of s at which the bitangent disappears. My plan was to write code that scans f(x) over all available x-values, uses the scan to estimate the second derivative, and then repeats this over a range of s values, looking for the threshold at which the sign changes stop occurring. I could then, if desired, make the scan increment in x or s finer to refine the answer.
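The scan-and-bisect plan above can be sketched as follows. Everything here is a stand-in: the real f(x) is implicitly defined, so the toy quartic `x**4 - s*x**2` (whose concave region, and hence bitangent, exists exactly when s > 0), the grid, and the step sizes are illustrative assumptions only.

```python
import numpy as np

def curvature_sign_changes(f, s, x, h=1e-4):
    """Count sign changes of an estimated second derivative of f(., s)
    over the grid x, via a central finite difference."""
    d2 = (f(x + h, s) - 2.0 * f(x, s) + f(x - h, s)) / h**2
    signs = np.sign(d2)
    signs = signs[signs != 0]              # drop exact zeros
    return int(np.count_nonzero(np.diff(signs)))

def threshold_s(f, x, s_no, s_yes, tol=1e-8):
    """Bisect on s between a value with no curvature sign change (s_no)
    and one with a sign change (s_yes) to locate the threshold."""
    while abs(s_yes - s_no) > tol:
        s_mid = 0.5 * (s_no + s_yes)
        if curvature_sign_changes(f, s_mid, x) > 0:
            s_yes = s_mid
        else:
            s_no = s_mid
    return 0.5 * (s_no + s_yes)

# Toy stand-in for the real computation: x**4 - s*x**2 has a concave
# region (and hence a bitangent) exactly when s > 0.
f = lambda x, s: x**4 - s * x**2
x = np.linspace(-2.0, 2.0, 4001)
s_star = threshold_s(f, x, s_no=-1.0, s_yes=1.0)
print(s_star)                              # ≈ 0 for this toy f
```

Bisection on s converges much faster than a uniform scan, but it assumes the sign change appears or disappears monotonically in s; if that is not true for the real computation, a coarse scan over s followed by local refinement is the safer variant.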


There are 2 best solutions below

Answer 1 (10 votes):

No. A bitangent to a curve intersects it at $4$ points counting multiplicity, so Bézout's Theorem implies that an algebraic curve admitting a bitangent has degree $\geq 4$. But the curvature of a cubic plane curve can have different signs at different points (consider, e.g., any cubic $y = p(x)$, where $\deg p = 3$).
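To make the cubic counterexample concrete, take $y = x^3$, whose curvature changes sign at the inflection point $x = 0$. The tangent line at $x = a$ is
$$y = 3a^2 x - 2a^3.$$
Two parameters $a \neq b$ would give the same line only if $3a^2 = 3b^2$ and $2a^3 = 2b^3$; over the reals the second equation forces $a = b$, so no bitangent exists despite the curvature sign change. The condition is therefore not sufficient.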

Answer 2 (9 votes):

This is a subtle point. What's going on is that the dual curve (the curve of tangent lines in the space of lines in the plane) has a double point precisely when the original curve has a bitangent.

The space of lines in the plane is a two-dimensional space (basically the projective plane missing a point). Consider the tangent line to $y=f(x)$ at $x=a$: It's given by the equation $y=f(a)+f'(a)(x-a) = f'(a)x + (f(a)-af'(a))$. So let your computer plot the points $$\big(f'(a),f(a)-af'(a)\big)$$ as $a$ varies. Look for where double points in this curve appear and disappear.
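A numerical sketch of this recipe (the quartic $f(x) = x^4 - x^2$ is my own stand-in, with a known bitangent $y = -1/4$ touching at $x = \pm 1/\sqrt{2}$; the grid and the separation threshold are ad hoc choices):

```python
import numpy as np

# Stand-in quartic with a known bitangent y = -1/4 at x = ±1/sqrt(2)
f  = lambda x: x**4 - x**2
fp = lambda x: 4*x**3 - 2*x          # derivative, supplied analytically here

a = np.linspace(-1.5, 1.5, 601)
m = fp(a)                            # slope of the tangent line at a
c = f(a) - a * fp(a)                 # intercept of the tangent line at a

# A double point of the dual curve (m(a), c(a)): two well-separated
# parameters whose tangent lines nearly coincide.
d2 = (m[:, None] - m[None, :])**2 + (c[:, None] - c[None, :])**2
d2[np.abs(a[:, None] - a[None, :]) < 0.5] = np.inf   # ignore nearby parameters
i, j = np.unravel_index(np.argmin(d2), d2.shape)

print(a[i], a[j])    # the two tangency parameters, ≈ ±0.707
print(m[i], c[i])    # the shared tangent line: slope ≈ 0, intercept ≈ -1/4
```

For the threshold-hunting problem, one would repeat this over s and watch the minimum of `d2` (over well-separated parameter pairs): it drops to roughly zero while the bitangent exists and stays bounded away from zero once it disappears.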