How to prove this property of a projective transformation?


The excerpt below is from this book:

  • Sophus Lie, Vorlesungen über Differentialgleichungen mit bekannten Infinitesimalen Transformationen (Lectures on Differential Equations with Known Infinitesimal Transformations), revised and edited by Dr. Georg Wilhelm Scheffers, Leipzig (1891); available online at GDZ.
A projective transformation of the plane is represented by two equations of the form:
$$x_1=\frac{ax+by+c}{dx+ey+g}\,,\qquad y_1=\frac{hx+ky+l}{dx+ey+g}\qquad(4)$$
If you don't understand German, let the formulas speak and forget the rest.
The text says that the above transformation $\;(x,y) \to (x_1,y_1)\;$ is the most general one that maps straight lines to straight lines in the Euclidean plane. Despite trying to understand the content of this page for about a week, I have no clue what the purported proof is all about.
Can somebody please clarify things a bit? My knowledge about projective geometry is minimal.

There are 2 answers below.

Best answer:

What the author is essentially saying is this: substitute $y=\kappa x+m$, clear denominators in the resulting $x_1=\ldots$ and $y_1=\ldots$ equations, and rearrange them as equations linear in $x$: $$ \big((d+e\kappa)x_1-a-b\kappa\big) x + (em+g)x_1-bm-c = 0$$ and $$ \big((d+e\kappa)y_1-h-k\kappa\big) x + (em+g)y_1-km-l = 0. $$ This can be written in matrix form as $$ \begin{bmatrix} (d+e\kappa)x_1-a-b\kappa & (em+g)x_1-bm-c \\ (d+e\kappa)y_1-h-k\kappa & (em+g)y_1-km-l \end{bmatrix} \begin{bmatrix} x \\ 1 \end{bmatrix} = \begin{bmatrix}0\\0\end{bmatrix}. $$ Since the vector $(x,1)$ is nonzero regardless of $x$, the matrix must have zero determinant. That is what the last equation in the text says. The author then argues that the $x_1y_1$ terms in the determinant cancel, leaving an equation of the form $\ldots x_1+\ldots y_1+\ldots=0$, which is the equation of a line.
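For readers who want to see the cancellation without expanding by hand, here is a small symbolic sketch using SymPy; the variable names are mine, chosen to match the formulas above:

```python
import sympy as sp

# Symbolic check that the determinant of the matrix above has no x1*y1 term,
# so "determinant = 0" is linear in x1, y1, i.e. the equation of a line.
# (Symbol names are mine; kappa is the slope of the original line.)
a, b, c, d, e, g, h, k, l, kappa, m, x1, y1 = sp.symbols('a b c d e g h k l kappa m x1 y1')

M = sp.Matrix([
    [(d + e*kappa)*x1 - a - b*kappa, (e*m + g)*x1 - b*m - c],
    [(d + e*kappa)*y1 - h - k*kappa, (e*m + g)*y1 - k*m - l],
])
det = sp.expand(M.det())

# Coefficient of the x1*y1 monomial in the expanded determinant:
print(sp.Poly(det, x1, y1).coeff_monomial(x1*y1))  # prints 0
```

The determinant is thus of degree one in $x_1$ and $y_1$ jointly, exactly as the answer claims.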

Second answer:

He starts with a generic projective transformation. Using homogeneous coordinates and matrix notation, I'd rewrite his equation (4) like this:

$$\begin{pmatrix}x_1\\y_1\\1\end{pmatrix}\sim \begin{pmatrix}a&b&c\\h&k&l\\d&e&g\end{pmatrix} \begin{pmatrix}x\\y\\1\end{pmatrix}$$
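To make the homogeneous notation concrete, here is a minimal numeric sketch; the matrix entries are arbitrary sample values, not from the book:

```python
import numpy as np

# Minimal sketch of the homogeneous form above. The matrix rows are
# (a, b, c), (h, k, l), (d, e, g); the entries are arbitrary sample values.
A = np.array([[2.0, 1.0, 3.0],
              [1.0, 4.0, 2.0],
              [1.0, 1.0, 5.0]])

def apply_projective(A, x, y):
    # Multiply the homogeneous vector (x, y, 1), then divide by the last
    # coordinate; that division is what the "~" (equality up to scale) hides.
    X = A @ np.array([x, y, 1.0])
    return float(X[0] / X[2]), float(X[1] / X[2])

p = apply_projective(A, 0.0, 1.0)
q = apply_projective(3 * A, 0.0, 1.0)  # rescaling A gives the same point
print(p, q)
```

The last line illustrates why the relation is written with "$\sim$" rather than "$=$": the matrix is only determined up to a nonzero scalar factor.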

He then considers the line $y=\varkappa x+m$ and applies the projective transformation to its points, obtaining expressions for $x_1$ and $y_1$. Then comes the elimination of the variable $x$. For that, let's rewrite his equations by multiplying through by the common denominator:

\begin{align*} \bigl((d+e\varkappa)x+em+g\bigr)x_1&=(a+b\varkappa)x+bm+c \\ \bigl((d+e\varkappa)x+em+g\bigr)y_1&=(h+k\varkappa)x+km+l \end{align*}

Next, rearrange to obtain equations linear in $x$:

\begin{align*} \bigl((d+e\varkappa)x_1-(a+b\varkappa)\bigr)x&=(bm+c)-(em+g)x_1 \\ \bigl((d+e\varkappa)y_1-(h+k\varkappa)\bigr)x&=(km+l)-(em+g)y_1 \end{align*}

Or written still differently, in vector notation:

$$ x\begin{pmatrix} (d+e\varkappa)x_1-(a+b\varkappa) \\ (d+e\varkappa)y_1-(h+k\varkappa) \end{pmatrix} = \begin{pmatrix} (bm+c)-(em+g)x_1 \\ (km+l)-(em+g)y_1 \end{pmatrix} $$

So we are asking when one vector is a scalar multiple of the other. Such an $x$ can exist only if the two vectors are linearly dependent, i.e. only if the determinant of the $2\times 2$ matrix they form vanishes. So that is what we check.

\begin{align*} 0 &= \begin{vmatrix} (d+e\varkappa)x_1-(a+b\varkappa) & (bm+c)-(em+g)x_1 \\ (d+e\varkappa)y_1-(h+k\varkappa) & (km+l)-(em+g)y_1 \end{vmatrix} \\ &= \bigl((d+e\varkappa)x_1-(a+b\varkappa)\bigr)\bigl((km+l)-(em+g)y_1\bigr) - \bigl((d+e\varkappa)y_1-(h+k\varkappa)\bigr)\bigl((bm+c)-(em+g)x_1\bigr) \\ &= \bigl((d+e\varkappa)(km+l)x_1-(a+b\varkappa)(km+l)+(a+b\varkappa)(em+g)y_1\bigr) - \bigl((d+e\varkappa)(bm+c)y_1-(h+k\varkappa)(bm+c)+(h+k\varkappa)(em+g)x_1\bigr) \\ &= \bigl((d+e\varkappa)(km+l)-(h+k\varkappa)(em+g)\bigr)x_1 + \bigl((a+b\varkappa)(em+g)-(d+e\varkappa)(bm+c)\bigr)y_1 + \bigl((h+k\varkappa)(bm+c)-(a+b\varkappa)(km+l)\bigr) \end{align*}

Note that the two products each contribute an $x_1y_1$ term, $\mp(d+e\varkappa)(em+g)\,x_1y_1$, and these cancel in the second step, so the result is linear in $x_1$ and $y_1$.
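This expansion can be double-checked symbolically; a sketch with SymPy, where the symbol names are mine and `K` stands in for $\varkappa$:

```python
import sympy as sp

# Symbolic check of the hand expansion above: the 2x2 determinant equals
# Cx*x1 + Cy*y1 + C0 with the grouped coefficients, and nothing else survives.
# (Symbol names are mine; K stands for varkappa.)
a, b, c, d, e, g, h, k, l, K, m, x1, y1 = sp.symbols('a b c d e g h k l K m x1 y1')

det = sp.Matrix([
    [(d + e*K)*x1 - (a + b*K), (b*m + c) - (e*m + g)*x1],
    [(d + e*K)*y1 - (h + k*K), (k*m + l) - (e*m + g)*y1],
]).det()

Cx = (d + e*K)*(k*m + l) - (h + k*K)*(e*m + g)   # coefficient of x1
Cy = (a + b*K)*(e*m + g) - (d + e*K)*(b*m + c)   # coefficient of y1
C0 = (h + k*K)*(b*m + c) - (a + b*K)*(k*m + l)   # constant term

print(sp.expand(det - (Cx*x1 + Cy*y1 + C0)))  # prints 0
```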

So he obtains the equation of a line $y_1=\varkappa_1 x_1+m_1$ with

$$\varkappa_1=-\frac{(d+e\varkappa)(km+l)-(h+k\varkappa)(em+g)} {(a+b\varkappa)(em+g)-(d+e\varkappa)(bm+c)} \\ m_1=-\frac{(h+k\varkappa)(bm+c)-(a+b\varkappa)(km+l)} {(a+b\varkappa)(em+g)-(d+e\varkappa)(bm+c)}$$

which demonstrates that the transformation maps lines to lines.
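As a final numeric sanity check, one can pick arbitrary coefficients (the values below are my own, not from the text), map several points of a line $y=\varkappa x+m$, and confirm that the images satisfy $y_1=\varkappa_1 x_1+m_1$:

```python
# Numeric check that the transformation maps a line to a line; all
# coefficient values below are arbitrary choices, not from the text.
a, b, c = 2.0, 1.0, 3.0
h, k, l = 1.0, 4.0, 2.0
d, e, g = 1.0, 1.0, 5.0
kappa, m = 0.5, 1.0  # the original line y = kappa*x + m

denom = (a + b*kappa)*(e*m + g) - (d + e*kappa)*(b*m + c)
kappa1 = -((d + e*kappa)*(k*m + l) - (h + k*kappa)*(e*m + g)) / denom
m1 = -((h + k*kappa)*(b*m + c) - (a + b*kappa)*(k*m + l)) / denom

for x in (-2.0, 0.0, 1.0, 3.0):
    y = kappa*x + m                    # a point on the original line
    w = d*x + e*y + g                  # common denominator of the map
    x1, y1 = (a*x + b*y + c) / w, (h*x + k*y + l) / w
    assert abs(y1 - (kappa1*x1 + m1)) < 1e-12
print("all image points lie on y1 =", kappa1, "* x1 +", m1)
```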