Could you please explain how to find the equations of two straight lines from the joint equation $ax^2+2hxy+by^2+2gx+2fy+c=0$?
To convert a given pair of straight lines into their joint equation, I would just multiply the two equations, as shown below.
Let $a_1x+b_1y+c_1=0$ and $a_2x+b_2y+c_2=0$ be two lines. To find the joint equation, I would just multiply them and simplify $(a_1x+b_1y+c_1)(a_2x+b_2y+c_2)=0$. But I wish to know how to do the reverse process, i.e., finding the equations of two lines from the joint equation.
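For concreteness, here is the forward direction as a quick SymPy check, with a made-up pair of lines of my own choosing:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Hypothetical example pair: x + y - 1 = 0 and x - y + 2 = 0.
# Multiplying and expanding gives the joint equation's left-hand side.
joint = sp.expand((x + y - 1) * (x - y + 2))
print(joint)  # x^2 - y^2 + x + 3y - 2, i.e. a=1, 2h=0, b=-1, 2g=1, 2f=3, c=-2
```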
If you’re clever or lucky, you can spot how to factor the equation into a product of linear terms, but that’s unlikely outside of artificially constructed exercises and exam questions. If the general equation does in fact represent a pair of lines, they are the common asymptotes of the family of hyperbolas obtained by varying $c$ in the equation. Indeed, the asymptotes can be considered the degenerate member of this one-parameter family. Several methods to find these asymptotes can be found in the answers to this related question and others. In Perspectives on Projective Geometry, Richter-Gebert gives an algorithm for “splitting” a degenerate conic, which I’ll reproduce briefly here.
First, it might be good to verify that the equation does in fact represent a pair of lines. Writing the equation in matrix form as $$\mathbf x^TQ\mathbf x = \begin{bmatrix}x&y&1\end{bmatrix} \begin{bmatrix}a&h&g\\h&b&f\\g&f&c\end{bmatrix} \begin{bmatrix}x\\y\\1\end{bmatrix} = 0,$$ examine $S = \det Q \text{ and } \Delta = \det\begin{bmatrix}a&h\\h&b\end{bmatrix} = ab-h^2$: if $\Delta\lt0$, the equation represents a hyperbola, and if $S=0$, it is degenerate—a pair of lines. The matrix $Q$ is then, up to an irrelevant constant factor, a rank-two matrix of the form $lm^T+ml^T$, where $l$ and $m$ are homogeneous coordinate vectors that represent the two lines. The algorithm finds a skew-symmetric matrix $M$ such that $Q+M$ is a rank-one matrix of the form $ml^T$, from which both lines can be read directly.
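As a concrete illustration of this verification (a SymPy sketch; the example conic is my own, built from the lines $x+y-1=0$ and $x-y+2=0$):

```python
import sympy as sp

# (x + y - 1)(x - y + 2) = x^2 - y^2 + x + 3y - 2 = 0,
# so a=1, h=0, b=-1, g=1/2, f=3/2, c=-2.
a, h, b = 1, 0, -1
g, f, c = sp.Rational(1, 2), sp.Rational(3, 2), -2
Q = sp.Matrix([[a, h, g], [h, b, f], [g, f, c]])

S = Q.det()          # full 3x3 determinant; 0 means degenerate
Delta = a*b - h**2   # upper-left 2x2 determinant; negative means hyperbola type
print(S, Delta)      # 0 and -1: a degenerate "hyperbola", i.e. two crossing lines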
It turns out that if $p$ is the intersection point of the lines and $\mathcal M_p$ its skew-symmetric “cross-product” matrix, then there is some real $\alpha$ for which the matrix $Q+\alpha\mathcal M_p$ has rank one. The intersection point $p$ is the center of the conic, which can be found using any of several standard methods. Once you have this point, form the matrix $Q+\alpha\mathcal M_p$ and find an $\alpha$ for which all of the $2\times2$ minors vanish. This will involve solving a straightforward quadratic equation in $\alpha$.
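Here is a rough SymPy sketch of that procedure. The example conic and the sign convention for $\mathcal M_p$ are my own choices; with a different convention the two roots for $\alpha$ simply swap the roles of rows and columns.

```python
import sympy as sp

# Degenerate conic built from (x + y - 1)(x - y + 2) = 0, i.e.
# x^2 - y^2 + x + 3y - 2 = 0, so a=1, h=0, b=-1, g=1/2, f=3/2, c=-2.
Q = sp.Matrix([[1, 0, sp.Rational(1, 2)],
               [0, -1, sp.Rational(3, 2)],
               [sp.Rational(1, 2), sp.Rational(3, 2), -2]])

# The lines' intersection point p spans the kernel of Q (it is the conic's center).
p1, p2, p3 = Q.nullspace()[0]

# One common convention for the skew-symmetric "cross-product" matrix of p.
M = sp.Matrix([[0, p3, -p2],
               [-p3, 0, p1],
               [p2, -p1, 0]])

# Find alpha making Q + alpha*M rank one: set a 2x2 minor that actually
# involves alpha to zero (in general, confirm the result really has rank one).
alpha = sp.symbols('alpha')
C = Q + alpha * M
minor = C[0, 0] * C[1, 1] - C[0, 1] * C[1, 0]
sols = sp.solve(sp.Eq(minor, 0), alpha)   # a quadratic in alpha
C1 = C.subs(alpha, sols[0])

print(C1.rank())        # 1
print(list(C1.row(0)))  # one line, up to scale: here x - y + 2 = 0
print(list(C1.col(0)))  # the other line:         here x + y - 1 = 0
```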
That isn’t quite the algorithm Richter-Gebert presents, but when you’re doing this calculation yourself, it can be more convenient than his actual algorithm:

1. Compute $B = Q^{\tiny\triangle}$.
2. Let $i$ be the index of a nonzero diagonal element $B_{ii}$ of $B$, and set $\beta = \sqrt{-B_{ii}}$ and $p = \frac1\beta B_i$, where $B_i$ is the $i$th column of $B$.
3. Compute $C = Q + \mathcal M_p$, which has rank one.
4. Let $C_{ij}$ be a nonzero element of $C$. Then the $i$th row and $j$th column of $C$ are homogeneous coordinate vectors of the two lines.
Here $Q^{\tiny\triangle}$ is the adjugate of $Q$, i.e., the transpose of its cofactor matrix. (Since $Q$ is symmetric, this is equal to its cofactor matrix.) Applying this algorithm to the general equation, we get $$C = \begin{bmatrix}a & h-\sqrt{h^2-ab} & g+{af-gh\over\sqrt{h^2-ab}} \\ h+\sqrt{h^2-ab} & b & f-{bg-fh\over\sqrt{h^2-ab}} \\ g-{af-gh\over\sqrt{h^2-ab}} & f+{bg-fh\over\sqrt{h^2-ab}} & c\end{bmatrix}.$$ It’s a moderately interesting exercise to verify that, assuming the element shared by the row and column is nonzero, every row/column pair of this matrix represents the same pair of lines and generates the original equation. If you try this, you’ll need to use $S=0$ to do some of the necessary simplification.
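As a quick numerical sanity check of this closed form (a SymPy sketch; the coefficients come from an example of my own, the conic $(x+y-1)(x-y+2)=0$):

```python
import sympy as sp

# a=1, h=0, b=-1, g=1/2, f=3/2, c=-2, from (x + y - 1)(x - y + 2) = 0.
a, h, b = 1, 0, -1
g, f, c = sp.Rational(1, 2), sp.Rational(3, 2), -2
r = sp.sqrt(h**2 - a*b)   # real, since ab - h^2 < 0 for crossing lines

# The closed-form matrix C from the text.
C = sp.Matrix([
    [a,                  h - r,              g + (a*f - g*h)/r],
    [h + r,              b,                  f - (b*g - f*h)/r],
    [g - (a*f - g*h)/r,  f + (b*g - f*h)/r,  c],
])

print(C.rank())        # 1: every row is a multiple of one line's coefficients,
print(list(C.row(0)))  #    every column a multiple of the other's
print(list(C.col(0)))
```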
This algorithm also works when the conic is a pair of parallel lines: $p$ will be a point at infinity in that case. If the conic is a double line, then $Q$ is already a rank-one matrix of the form $mm^T$, which, if you didn’t spot immediately, you will discover after computing $Q^{\tiny\triangle}$: all of the cofactors of a rank-one matrix vanish, so its adjugate is the zero matrix.
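The double-line case is easy to see in SymPy as well (example of my own devising):

```python
import sympy as sp

# Double line (x + y - 1)^2 = 0, i.e. x^2 + 2xy + y^2 - 2x - 2y + 1 = 0,
# so a=1, h=1, b=1, g=-1, f=-1, c=1: Q is already m m^T for m = (1, 1, -1).
Q = sp.Matrix([[1, 1, -1],
               [1, 1, -1],
               [-1, -1, 1]])

print(Q.rank())      # 1
print(Q.adjugate())  # the zero matrix: all cofactors of a rank-one matrix vanish
```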
Having said all that, when working this by hand, I find it easiest to compute the conic’s center—the intersection point of the lines—and get the lines’ direction vectors by finding nonzero solutions of $ax^2+2hxy+by^2=0$ (which is equivalent to finding the intersections of the hyperbola with the line at infinity). The latter is usually a matter of treating the above equation as a quadratic in one of the variables and setting the other variable to some convenient value.
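This by-hand recipe also scripts easily; here is a SymPy sketch (the example conic is my own, and I find the directions as slopes $m$ rather than by fixing a convenient value, which amounts to setting $x=1$, $y=m$ in the quadratic part):

```python
import sympy as sp

x, y, m = sp.symbols('x y m')

# Example: x^2 - y^2 + x + 3y - 2 = 0, so a=1, h=0, b=-1, g=1/2, f=3/2.
a, h, b = 1, 0, -1
g, f = sp.Rational(1, 2), sp.Rational(3, 2)

# Center: where both halved partial derivatives vanish,
# a*x + h*y + g = 0 and h*x + b*y + f = 0.
center = sp.solve([a*x + h*y + g, h*x + b*y + f], [x, y])

# Directions: slopes m with a + 2h*m + b*m^2 = 0 (the quadratic part at x=1, y=m).
slopes = sp.solve(a + 2*h*m + b*m**2, m)

# Each line passes through the center with one of these slopes.
lines = [sp.expand(y - center[y] - s*(x - center[x])) for s in slopes]
print(center)  # x = -1/2, y = 3/2
print(lines)   # x + y - 1 and -x + y - 2, i.e. the two factors up to sign
```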