Axis intercepts of hyperplane without Gaussian Elimination?


I am implementing an algorithm that requires the axis intercepts of an $n$-dimensional hyperplane. To be robust against degenerate data (a zero determinant), I would like to implement my own solution process without Gaussian elimination.

For $n=3$, I create the parametric form, convert it into the coordinate form, and extract the axis intercepts.

However, I am struggling to extend this approach to $n$-dimensional space for several reasons:

  1. How do I compute the cross product for $n>3$?
  2. How do I extract the coefficients of the coordinate form for $n>3$?
  3. How do I extract the axis intercepts from the coordinate form for $n>3$?

Could anyone point me in the right direction? Is the idea of going through parametric and coordinate forms useful at all, or is there a smarter way to get the axis intercepts?

There is 1 best solution below.

Let’s start with the last question first. The $i$-th axis intercept of the hyperplane with equation $a_1x_1+\cdots+a_nx_n+b=0$ is found by setting all other variables to zero, obtaining $a_ix_i+b=0$, or $x_i=-b/a_i$. Thus, once you have that equation, divide through by $-b$; the intercepts are then the reciprocals of the resulting coefficients. If $b$ is zero, the hyperplane passes through the origin and you’re done. If some coefficient $a_i$ is zero, the hyperplane is either parallel to or contains that coordinate axis, and there is no intercept. You have to take the usual care with division when $a_i$ is small compared to $b$, of course.
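The intercept extraction above is direct to code. Here is a minimal sketch in NumPy (the function name `axis_intercepts` and the tolerance scheme are my own choices, not from the answer); near-zero coefficients are reported as NaN to mark the "parallel to or contains the axis" case:

```python
import numpy as np

def axis_intercepts(a, b, eps=1e-12):
    """Axis intercepts of the hyperplane a_1*x_1 + ... + a_n*x_n + b = 0.

    Returns x_i = -b / a_i for each axis; entries where a_i is (nearly)
    zero relative to b are NaN, since the hyperplane is then parallel to
    or contains that coordinate axis.
    """
    a = np.asarray(a, dtype=float)
    out = np.full(a.shape, np.nan)
    # Guard the division: only divide where a_i is not tiny compared to b.
    mask = np.abs(a) > eps * max(abs(b), 1.0)
    out[mask] = -b / a[mask]
    return out
```

For example, the plane $x_1 + 2x_2 + 4x_3 - 4 = 0$ gives intercepts $(4, 2, 1)$.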

Now you just have to get that implicit Cartesian equation for your hyperplane. It sounds like you might be considering the point-normal form of the equation, which requires finding a vector perpendicular to the hyperplane. In $\mathbb R^3$ you might do this by choosing one of the points and computing the cross product of its differences with the other two. If the three points aren’t collinear, this produces a nonzero vector perpendicular to the two difference vectors. You can do the same thing in $\mathbb R^n$ by using a generalization of the formal determinant formula for the cross product $$\mathbf x\times\mathbf y = \begin{vmatrix}\mathbf i&\mathbf j&\mathbf k\\x_1&x_2&x_3\\y_1&y_2&y_3\end{vmatrix},$$ namely $$\begin{vmatrix}\mathbf e_1&\mathbf e_2&\cdots&\mathbf e_n\\x_{11}&x_{12}&\cdots&x_{1n}\\x_{21}&x_{22}&\cdots&x_{2n}\\\vdots&\vdots&\ddots&\vdots\\x_{n-1,1}&x_{n-1,2}&\cdots&x_{n-1,n}\end{vmatrix},$$ whose rows below the basis vectors are the $n-1$ difference vectors. (This is Cramer’s rule in disguise.) If the $n$ points are in general position, the difference vectors are linearly independent and this determinant is nonzero.
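The formal determinant above can be evaluated by cofactor expansion along its first row: the $j$-th component of the normal is $(-1)^{j+1}$ times the minor obtained by deleting column $j$. A minimal NumPy sketch (the name `generalized_cross` is my own; this computes each minor with a dense determinant, which is fine for moderate $n$):

```python
import numpy as np

def generalized_cross(vectors):
    """Vector in R^n orthogonal to the given n-1 vectors, computed by
    cofactor expansion of the formal determinant along its basis row."""
    M = np.asarray(vectors, dtype=float)  # shape (n-1, n)
    k, n = M.shape
    assert k == n - 1, "need exactly n-1 vectors in R^n"
    normal = np.empty(n)
    for j in range(n):
        minor = np.delete(M, j, axis=1)   # drop column j
        normal[j] = (-1) ** j * np.linalg.det(minor)
    return normal
```

With $n=3$ this reproduces the ordinary cross product, e.g. $\mathbf e_1\times\mathbf e_2=\mathbf e_3$.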

However, there’s another way available by passing to homogeneous coordinates. We can write the Cartesian equation of the hyperplane in vector form as $(\mathbf a;b)\cdot(\mathbf x;1)=0$, so the coefficients of the Cartesian equation can be found by computing a vector that’s orthogonal to the homogeneous coordinate vectors of all of the points. You could do that using the cross-product determinant from above (in fact, you get an equation of the hyperplane by replacing the basis vectors with variables and setting the determinant to zero), but another way to look at the problem is that you’re trying to find a null vector of the matrix $$\begin{bmatrix}\mathbf x_1&1\\\mathbf x_2&1\\\vdots&\vdots\\\mathbf x_n&1\end{bmatrix}.$$ Instead of the usual Gaussian elimination, a more stable and efficient method for larger $n$ is to compute the SVD and take the right-singular vector that corresponds to the smallest singular value. (In practice, that singular value won’t always be exactly $0$ because of truncation errors.)
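The SVD approach above can be sketched in a few lines of NumPy (the function name `hyperplane_coeffs` is my own): append a column of ones to the point matrix, take the SVD, and read off the last row of $V^T$, which is the right-singular vector for the smallest singular value.

```python
import numpy as np

def hyperplane_coeffs(points):
    """Coefficients (a_1, ..., a_n, b) of the hyperplane through the
    given n points in R^n, up to scale, via the SVD null-vector method."""
    P = np.asarray(points, dtype=float)            # shape (n, n)
    A = np.hstack([P, np.ones((P.shape[0], 1))])   # homogeneous coordinates
    _, _, Vt = np.linalg.svd(A)
    # numpy orders singular values descending, so the last right-singular
    # vector corresponds to the smallest singular value.
    return Vt[-1]
```

For the plane through $(1,0,0)$, $(0,1,0)$, $(0,0,1)$ this returns a multiple of $(1,1,1,-1)$, i.e. $x_1+x_2+x_3-1=0$, which can then be fed to the intercept extraction described at the start.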