Single equation to graph a line in $3$-d


I know that to represent a line in $3$-d mathematically we usually need a pair of equations, each representing a surface; their intersection is then the desired line. I wonder whether there is any way to represent a line with only one equation. To be more precise: when we draw the graph in a $3$-d graphing calculator, can we use that single equation as the input?

Another point of confusion: suppose we have a parametric vector equation of a line in $3$-d, $\boldsymbol{l}(t)=\boldsymbol{a}+t\boldsymbol{v}$.

Let $\boldsymbol{l}=(x,y,z)$, $\boldsymbol{a}=(x_1,y_1,z_1)$, $\boldsymbol{v}=(a,b,c)$. Then we get three equations:

$$\begin{cases} x=x_1 +at ; & \\ y=y_1 +bt ; & \\ z=z_1 +ct \end{cases} $$

Suppose first that the line lies in $2$-d. Then, eliminating $t$ between the first two equations, we get the equation of the line in $2$-d: $$ \frac{x-x_1}{a} = \frac{y-y_1}{b} $$ Putting this into a $2$-d graphing calculator, we get a nice line. But if the line is in $3$-d, the best we can do is eliminate $t$ from the first two equations and from the last two, and then multiply the two resulting equations, which gives: $$ \frac{(x-x_1)(z-z_1)}{ac} = \biggl(\frac{y-y_1}{b}\biggr)^2 $$

But if you put this into a $3$-d graphing calculator, what you get is not a line but a quadric surface (a cone). Why?
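A quick numeric sketch confirms the observation (plain Python, with a hypothetical example line through the origin with direction $(1,2,3)$): points that are not on the line can still satisfy the combined equation, so its solution set is strictly larger than the line.

```python
# Verify numerically that the combined equation is satisfied by points
# that are NOT on the line. Hypothetical example: line through the
# origin with direction (a, b, c) = (1, 2, 3).
x1, y1, z1 = 0.0, 0.0, 0.0
a, b, c = 1.0, 2.0, 3.0

def on_combined(x, y, z, tol=1e-9):
    # (x - x1)(z - z1)/(ac) = ((y - y1)/b)^2
    return abs((x - x1) * (z - z1) / (a * c) - ((y - y1) / b) ** 2) < tol

def on_line(x, y, z, tol=1e-9):
    # All three parametric equations must agree on the same t.
    t = (x - x1) / a
    return abs(y - (y1 + b * t)) < tol and abs(z - (z1 + c * t)) < tol

print(on_line(1, 2, 3), on_combined(1, 2, 3))  # True True  (t = 1)
print(on_line(3, 2, 1), on_combined(3, 2, 1))  # False True (extra point)
```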


There are 2 best solutions below


Here's a longish answer to help you think about the relationships between dimensions of solution sets, the spaces they live in, and the number of equations. This is what you should keep in mind:

Generically, each equation that a set of variables satisfies cuts the dimension down by one.$^\dagger$

For this reason, it's nice to think in terms of codimension. In $\mathbb{R}^2$, with no equations we have the whole plane, with one equation we get a line, and with two equations we get a single point. Let's look at this in tabular form: $$\begin{array}{cccc} \text{# of eq'ns} & \mathrm{codim} & \dim & \text{solution set} \\ \hline 0 & 0 & 2 & \text{plane} \\ 1 & 1 & 1 & \text{line} \\ 2 & 2 & 0 & \text{point} \\ \end{array}$$

In three dimensions, we can have anywhere from $0$ to $3$ equations: $$\begin{array}{cccc} \text{# of eq'ns} & \mathrm{codim} & \dim & \text{solution set} \\ \hline 0 & 0 & 3 & \text{space} \\ 1 & 1 & 2 & \text{plane} \\ 2 & 2 & 1 & \text{line} \\ 3 & 3 & 0 & \text{point} \end{array}$$

When you combine two equations in some fashion, the equation that you get will still be satisfied by the solution set. In other words, the solutions to your original system of equations will form a subset of the solutions of this new equation, but not necessarily all of it. There are many, many ways to recombine equations to form new equations, but you can't get around the number of equations needed to cut out the solution set.
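A minimal sketch of this (plain Python, with two assumed planes $x+y=0$ and $x-y=0$ whose intersection is the $z$-axis): the combined equation $2x=0$ keeps every old solution but picks up extra ones.

```python
# Combining two equations keeps every old solution but can admit new ones.
# The planes f1 = x + y = 0 and f2 = x - y = 0 intersect in the z-axis.
def f1(x, y, z): return x + y
def f2(x, y, z): return x - y
def combined(x, y, z): return f1(x, y, z) + f2(x, y, z)  # = 2x, the plane x = 0

p_on_axis = (0, 0, 7)    # on the z-axis
p_off_axis = (0, 5, 7)   # on the plane x = 0, but not on the z-axis

print(f1(*p_on_axis) == 0 and f2(*p_on_axis) == 0)    # True
print(combined(*p_on_axis) == 0)                      # True: old solutions survive
print(f1(*p_off_axis) == 0 and f2(*p_off_axis) == 0)  # False
print(combined(*p_off_axis) == 0)                     # True: an extra solution
```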

What about parametrization? In $2$-d, you observe that it looks like $2$ parametric equations can be replaced by a single equation. It looks like you are saving an equation, but you're not really, as the latter equation no longer remembers a specific description of the location of each point on the line like the parametric form did.

Another way to think about this is that a parametrization $\mathbf{F}: \mathbb{R} \to \mathbb{R}^n$ puts a copy of the real line in the bigger space $\mathbb{R}^n$ in a particular position. There's a correspondence between each real value of the parameter $t \in \mathbb{R}$ and its image $\mathbf{F}(t) = \bigl(f_1(t), \dots, f_n(t)\bigr) \in \mathbb{R}^n$.

On the other hand, an equation that cuts out a solution set in a space comes from a function $f: \mathbb{R}^n \to \mathbb{R}$, where we look at a preimage $$ f^{-1}(0) = \{ (x_1, \dots, x_n) \in \mathbb{R}^n \mid f(x_1, \dots, x_n) = 0 \}. $$

This is generically codimension $1$, i.e. its dimension is $n-1$. If we want higher codimension (equivalently, lower dimension), then we need more equations, which looks like a vector-valued function $$ \mathbf{F} = (f_1, \dots, f_k): \mathbb{R}^n \to \mathbb{R}^k, $$ where we cut out the solution set again via the preimage $\mathbf{F}^{-1}(\mathbf{0})$, which unpacks as a system of equations $$ \left\{ \begin{aligned} f_1(x_1, \dots, x_n) &= 0 \\ &\;\; \vdots \\ f_k(x_1, \dots, x_n) &= 0. \end{aligned} \right. $$

The solution set will generically be of codimension $k$ in the $n$-dimensional space, i.e. a space of dimension $n-k$. Hence, in $3$-d, a single generic equation describes a surface (a plane, when the equation is linear), and we need two equations to describe a line.
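As a small sanity check of the $k=2$ case (Python, with a hypothetical line through $(1,2,3)$ with direction $(1,1,2)$): every point of the parametric line satisfies both plane equations obtained by eliminating $t$, so the line is exactly the preimage $\mathbf{F}^{-1}(\mathbf{0})$ for $\mathbf{F}=(f_1,f_2)$.

```python
# A line in R^3 as the zero set of F = (f1, f2): R^3 -> R^2.
# Hypothetical line: (x, y, z) = (1, 2, 3) + t * (1, 1, 2).
# Eliminating t gives two plane equations:
def f1(x, y, z): return (x - 1) - (y - 2)      # (x-x1)/a = (y-y1)/b, a = b = 1
def f2(x, y, z): return 2 * (y - 2) - (z - 3)  # (y-y1)/b = (z-z1)/c, c = 2

for t in (-1.0, 0.0, 0.5, 3.0):
    x, y, z = 1 + t, 2 + t, 3 + 2 * t
    # every parametric point solves both equations
    assert f1(x, y, z) == 0 and f2(x, y, z) == 0
```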


$^\dagger$This is not true at singular points or at redundant intersections, but we can ignore this detail for now, as linear equations don't have singularities and redundant intersections generically don't occur.


Of course, over $\mathbb{R}$ or $\mathbb{Q}$ there is a way. Since squares are nonnegative in a totally ordered field, a sum of squares vanishes only when every term vanishes, so we can express a system of equations $f_1=0,\dots,f_n=0$ as the single equation $f_1^2+\dots+f_n^2=0$. Using that, we can write e.g.: $$ \left(\frac{x-x_0}{a}-\frac{y-y_0}{b}\right)^2+\left(\frac{y-y_0}{b}-\frac{z-z_0}{c}\right)^2+\left(\frac{z-z_0}{c}-\frac{x-x_0}{a}\right)^2=0 $$ for a line through $(x_0,y_0,z_0)$ with direction $(a,b,c)$ (suitably modified if any of $a,b,c$ are zero).
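A quick numeric check of this (plain Python; the base point and direction are made-up values): the sum-of-squares expression vanishes on every point of the parametric line and is strictly positive off it.

```python
# Over the reals, the single sum-of-squares equation g = 0 vanishes
# exactly on the line. Base point and direction are made-up values.
x0, y0, z0 = 1.0, 0.0, 2.0
a, b, c = 2.0, 1.0, 3.0

def g(x, y, z):
    u = (x - x0) / a - (y - y0) / b
    v = (y - y0) / b - (z - z0) / c
    w = (z - z0) / c - (x - x0) / a
    return u * u + v * v + w * w

# Every point on the parametric line makes g vanish...
for t in (-2.0, 0.0, 1.5):
    x, y, z = x0 + a * t, y0 + b * t, z0 + c * t
    assert abs(g(x, y, z)) < 1e-12

# ...while a point off the line gives g > 0.
assert g(5.0, 5.0, 5.0) > 0
```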

This breaks down if you work over $\mathbb{C}$, where squares are no longer nonnegative: for instance $1^2+i^2=0$ although neither term is zero.
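A one-line check with Python's built-in complex arithmetic illustrates the failure:

```python
# Over C, a sum of squares can vanish without each term vanishing:
# 1^2 + i^2 = 0, so the sum-of-squares trick no longer cuts out the line.
z1, z2 = 1 + 0j, 1j
print(z1 ** 2 + z2 ** 2)  # 0j, although neither term is zero
```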