Can you solve two unknowns with one equation?


For example, this equation:

$\dfrac{x}{3} = \dfrac{x}{2} \cdot y$

When you graph it out, you can see two lines that intersect perpendicularly, giving the answers $x = 0$ and $y = \dfrac{2}{3}$.

If you try replacing $x$ with any nonzero number, say $18$, $y$ will always be equal to $\dfrac{2}{3}$.

Same with $y$: if you replace it with any number other than $\dfrac{2}{3}$, $x$ will always be zero. Below is a graph of the equation.

I’ve always thought that a minimum of two equations were needed to figure out 2 unknowns. Someone please explain what’s going on.

There are 8 answers below.

---

Actually it is possible to solve for two unknowns with one equation, but only by exploiting the properties of certain functions, such as square roots, or by multiplying a linear equation through by one of the variables. For example, you can solve $y=\sqrt{x}+\sqrt{-x}$ using the fact that a real square root requires a non-negative input: $x=y=0$ is the only solution in that case. And you may solve $xy=x$, getting $y=1$ for all $x\ne0$, and $y\in\Bbb C$ when $x=0$.

Referring to the equation you mention, the reason it can have a specific solution is that $${x\over3}={x\over2}y$$ can be rewritten in the form $${2\over3}\left({x\over2}\right)=y\left({x\over2}\right)$$ If $x\ne0$, this reduces to $y={2\over3}$. But for $x=0$, $y$ can be any finite number, even a complex one.
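If you want to check this case split numerically, here is a small Python sketch (the helper name `satisfies` is just for illustration):

```python
# Check the case split for x/3 == (x/2) * y numerically.
def satisfies(x, y, tol=1e-12):
    """True when (x, y) satisfies x/3 == (x/2) * y (within tolerance)."""
    return abs(x / 3 - (x / 2) * y) < tol

# For any nonzero x, y = 2/3 works (and other values of y do not).
for x in (18, -5, 0.001):
    assert satisfies(x, 2 / 3)
    assert not satisfies(x, 1.0)

# For x = 0, every y works.
for y in (0, 2 / 3, -100, 3.14):
    assert satisfies(0, y)
```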

---

You can rearrange your equation to give $$x\cdot (3y-2)=0$$

The product of two numbers (real, rational, or complex) can be zero only if at least one of them is zero. This useful property is generalised in the concept of an integral domain. If you know about matrices, note that it is not true of matrix multiplication.

You might also note that the equation $A^2+B^2+C^2=0$ (in real numbers) requires $A=B=C=0$. Using this method (or summing numbers times their conjugates in the complex case) one can combine any number of equations into a single equation, though it gives no more information than the separate equations. In the three-variable case we have the equation of a sphere of zero radius (also known as a point), so the equation does not give the two-dimensional surface of a three-dimensional sphere, as might be expected from its form.
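A tiny brute-force sketch in Python illustrating how a sum of squares bundles several real equations into one (the search range is arbitrary):

```python
# Over the reals, A**2 + B**2 + C**2 == 0 forces A == B == C == 0,
# so one equation can encode a whole system.  Brute-force check on a grid:
solutions = [
    (a, b, c)
    for a in range(-3, 4)
    for b in range(-3, 4)
    for c in range(-3, 4)
    if a**2 + b**2 + c**2 == 0
]
assert solutions == [(0, 0, 0)]
```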

General statements about equations must take care to deal with exceptional cases. In applying general statements it is important to exclude exceptional cases.

---

In this case, you are not actually determining the unknowns in the sense you are used to; you are just finding the "range" of values they can take: $x\neq0$ with $y=2/3$, or $x=0$ with $y$ arbitrary. That is infinitely many solutions, much like $x+y=2$. But there are cases where one equation pins down a single set of values (or a finite number of them), like $y=\sqrt{x}+\sqrt{-x}$ in @IsaacNg's answer.

In general, the rule that "you need at least $n$ equations to determine $n$ unknowns" is very limited and should not be relied on too heavily. As you go further into maths, you will encounter equations more complicated than simple lines like $x+2y=3$, which need different techniques to solve. The rule works, at the basic level, mostly for systems of linear equations.

Hope this helps. :)

---

The graph of the equation does show that you're not dealing with a single solution, but with a one-dimensional solution space. (You say it yourself: if $y=2/3$, $x$ can be any number in $\Bbb R$; likewise, if $x=0$, $y$ can be any number.) That this space consists of two intersecting lines doesn't change that, even though it may reveal a distinguished point. But if you want to examine that point, you're no longer dealing with a single equation, since you have added another condition.

---

Very rarely. For example, solve $x^2 + y^2 = 0$ for $x,y \in \mathbb{R}$.

---

In asking your question, you're probably thinking of the maxim "you need at least $n$ equations to determine $n$ unknowns". As other users have pointed out, there are some cases in which an equation involving multiple variables has a limited set of real solutions.

The reality is that with many of these equations, such as the one mentioned by @IsaacNg ($y=\sqrt{x}+\sqrt{-x}$), we have more than one piece of information at our disposal. Implicit in this question is also the domain over which solutions are expected - in most of these cases, $x,y\in\mathbb{R}$.

Without specifying the domain, these multivariate equations have an infinite set of solutions, although these solutions may sometimes be complex. Going back to @IsaacNg's example, as long as $y$ can be complex, any value of $x$ would produce a corresponding value for $y$, and so there are an infinite number of possible solutions.

When a domain is specified, such as $x\in\mathbb{R}$ and $y\in\mathbb{R}$, and an equation such as $y=\sqrt{x}+\sqrt{-x}$ is given, we end up having 3 pieces of information, and 2 unknowns, and so, in a way, fulfill the aforementioned maxim.

A contrived example which demonstrates the extent to which a specification of domain (or precondition) can reduce the set of solutions down to a finite size is to have a finite domain, for instance:$$ x\in\{1,2,3\} \ \ \ \ \ \ \ \ y\in\{4,5,6\}\\ x+y=9 $$Here, despite only having one equation and two unknowns, there is clearly one solution, that is $x=3$ and $y=6$.
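This brute-force search over the finite domain is easy to replicate in a few lines of Python (a sketch using the same sets as above):

```python
# One equation, two unknowns, yet a unique solution,
# because the finite domain supplies the missing information.
solutions = [(x, y) for x in (1, 2, 3) for y in (4, 5, 6) if x + y == 9]
assert solutions == [(3, 6)]
```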

In summary, although some equations may appear to contradict the statement "you need at least $n$ equations to determine $n$ unknowns", there will always need to be, whether explicitly or implicitly stated, at least $n$ pieces of information available in order to be able to determine the values of $n$ unknowns.

---

I want to point out something that I think has not yet been directly addressed by any of the other answers.

When you have a system of equations, like for example

$$ \left\{ \begin{aligned} 3x - y - 10 &= 0 \\ 5x + y - 22&= 0 \end{aligned} \right.$$

you are looking for values of $x$ and $y$ that satisfy both equations simultaneously. That means that you are looking for points that are on both lines, i.e. at the intersection point of the two lines. We should probably read the system aloud as "$3x - y - 10 = 0$ and $5x + y - 22 = 0$", and read the solution aloud as "$x = 4$ and $y = 2$".

However, if we multiply the two equations together, we get the single equation $$(3x - y - 10)(5x + y - 22) = 0$$ This single equation is satisfied whenever either of the two factors is equal to $0$. That is, this one equation is equivalent to saying "$3x - y - 10 = 0$ or $5x + y - 22 = 0$". Its solution set is not the intersection point of the two lines, but rather the set of all points on either line.

If you just graph the two individual equations, and then graph the single equation obtained by multiplying them together, you don't really notice the difference: the picture is the same two lines. The difference lies not in the lines themselves but in what we do with them. In the first case, we are looking for the intersection of the two lines; in the second case, we are looking for their union.
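The "and" versus "or" distinction can be made concrete with a short Python sketch (the helper names `on_system` and `on_product` are mine, not standard):

```python
# "and" vs "or": the system is an intersection, the product is a union.
def eq1(x, y): return 3 * x - y - 10
def eq2(x, y): return 5 * x + y - 22

def on_system(x, y):   # both equations hold  ("and")
    return eq1(x, y) == 0 and eq2(x, y) == 0

def on_product(x, y):  # the single product equation holds  ("or")
    return eq1(x, y) * eq2(x, y) == 0

assert on_system(4, 2)         # the intersection point (4, 2)
assert on_product(4, 2)
assert not on_system(0, -10)   # on the first line only...
assert on_product(0, -10)      # ...but still a solution of the product
```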

Now let's take a look at your original problem. You have said that the solution to the equation $\frac x3 = \frac x2 \cdot y$ is

$x = 0$ and $y = \frac 23$

But it would be more accurate to say that the solution is

$x = 0$ or $y = \frac 23$

That is, the solution is not the single point $(0, \frac 23)$; rather, every point on either of the two lines is a solution to your equation.

---

I’ve always thought that a minimum of two equations were needed to figure out 2 unknowns.

This is true only for equations of certain types, and even then there can be exceptions. The types of equations for which it holds are very important in practice, so it's good to have this intuition, but you also need to be aware of the assumptions under which it is true, so that you can recognize the scenarios in which it isn't.

Degrees of freedom and dimensions

First of all, let's assume that these are equations involving real numbers. I'll discuss other kinds of numbers briefly at the end of this answer.

You can think of each unknown in an equation as a degree of freedom along a line: the value of the unknown is the position of a slider on that line. Conversely, each equation takes away one degree of freedom. If you want a system of equations to have a single solution, you need to take away all the degrees of freedom. So with this intuition (and it's a good intuition, but keep in mind that the truth is more complicated): one equation and one unknown should have a single solution; two equations and two unknowns should have a single solution; one equation and two unknowns is underconstrained and should have infinitely many solutions; and two equations and one unknown is overconstrained and should have no solutions at all.

There is a mathematical concept behind this intuition: the concept of dimension. You can interpret unknowns geometrically: $n$ unknowns represent the coordinates of a point in $n$-dimensional space. For example, one unknown is the coordinate of a point on a (straight, infinite) line; two unknowns are the coordinates of a point in a plane; three unknowns are the coordinates of a point in three-dimensional space, and so on. Each equation restricts the set of allowed solutions to a sub-space of dimension $n-1$. In general (for some definition of “in general” that depends on the type of equations), the intersection of $k$ sub-spaces of dimension $n-1$ has dimension $n-k$. In particular, the intersection of $n$ sub-spaces of dimension $n-1$ has dimension $0$, and a space of dimension $0$ is a point, or at least something close to it, like a finite set of points. For example, with two unknowns, the unknowns are coordinates in a plane; each equation is a curve, so two equations make two curves, and the intersection of two curves is in general a finite set of points. With three unknowns, the unknowns are coordinates in three-dimensional space; each equation is a surface, so three equations give three surfaces, and the intersection of three surfaces is in general a finite set of points.

I mentioned that the type of equation matters: the more complicated the equations can get, the more complicated the set of solutions can get. I'll just give a few examples and, in each case, I won't fully explain the theory because there's a whole branch of mathematics behind it.

Linear spaces

Amongst the simplest kinds of equations are linear equations, where the only things you can do to an unknown are to multiply it by a constant and to add such terms and constants together. You can't multiply an unknown by another unknown, or multiply an unknown by itself, or use operations other than addition and multiplication. (You can use subtraction: it's the same thing as addition combined with multiplication by $-1$.) After grouping terms together (for example $2x + y + 3x$ is the same thing as $5x + y$), linear equations have a simple shape that only depends on the number of unknowns. For example, a linear equation with two unknowns has the form $A x + B y = C$ where $A$, $B$ and $C$ are constants. The theory of linear equations is called linear algebra.

With linear equations, the relevant theory is that of vector spaces. The theory of dimensions for vector spaces is pretty simple: a zero-dimensional vector space is a single point, a one-dimensional vector space is a line, a two-dimensional vector space is a plane, etc. In general, in an $n$-dimensional space, the intersection of a space of dimension $n-k$ and a space of dimension $n-j$ has dimension $n-j-k$. (This never goes below $0$, because when $j + k \gt n$ the conditions that I'm calling “in general” are impossible.) The condition that I'm calling “in general” has a relatively simple mathematical definition (but still a bit too complicated to explain fully in this answer; it's typically taught in undergraduate algebra courses), which is roughly that the two sub-spaces must not be parallel.

Let's start by looking at the case of a single equation with a single unknown. The geometric intuition is that this determines the location of a point on a line. Even so, there are three cases:

  • In general, the equation determines where the point is on the line: the equation has the form $A x = B$ and it has a single solution $x = \frac{B}{A}$.
  • But when $A = 0$, there are no solutions…
  • except that when $A = 0$ and $B = 0$, every $x$ is a solution to the equation $0 x = 0$.
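The three cases above can be sketched as a small Python function (the names and return conventions are illustrative):

```python
# The three cases for a single linear equation A*x == B over the reals.
def solve_linear(A, B):
    """Return 'unique', 'none', or 'all', with the solution when unique."""
    if A != 0:
        return ("unique", B / A)  # general case: x = B / A
    if B != 0:
        return ("none", None)     # 0*x == B with B != 0: no solution
    return ("all", None)          # 0*x == 0: every x is a solution

assert solve_linear(2, 6) == ("unique", 3.0)
assert solve_linear(0, 5) == ("none", None)
assert solve_linear(0, 0) == ("all", None)
```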

Let's now be more concrete with the case of a two-dimensional space, i.e. two unknowns. Each equation is a line (an infinite, straight line). There are three cases for a system of two linear equations with two unknowns:

  • In general – and two lines picked “at random” will always be in that case – the lines intersect at a single point. The system of equations has a single solution, which is the coordinates of that point.
  • The lines can be parallel (but not identical). In that case, the system has no solution. For example, $x + y = 1 \text{ and } 2x + 2y = 1$ represents two parallel lines and has no solution. You can think of the two equations as being contradictory, although this intuition doesn't generalize well to systems with more equations and more unknowns, whereas the geometric interpretation of parallel sub-spaces does.
  • The lines can be identical. In that case, the system has infinitely many solutions. For example, $x + y = 1 \text{ and } 2x + 2y = 2$ represents the same line twice, and the coordinates of every point on that line are a solution. You can think of the two equations as being redundant, although again this intuition doesn't generalize well to systems with more equations and more unknowns, whereas the geometric interpretation of sub-spaces with an intersection of nonzero dimension does.
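The three cases can likewise be sketched in Python via the determinant $a_1 b_2 - a_2 b_1$ (a minimal sketch; it assumes each equation really describes a line, i.e. its coefficients are not all zero):

```python
# Case analysis for the 2x2 linear system
#   a1*x + b1*y = c1
#   a2*x + b2*y = c2
def solve_2x2(a1, b1, c1, a2, b2, c2):
    det = a1 * b2 - a2 * b1
    if det != 0:                       # lines cross: unique solution (Cramer)
        x = (c1 * b2 - c2 * b1) / det
        y = (a1 * c2 - a2 * c1) / det
        return ("unique", (x, y))
    # det == 0: parallel lines; identical iff the constants match up too
    if a1 * c2 - a2 * c1 == 0 and b1 * c2 - b2 * c1 == 0:
        return ("infinitely many", None)
    return ("none", None)

assert solve_2x2(1, 1, 1, 2, 2, 1) == ("none", None)              # parallel
assert solve_2x2(1, 1, 1, 2, 2, 2) == ("infinitely many", None)   # identical
assert solve_2x2(3, -1, 10, 5, 1, 22) == ("unique", (4.0, 2.0))   # crossing
```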

Polynomials and varieties

Let's now consider equations where you can use constants, unknowns, addition and multiplication. We now allow multiplying unknowns together (including multiplying an unknown by itself, so you can raise an unknown to a power, e.g. $x^2$, $x^3$, …). In other words, we are now looking at polynomials. The theory of polynomial equations is called algebraic geometry.

We can still interpret unknowns as coordinates in an $n$-dimensional space. The set of solutions is then called an algebraic variety. As with linear equations, the intersection of a variety of dimension $n-k$ with a variety of dimension $n-j$ is in general a variety of dimension $n-k-j$, although now the conditions for “in general” are a lot more fiddly. There are many more possible shapes for algebraic varieties than there are for linear spaces.

As before, let's first look at the case of a single equation with a single unknown. The general form of such an equation is $A_0 + A_1 x + A_2 x^2 + A_3 x^3 + \ldots + A_k x^k = 0$; every polynomial equation with one unknown can be rewritten in this form (by expanding, grouping terms, etc.). In other words, it's $P(x) = 0$ where $P$ is a polynomial. The largest index $k$ for which $A_k \ne 0$ is called the degree of the polynomial, and numbers $x$ such that $P(x) = 0$ are called roots of the polynomial. The special case of a polynomial of degree $1$ is a linear function. The theory of polynomials is well understood, and apart from the special case of the zero polynomial (where all coefficients are $0$, so every value of $x$ is a solution), a polynomial of degree $k$ has at most $k$ roots. (This is a consequence of the fact that if $P(r) = 0$ then $P(x)$ can be written in the form $(x - r) Q(x)$ where $Q$ is itself a polynomial, of degree one less than $P$. If $P$ has $k$ roots then $P = (x - r_1) \cdots (x - r_k) Q$ where $Q$ has degree $0$, i.e. $Q$ is a constant, and there can be no more roots.) Note that with polynomials, one equation and one unknown does not, in general, have a single solution, but a finite set of solutions whose maximum size depends on the polynomial.
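As a concrete illustration, here is a Python sketch checking a cubic with three roots (the scan over a small integer range is just for demonstration and only finds integer roots):

```python
# A degree-3 polynomial has at most 3 roots; this one has exactly 3.
def P(x):
    return (x - 1) * (x - 2) * (x - 3)   # expands to x^3 - 6x^2 + 11x - 6

integer_roots = [x for x in range(-10, 11) if P(x) == 0]
assert integer_roots == [1, 2, 3]
assert len(integer_roots) <= 3           # never more roots than the degree
```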

Let's now look briefly at the case of two polynomial equations with two unknowns. Here, the geometric interpretation is the intersection of two curves in a plane. The possible shapes depend on the degrees of the polynomials. Just to see how the complexity increases, let's look specifically at polynomials of degree $2$. A polynomial equation of degree $2$ with two unknowns describes a conic section. That alone is already a bunch of different shapes: ellipses, parabolas, hyperbolas, as well as straight lines. The solutions of a system of two polynomial equations of degree two are the coordinates of the points in the intersection of two conics, which is in general up to $4$ points, or in “degenerate” cases more than that (e.g. two redundant equations).

Unlike the case of linear equations, it is possible to have a single solution for a single polynomial equation with many unknowns. This is due to the fact that with real numbers, a square is always non-negative. So a single polynomial equation like $P(x,y)^2 + Q(x,y)^2 = 0$, where $P$ and $Q$ are polynomials, is equivalent to the system of two equations $P(x,y) = 0 \text{ and } Q(x,y) = 0$. For example, $(x - 1)^2 + (y - 2)^2 = 0$ has a single solution: $x = 1 \text{ and } y = 2$.
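A brute-force Python sketch over a small grid (step $0.5$, chosen arbitrarily) illustrates the single solution:

```python
# (x-1)**2 + (y-2)**2 == 0 over the reals: one equation, two unknowns,
# a single solution, because each square must vanish separately.
grid = [(x / 2, y / 2) for x in range(-10, 11) for y in range(-10, 11)]
solutions = [(x, y) for (x, y) in grid if (x - 1) ** 2 + (y - 2) ** 2 == 0]
assert solutions == [(1.0, 2.0)]
```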

Curves are manifold(s)

Let's generalize again and now allow equations to use all kinds of mathematical operations. The general theory of equations involving smooth functions (or more generally differentiable functions) is called differential geometry. (Most functions involved in physics are smooth, or at least smooth except where they aren't; physical intuition about a problem usually makes it obvious where the functions aren't smooth, but when doing maths this can be a major gotcha.) Solution sets of systems of equations involving differentiable functions are called [differentiable manifolds](https://en.wikipedia.org/wiki/Differentiable_manifold). This can even be generalized to more general manifolds when studying continuous functions.

The concept of dimension still applies to manifolds. Again, a system of equations with $n$ unknowns determines a subset of an $n$-dimensional space. Each equation defines a manifold which is in general of dimension $n-1$, and the intersection of $n$ manifolds of dimension $n-1$ is in general a manifold of dimension $0$, i.e. a finite set of points, or something much like one. But without algebraic constraints, the definition of “in general” and the possible shapes are a lot more varied. I'll just give a couple of examples with a single equation to illustrate some of that variety.

Let $f(x) = \begin{cases} e^{-1/x} & \text{if \(x \gt 0\)} \\ 0 & \text{if \(x \le 0\)} \end{cases}$. This function is infinitely differentiable, even at $0$: from the point of view of differential calculus, it's as regular as it gets. Note that $f(x) = 0$ if and only if $x \le 0$, and that $f$ is one-to-one on the positive reals. This kind of smooth “switch” at $0$ allows constructing smooth functions that behave in arbitrary ways on different parts of their domain. For example, consider the equation $f(x) = f(y)$: its solutions are $\{(x,y) \mid (x \le 0 \text{ and } y \le 0) \text{ or } x = y\}$. That is, the solution space is the quarter-plane of nonpositive coordinates, plus the line $x=y$. Part of the solution space has dimension $2$ and part has dimension $1$!
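A Python sketch of this function and the structure of its solution set (floating-point, so only a spot check):

```python
import math

# The smooth "switch" function: e^(-1/x) for x > 0, and 0 for x <= 0.
def f(x):
    return math.exp(-1 / x) if x > 0 else 0.0

# f vanishes exactly on x <= 0 ...
assert f(-3) == 0.0 and f(0) == 0.0
# ... and is strictly increasing (hence one-to-one) for x > 0,
# so f(x) == f(y) there forces x == y.
assert 0 < f(1) < f(2) < 1
# So (x, y) solves f(x) == f(y) when both are <= 0, or when x == y.
assert f(-1) == f(-2)       # the dimension-2 part: the quarter-plane
assert f(1.5) == f(1.5)     # the dimension-1 part: the line x == y
assert f(1) != f(2)
```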

For a different kind of weirdness, consider the equation $\sin(x) = 0$ – we're looking at zeros of the sine function. The solutions form an infinite but discrete set of values: they are the numbers $x = k \pi$ for some integer $k$. The space of solutions has dimension $0$, yet it is infinite.
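A quick numerical spot check in Python (floating-point, hence the tolerance):

```python
import math

# The zeros of sine are exactly the integer multiples of pi:
# infinitely many solutions, but a discrete (0-dimensional) set.
for k in range(-5, 6):
    assert math.isclose(math.sin(k * math.pi), 0.0, abs_tol=1e-9)

# A point between consecutive multiples of pi is not a solution.
assert math.isclose(math.sin(math.pi / 2), 1.0)
```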

Other kinds of numbers

Earlier I mentioned that this answer dealt with real numbers. What about other kinds of numbers?

With complex numbers, the theory of linear equations is pretty much identical to the one for real numbers, and the theory of differential or topological manifolds is also very similar. On the other hand, the theory of algebraic varieties is different. Over the complex numbers, every nonconstant polynomial has a root, so phenomena like $x^2 + y^2 = 0$ constraining both $x$ and $y$ to single values cannot happen. A polynomial equation always reduces the dimension of the solution space by $1$, except in the “redundant” or “contradictory” cases where it reduces it by less or makes the solution set empty. Unlike with real numbers, one equation cannot reduce the dimension by more than $1$.

With rational numbers, the theory of linear equations is pretty much identical to the one for real numbers: it's still the theory of vector spaces. On the other hand, once you throw more operations into the mix, the theory quickly gets wild, much as with integers.

With integers, very complex phenomena can happen. Even with linear equations, you run into questions of divisibility. Saying that $3 x = y$ has a solution is equivalent to saying that $y$ is divisible by $3$. Saying that $A x + B y = 1$ has solutions (where $x$ and $y$ are the unknowns and $A$ and $B$ are constants) is equivalent to saying that $A$ and $B$ are coprime.
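The standard way to exhibit such $x$ and $y$ when $A$ and $B$ are coprime is the extended Euclidean algorithm; a minimal Python sketch:

```python
# Bezout: A*x + B*y == 1 has integer solutions iff gcd(A, B) == 1.
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return (a, 1, 0)
    g, x, y = extended_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

g, x, y = extended_gcd(15, 4)      # 15 and 4 are coprime
assert g == 1 and 15 * x + 4 * y == 1

g, _, _ = extended_gcd(15, 6)      # gcd 3: 15x + 6y == 1 has no solution
assert g == 3
```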

Integer solutions of polynomial equations are enough to formulate some unsolved mathematical problems. For example, the twin prime conjecture is an open problem: are there infinitely many integers $p$ such that both $p$ and $p+2$ are prime? For a given $p$, this primality condition is equivalent to saying that $\begin{cases} x y = p \\ x' y' = p + 2 \end{cases}$ (two equations, four unknowns) has only four solutions in positive integers $x$, $y$, $x'$, $y'$. Another example: whether $x^n + y^n = z^n$ has solutions in positive integers, for a given $n \ge 3$ and unknowns $(x,y,z)$, is Fermat's last theorem, a.k.a. the Fermat-Wiles theorem, which remained an open problem for over three centuries and was only solved in the late 20th century by very advanced techniques.

A polynomial equation over the integers is called a Diophantine equation. The theory of Diophantine equations can encode arbitrary computations: it is impossible to design an algorithm that can solve all Diophantine equations. (It's not just that we don't know of such an algorithm: we can prove that no such algorithm exists.)