Is this a known method for solving linear equations in 1 variable?


I thought of a method to solve linear equations in one variable and I want to know if this is a well-known method or not. It's an elementary method, which makes me think it has probably been known for a thousand years, yet I can't find any information about it online.

Given a linear equation in one variable, you test $x=0$ and $x=1$ as potential solutions; if neither works, the differences between the left and right sides of the equation determine the solution.

Denote the error (RHS minus LHS) at $x=0$ by $E_{0}$ and the error at $x=1$ by $E_{1}$. Then the solution to the equation is $x=\frac{E_{0}}{E_{0}-E_{1}}$.

For example: Consider the equation $5(6x-3)+1=2(7-9x)+4$.

| $x$ | LHS | RHS | Error |
|-----|-----|-----|-------|
| $0$ | $-14$ | $18$ | $32$ |
| $1$ | $16$ | $0$ | $-16$ |

The solution is $x=\frac{32}{32-(-16)}=\frac{2}{3}$.

This works because $(0,32)$ and $(1,-16)$ are points on the graph of the (linear) error function; the solution is the $x$-intercept of that line.
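As a sanity check, the method can be sketched in a few lines of Python (the function name `solve_linear` is mine, and I use `Fraction` to keep the answer exact):

```python
from fractions import Fraction

def solve_linear(lhs, rhs):
    """Solve lhs(x) == rhs(x) for x, where both sides are linear.

    Two-guess method: the error E(x) = rhs(x) - lhs(x) is itself
    linear, so sampling it at x = 0 and x = 1 determines it; the
    solution is the root E_0 / (E_0 - E_1).
    """
    e0 = rhs(0) - lhs(0)   # E_0, the error at x = 0
    e1 = rhs(1) - lhs(1)   # E_1, the error at x = 1
    return Fraction(e0, e0 - e1)

# The worked example: 5(6x - 3) + 1 = 2(7 - 9x) + 4
x = solve_linear(lambda t: 5*(6*t - 3) + 1,
                 lambda t: 2*(7 - 9*t) + 4)
# x == Fraction(2, 3)
```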

This method works for special cases as well:

  • The equation is true for any number if and only if both errors are $0$
  • The equation has no solutions if and only if the errors are equal (to something other than $0$)

Thank you to anyone who can point me to any writing that already exists about this method.


I'm pretty sure it's equivalent to some of the more standard methods of solution, but I think it's pretty cool that you came up with it on your own.

If I were trying to understand what was going on, I would say that an arbitrary equation of your form can be written as $$f(x) = g(x)$$ where $f$ and $g$ are both linear. Then you are transforming it to $$f(x) - g(x) = 0$$ where $f(x) - g(x)$ has the form $ax+b$. Then your "errors" are given by $E_0 = f(0) - g(0)$, and $E_1 = f(1) - g(1)$. Take a look at how $E_0$ and $E_1$ are related to $a$ and $b$, and you might get a bit more insight.
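Spelling out that hint with the same names: since $f(x) - g(x) = ax + b$, we get $$E_0 = f(0) - g(0) = b, \qquad E_1 = f(1) - g(1) = a + b,$$ so $a = E_1 - E_0$ and $b = E_0$. Solving $ax + b = 0$ then gives $$x = -\frac{b}{a} = -\frac{E_0}{E_1 - E_0} = \frac{E_0}{E_0 - E_1},$$ which is exactly the formula from the question (the formula is unchanged if both errors are negated, so the sign convention for "error" doesn't matter).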


BTW, another way of looking at your solution is that $E_0$ expresses how "wrong" you are by guessing $0$ for $x$ (otherwise known as the $y$-intercept). And $E_1 - E_0$ tells you how much you "change the error" when you increase $x$ by $1$. So dividing your "error" by that tells you how much you need to change $x$ (from $0$) to make your "error" disappear. Notice that this also predicts that you don't have to start at $0$: you could come up with other formulas like $$7 - \frac{E_7}{E_8 - E_7}$$ or $$4 - \frac{E_4}{(E_6 - E_4)/2},$$ where in the second one we compare the "errors" at points that are $2$ apart, so to get the "change in error per unit change in $x$" we have to divide by $2$.
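That generalization is easy to check mechanically. A minimal sketch in Python (the function name is mine; `error` is assumed to be RHS minus LHS, though any consistent sign convention gives the same root):

```python
from fractions import Fraction

def solve_from_two_guesses(error, x0, x1):
    """Solve a linear equation from its error at two distinct guesses.

    (error(x1) - error(x0)) / (x1 - x0) is the change in error per
    unit change in x; walk back from x0 until the error vanishes.
    """
    e0, e1 = error(x0), error(x1)
    return x0 - Fraction(e0 * (x1 - x0), e1 - e0)

# Error function of the worked example: E(x) = RHS - LHS = 32 - 48x
E = lambda t: 32 - 48*t
a = solve_from_two_guesses(E, 7, 8)   # guesses 1 apart
b = solve_from_two_guesses(E, 4, 6)   # guesses 2 apart
# a == b == Fraction(2, 3), matching the original x=0, x=1 guesses
```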

You can also describe your two special cases in these terms. If the errors are the same at two different values of $x$, then you've "got no gas" - changing the value of $x$ doesn't change the error.

So if one of those values of $x$ is a solution, then since changing $x$ doesn't change the error, you can change $x$ to whatever you want and still have a solution. And if neither of those values is a solution, you can change $x$ as much as you want without ever changing the "error", so you can never get a solution.

(The more I use it, the more I like this way of thinking about solving these. Thanks for sharing.)