Once you get beyond perhaps 10 decimal places, calculators start to make rounding errors and the like. Is it possible to build a calculator that makes none of these errors? For example, it could work out an irrational number decimal place by decimal place – each time you click a button it gives, say, 10 more digits. (Obviously, it couldn't give you "all" the digits, but it could keep computing more indefinitely.)
Is it possible for a calculator to be completely exact?
As long as we're only talking about rational numbers, one can easily program a computer to represent them all exactly -- just store each one as a pair of arbitrary-precision integers, a numerator and a denominator.
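As a concrete sketch, Python's standard-library `fractions.Fraction` does exactly this -- a pair of arbitrary-precision integers, so no operation ever rounds:

```python
from fractions import Fraction

# Each rational is stored exactly as a pair of arbitrary-precision
# integers (numerator, denominator), so no operation ever rounds.
a = Fraction(1, 10)
b = Fraction(2, 10)
print(a + b)                       # 3/10
print(a + b == Fraction(3, 10))    # True

# The floating-point version of the same sum is already inexact:
print(0.1 + 0.2 == 0.3)            # False
```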
This is much slower (and uses much more memory) than the usual approximate "floating-point" representation, and since floating point is usually more than precise enough for our purposes, exact rational arithmetic is only used in those special cases where there is a concrete need for it.
On the other hand, this won't help us represent irrational numbers like logarithms, square roots, sines, $\pi$, and so forth.
You can pick a particular subset of the irrationals -- for example "all numbers you can compute by starting from integers and repeatedly taking roots and/or ordinary arithmetic operations" -- and compute symbolically with those. Then the computer might represent (and display) the result of some operations as "$\frac{\sqrt{\sqrt3 + \sqrt{17}}}{2+\sqrt5}$". But this becomes even slower, and in many cases not as useful as a decimal approximation anyway.
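A toy illustration of this idea, restricted to the very small subset $\mathbb{Q}(\sqrt2)$ -- numbers of the form $a+b\sqrt2$ with rational $a,b$ (a real computer-algebra system handles far more general expressions, but the principle is the same):

```python
from fractions import Fraction

class QSqrt2:
    """Exact arithmetic on numbers a + b*sqrt(2) with rational a, b.
    A toy sketch of symbolic computation on a restricted subset of
    the irrationals; class name and design are illustrative only."""

    def __init__(self, a, b=0):
        self.a, self.b = Fraction(a), Fraction(b)

    def __add__(self, other):
        return QSqrt2(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a + b*sqrt2)(c + d*sqrt2) = (ac + 2bd) + (ad + bc)*sqrt2
        return QSqrt2(self.a * other.a + 2 * self.b * other.b,
                      self.a * other.b + self.b * other.a)

    def __eq__(self, other):
        return self.a == other.a and self.b == other.b

    def __repr__(self):
        return f"{self.a} + {self.b}*sqrt(2)"

r = QSqrt2(0, 1)            # sqrt(2), represented symbolically
print(r * r)                # 2 + 0*sqrt(2) -- exactly 2, no rounding
print(r * r == QSqrt2(2))   # True
```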
And as you want to support more and more operations, the algorithms needed to operate symbolically on the results quickly become extremely complex. In practice, the risk of bugs in those algorithms rises even faster. And the algorithms have to be invented in the first place -- we'd want a reliable way to test whether two symbolic expressions are the same number, for example, but it's not obvious how to do that. We don't even know whether there is any integer polynomial relation between $e$ and $\pi$, so how would we begin to compare a polynomial in $e$ and $\pi$ to $0$?
At some point we might decide to cut our losses and instead say we are operating on computable numbers and represent each number by a program that knows how to approximate that number by arbitrary-precision rationals. Then at least it seems to be fairly trivial to do arithmetic on such programs. Unfortunately, this would mean that it is now provably undecidable whether such a number is smaller or greater than a given rational, due to the halting problem. So we wouldn't even be able to compute the first $n$ decimal digits of the number reliably -- not much of an "exact" representation for practical purposes.
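One way to package such a "program" in the positive direction: a function that, given a precision, emits an arbitrary-precision rational within that distance of the number. A sketch for $\sqrt2$ using integer square roots (function name and approach are illustrative, not a standard API):

```python
from fractions import Fraction
from math import isqrt

def sqrt2_approx(n):
    """Return a rational within 10**-n of sqrt(2).

    isqrt(2 * scale**2) = floor(sqrt(2) * scale), so dividing by
    scale gives sqrt(2) with error less than 1/scale = 10**-n.
    """
    scale = 10 ** n
    return Fraction(isqrt(2 * scale * scale), scale)

# Ten digits on demand; ask again for more, indefinitely:
print(sqrt2_approx(10))   # 14142135623/10000000000
print(sqrt2_approx(20))
```

The trouble described above is the reverse direction: given two such programs, deciding whether the numbers they represent are equal (or which is larger) is undecidable in general.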
(The details of this are sensitive to exactly how we define "program that approximates the number in question". Working with "programs that produce a rational sequence converging to our number" is different from "programs that compare our number to a rational given as input", and problems turn up at different stages in developing them. But problems always do turn up.)
There are several issues with calculations made in software:
1-Programming inaccuracy (e.g. choice of rounding/truncation point, incorrect use of floating-point arithmetic, etc.)
2-Limited precision in a given language/machine.
3-Compounded errors resulting from a sequence of operations applied to a number, where each operation adds to the error.
4-Subtraction or division between numbers with a very small difference.
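Point 3 is easy to demonstrate: adding the float `0.1` to itself repeatedly lets the per-step rounding errors compound, while a summation that rounds only once at the end avoids the drift in this case:

```python
from math import fsum

xs = [0.1] * 10

# Each intermediate addition rounds, and the tiny errors compound:
print(sum(xs))           # 0.9999999999999999
print(sum(xs) == 1.0)    # False

# fsum tracks the exact sum and rounds just once at the end:
print(fsum(xs) == 1.0)   # True
```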
Most importantly, translating real numbers into a computer representation. For example, $\frac{1}{3}=0.333333\ldots$ does not terminate in mathematics, but computer memory is finite.
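This is easy to observe directly: a binary float can only store an approximation of $\frac13$, and even $0.1$, which terminates in decimal, does not terminate in binary:

```python
from fractions import Fraction
from decimal import Decimal

# The float 1/3 stores a nearby binary fraction, not 1/3 itself:
x = 1 / 3
print(Fraction(x))                    # the exact value actually stored
print(Fraction(x) == Fraction(1, 3))  # False

# 0.1 is finite in decimal but repeating in binary, so the stored
# value is a close-but-different number:
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
```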
This brings us to your question, and I think your idea is possible to some extent. Any result you see in a calculator depends on software and hardware. The software could be programmed to control:
1-Truncation rules.
2-Rounding rules.
3-Using special formulas that reduce calculation errors rather than applying the algebraic formula directly.
4-Enhancing the precision of calculations using special libraries; for example, 128-bit quadruple precision is designed for applications requiring results beyond double precision: Wiki-Quad Precision Library
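As a small illustration of points 1, 2, and 4, Python's standard-library `decimal` module lets the software choose its own working precision rather than inheriting the hardware's:

```python
from decimal import Decimal, getcontext

# Working precision is a software choice, not a hardware limit:
getcontext().prec = 50
print(Decimal(1) / Decimal(3))
# 0.33333333333333333333333333333333333333333333333333

getcontext().prec = 10
print(Decimal(2).sqrt())   # 1.414213562
```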
However, you can't easily control the accumulated error in a calculation without explicit algorithms that relate the desired precision of the result to the precision required of the operands and intermediate results.
As for hardware, CPU and memory capabilities are improving rapidly. Calculators running in a modern OS environment can take advantage of modern CPU architectures (e.g. calculator apps on mobile devices). However, everyday handheld calculators built on special-purpose processors may not offer the flexibility or power of those environments.