I have the definitions of computable (from my class notes) over here:
$f$ is computable if there is a computable function $\hat{f}:\Bbb N \times \Bbb N \to \Bbb Q$ such that, for all $n,r \in \Bbb N$, $$ \left |\hat{f}(n,r) - f(n) \right|\leq2^{-r}. $$
I come from a non-math background, so could someone break it down for me? I understand what $\Bbb N$ and $\Bbb Q$ are. I am just mainly confused with the expression above.
Let's consider $f(n) = \frac{7n}5$. Whatever “computable” means, this function should have that property. For example, if we put in $n=2$ the computer can print out
    2.8

which is the exact answer.

Now let's consider $f(n) = \frac{7n}3$. Here if we put in $n=2$ the answer would look like

    4.6666…

and the computer can't emit an exact answer. But that is really just a quibble, right? A notion of computability that depended on these unimportant representation details, and which said that $\frac{7n}5$ was computable but $\frac{7n}3$ wasn't, would be silly. In practice we don't care that $\frac{14}5$ is a terminating decimal and $\frac{14}3$ isn't. But why not?

Decimal approximations can't perfectly represent most numbers, even simple numbers like $\frac{14}3$. But it doesn't matter, because they can represent all real numbers with arbitrarily good accuracy. If we want to know the value of $\frac{14}3$ with an error of no more than $\frac1{10}$, we can do that; it's

    4.6

If we want the error to be less than $\frac1{1000000}$, we can do that with decimal fractions too; it's

    4.6666666

We'll say that some mathematical function $f$ is computable if there is a corresponding computation, $\hat f$, which can compute $f$ to any desired degree of accuracy. You tell it how many decimal places of accuracy you need; that's $r$. Then you tell it to calculate $f(n)$. It may not be able to calculate $f(n)$ exactly, but it can calculate it accurately to $r$ places, so that its output differs from the actual value of $f(n)$ by no more than $10^{-r}$. Then if you decide you need better accuracy later, you give it a different $r$ and it can work harder and come up with a better answer.
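As a concrete sketch of this idea in Python (the function name and rounding strategy are my own illustration, not part of the definition), here is an approximator $\hat f$ for $f(n) = \frac{7n}3$ that rounds the true value down to $r$ decimal places, so its error is below $10^{-r}$:

```python
from fractions import Fraction

def f_hat(n, r):
    # Hypothetical approximator for f(n) = 7n/3: scale by 10**r,
    # take the floor, and scale back. The result is a rational
    # number within 10**-r of the true value.
    return Fraction((7 * n * 10**r) // 3, 10**r)

print(f_hat(2, 1))  # 23/5, i.e. 4.6
print(f_hat(2, 7))  # a fraction equal to 4.6666666
```

Asking for a bigger $r$ just makes the same program do more work and return a closer rational.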
So we say $f$ is computable if there is an approximation function $\hat f$ that computes $f$ to any desired degree of accuracy:
$$\lvert \hat f(n, r) - f(n) \rvert \le 10^{-r}.$$
$f$ is the function we're approximating, and $\hat f$ is the computer program that approximates it: given an argument $n$ and a precision $r$, it computes an approximate value of $f(n)$ that is accurate to at least $r$ decimal places.
With this definition all the usual arithmetic functions are computable. Observe that while the output of $\hat f$ is required to be a rational number, the output of the original $f$ need not always be rational. For example, $f(n) = \sqrt n$ is computable. What happens when you try to compute $\sqrt 2$? There's no way to write out the "exact" answer; what would that even mean? But it doesn't matter, because all we ever really want is to compute approximate square roots of $2$ such as
    1.4

or

    1.4142

or

    1.41421356237309504880

or whatever. People have been computing square roots in that sense for thousands of years.

Also with this definition, the output format of the algorithm can be left unspecified. Does $\hat f$ print out its result in decimal digits or in binary digits? It shouldn't matter. What if it wants to print out a rational approximation like $\frac{202030}{142857}$? That should also be acceptable. The definition of computability doesn't care; it just says that $\hat f$ has to be able to emit some rational number, in some form, that is sufficiently close to the correct answer.
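To make the square-root example concrete, here is a sketch (names of my own choosing) of an approximator for $f(n) = \sqrt n$ using plain bisection. Note that it emits a rational number, as the definition requires, even though $\sqrt 2$ itself is irrational:

```python
from fractions import Fraction

def sqrt_hat(n, r):
    # Hypothetical approximator for f(n) = sqrt(n): bisect the
    # interval [0, max(n, 1)] until it is narrower than 10**-r.
    # The invariant lo**2 <= n < hi**2 guarantees sqrt(n) lies
    # in [lo, hi], so lo is within 10**-r of sqrt(n).
    eps = Fraction(1, 10**r)
    lo, hi = Fraction(0), Fraction(max(n, 1))
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if mid * mid <= n:
            lo = mid
        else:
            hi = mid
    return lo

print(sqrt_hat(2, 4))  # a rational number within 10**-4 of sqrt(2)
```

The output is some exact fraction, not a decimal string, and that's fine: the definition only asks for *some* rational that is close enough.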
Instead of measuring accuracy in decimal digits, the definition you have measures it in binary digits, so it has $2^{-r}$ instead of $10^{-r}$. But there is no conceptual difference, and the two definitions are equivalent. (Proving this is a good exercise.)
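One direction of that equivalence can be sketched directly (again with hypothetical names): given an approximator accurate to within $2^{-r}$, you get one accurate to within $10^{-r}$ just by asking for more binary digits, since $2^{-4r} = 16^{-r} \le 10^{-r}$:

```python
from fractions import Fraction

# A hypothetical binary-accuracy approximator for f(n) = 7n/3:
# its output is within 2**-r of the true value.
def f_hat_binary(n, r):
    return Fraction((7 * n * 2**r) // 3, 2**r)

# To guarantee decimal accuracy 10**-r, it is enough to request
# 4r binary digits of accuracy, because 2**-(4*r) <= 10**-r.
def f_hat_decimal(n, r):
    return f_hat_binary(n, 4 * r)
```

The other direction is even easier: an approximator accurate to within $10^{-r}$ is already accurate to within $2^{-r}$, since $10^{-r} \le 2^{-r}$.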
If any part of this doesn't make sense, please leave a comment.