In his book "META MATH, the quest for Omega", chapter V, section "A [formal axiomatic system] cannot determine infinitely many bits of an uncomputable real", Gregory Chaitin argues that a formal axiomatic system cannot determine infinitely many bits of an uncomputable real. I do not see how the argument follows, and I will put in bold the statements that I find doubtful.
The argument, in my words. For each formal axiomatic system, there is an algorithm $A$ that computably enumerates all of its theorems. Let $r$ be an uncomputable real. Go through the list of theorems generated by $A$, checking whether each of them determines some bit of $r$'s binary expansion. If a theorem does determine a bit of $r$, we then know what that bit is. Suppose the system is able to give us infinitely many bits of $r$. **This means we are missing only a finite number of them.** (That is, we are close to proving that $r$ is computable, which would violate our assumption.) **So we keep a finite table on the side in which we record which bits the formal axiomatic system misses and what their values are**, showing we can compute $r$, which is a contradiction.
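To make the intended contradiction concrete, here is a minimal Python sketch (my own illustration, not from the book) of the "enumerator plus finite table" procedure. The generator of `(position, bit)` pairs and the `exceptions` table are hypothetical stand-ins for the FAS's theorem enumerator and the finite side table:

```python
def bit_of_r(n, enumerate_determined_bits, exceptions):
    """Compute bit n of r, assuming:
    - enumerate_determined_bits() yields (position, bit) pairs, one for
      every position the FAS eventually settles;
    - exceptions is a finite dict {position: bit} covering the bits the
      FAS misses (the "little table on the side").
    If these assumptions held, r would be computable, the contradiction
    the argument is after.
    """
    if n in exceptions:                 # bit missed by the FAS: read the table
        return exceptions[n]
    for pos, bit in enumerate_determined_bits():
        if pos == n:                    # scan theorems until bit n turns up
            return bit

# Toy check: a fake "FAS" that settles every bit except positions 0 and 3.
r_bits = [1, 0, 1, 1, 0, 1, 0, 1]

def enum():
    for pos, bit in enumerate(r_bits):
        if pos not in (0, 3):
            yield (pos, bit)

exceptions = {0: 1, 3: 1}
print(all(bit_of_r(n, enum, exceptions) == r_bits[n] for n in range(8)))
```

Note that the search loop only terminates for a bit $n$ that the FAS actually settles; whether the missed positions can always be patched by a *finite* table is exactly what Questions 1 and 2 below dispute.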
Question 1. Isn't it possible for us to discover infinitely many bits of the number and still be missing infinitely many bits? For example, if someone gave us all the bits at the even positions $2, 4, 6, \dots, 2n, \dots$, we would have an infinite number of bits, but we would still miss all the bits at the odd positions, of which there are also infinitely many.
Question 2. If the formal axiomatic system misses some bits, how would we know what their values are? Who would tell us? He totally lost me on that one.
In his words. The problem is that if we are using a Hilbert/Turing/Post FAS [formal axiomatic system], as we saw in Chapter II there has got to be an algorithm for computably enumerating all the theorems, and so if we can prove what all the bits are, then we can just go ahead and calculate $r$ bit by bit, which is impossible. To do that, you would just go through all the theorems one by one until you find the value of any particular bit of $r$. In fact, the FAS has got to fail infinitely many times to determine a bit of $r$, otherwise we could just keep a little (finite) table on the side telling us which bits the FAS misses, and what their values are, and again we'd be able to compute $r$, by combining the table with the FAS, which is impossible. So our study of transcendentals now has a new ingredient, which is that we're using probabilistic methods. This is how I get a specific uncomputable real $r$, not the way that Turing originally did it, which was using Cantor's diagonal method.