What exactly does it mean to say that a real number $x$ is computable? I can think of two reasonable definitions but I am not sure whether or not they are equivalent:
1) There is an algorithm which, given $n$, outputs the $n^{\text{th}}$ bit of the binary representation of $x$.
2) There is an algorithm which, given $n$, outputs a rational number $q$ with $\vert{q-x}\vert \le \frac{1}{n}$.
The troublesome case is the following. Let $z$ be the reciprocal of the length of the shortest proof of falsehood in ZFC if ZFC is inconsistent, and $z = 0$ if ZFC is consistent, and let $x = 1 - z$. So either $x=1$ or $x = 0.1111...0...$. Seemingly the second definition is satisfied but not the first. Under the second definition, given $n$ the program can search for a contradiction in ZFC among all proofs of length at most $n$ and then output a sufficiently precise approximation whether or not a contradiction is found. Under the first definition, we cannot even determine the first bit, unless ZFC is actually inconsistent. Is this reasoning that the definitions are not equivalent correct? If so, which definition (perhaps neither) is standard, and how can the argument be stated more clearly? If the definitions are in fact equivalent, why?
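The approximation procedure described above can be sketched concretely. Here `shortest_contradiction_up_to` is a hypothetical stand-in for the (in principle mechanical) search over all ZFC proofs of length at most $n$; in this sketch it is stubbed to return `None`, i.e. it assumes no contradiction is found.

```python
from fractions import Fraction

def shortest_contradiction_up_to(n):
    """Hypothetical oracle: search all ZFC proofs of length <= n and return
    the length of the shortest proof of falsehood, or None if none exists.
    Stubbed here under the assumption that no contradiction is found."""
    return None

def approximate(n):
    """Return a rational q with |q - x| <= 1/n, where x = 1 - z."""
    ell = shortest_contradiction_up_to(n)
    if ell is not None:
        # A contradiction of length ell <= n exists, so x = 1 - 1/ell exactly.
        return 1 - Fraction(1, ell)
    # No contradiction of length <= n: either x = 1, or x = 1 - 1/ell with
    # ell > n, in which case |1 - x| = 1/ell < 1/n. So q = 1 works either way.
    return Fraction(1)
```

The point of the sketch is that the output is within $\frac{1}{n}$ of $x$ in both branches, which is why the second definition is satisfied without deciding the consistency of ZFC.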
The number you describe is rational, so it is certainly computable by either of your definitions: some fixed program outputs its bits (and some fixed program outputs approximations to it), even though we don't know which program that is. Computability only requires that such a program exist, not that we can identify it.
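To make the rational case concrete, here is a short sketch (in Python, my own illustration rather than anything from the question) of why any rational $x \in [0,1)$ satisfies the first definition: its $n$-th binary bit is $\lfloor 2^n x \rfloor \bmod 2$, which exact integer arithmetic computes directly.

```python
from fractions import Fraction

def nth_bit(x: Fraction, n: int) -> int:
    """n-th bit (n >= 1) after the binary point of a rational x in [0, 1)."""
    # floor(x * 2^n) mod 2 extracts the n-th binary digit of x.
    return ((x.numerator << n) // x.denominator) % 2

# Example: 2/3 = 0.101010... in binary.
bits = [nth_bit(Fraction(2, 3), n) for n in range(1, 7)]
```

So for the $x$ of the question, one of countably many such fixed programs (one per candidate rational value) is correct; not knowing which one is irrelevant to the definition.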