While thinking about arbitrary-precision arithmetic, I had the following thought:
Definition 0. An implementation of a real number is a method $r$ of turning any $q \in \mathbb{Q}_{>0}$ into a pair of rational numbers $rq \in \mathbb{Q}^2$ such that some axioms hold, including the following:
$(rq)_0 \leq (rq)_1$
$(rq)_1 - (rq)_0 \leq q$
Further axioms (for instance, that any two of the intervals $[(rq)_0, (rq)_1]$ intersect) ensuring that exactly one entity that we would usually think of as a real number is pinned down in this way.
Definition 1. A real number is an equivalence class of implementations, modulo an appropriate notion of equivalence.
Definition 2. A real number $x$ is computable iff it contains an implementation $r : \mathbb{Q}_{>0} \rightarrow \mathbb{Q}^2$ that is computable.
I think that this viewpoint (and similar approaches) could have substantial pedagogical advantages over the usual definition by Dedekind cuts. One advantage is that our definition of the phrase 'real number' only needs to be tinkered with slightly before we can define the phrase 'computable real number.' Another advantage is that it suggests at least one approach to implementing arbitrary-precision arithmetic. Long story short, I'd like to see how this pans out in practice.
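To make the "approach to implementing arbitrary-precision arithmetic" concrete, here is a minimal sketch in Python of Definition 0's interface: an implementation is a function taking a positive rational $q$ and returning a rational pair $(lo, hi)$ with $lo \leq hi$ and $hi - lo \leq q$. The names `from_rational`, `add`, and `sqrt2` are my own, not from the source; this is an illustration of the idea, not a full library.

```python
from fractions import Fraction

def from_rational(a):
    """The exact rational a, as a (trivial) implementation:
    every requested interval is the degenerate interval [a, a]."""
    a = Fraction(a)
    return lambda q: (a, a)

def add(r, s):
    """Sum of two implementations. Splitting the error budget q in half
    for each summand keeps the output interval's width at most q."""
    def t(q):
        (a, b), (c, d) = r(q / 2), s(q / 2)
        return (a + c, b + d)
    return t

def sqrt2(q):
    """sqrt(2) by bisection on [1, 2]: maintain lo^2 <= 2 <= hi^2
    and narrow until the interval's width is at most q."""
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > q:
        mid = (lo + hi) / 2
        if mid * mid <= 2:
            lo = mid
        else:
            hi = mid
    return (lo, hi)

# Query sqrt(2) + 1/2 to within 1/1000.
x = add(sqrt2, from_rational(Fraction(1, 2)))
lo, hi = x(Fraction(1, 1000))
```

Note the pattern in `add`: operations on implementations work by deciding how much precision to request from their inputs, which is exactly the bookkeeping an arbitrary-precision library would have to do.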
Question. Do any sources define real numbers and computable real numbers almost simultaneously, and then proceed to develop the theory of each of them more-or-less side-by-side, preferably at a level that a clever undergraduate could understand?