I apologize if this question sounds too philosophical. I am reading Turing's paper on computability, and it got me thinking:
Why do we bother defining the mysterious "undefined things" in between the computable numbers on the real number line? How important are they?
I agree that it is fine to define anything and work with any structure we like. But undoubtedly, some structures are more interesting than others.
Does distinguishing between computable and uncomputable real numbers generate any important results in mathematics?
(Edit: Thank you for the answers so far. I understand that we are not forced to distinguish computable from uncomputable numbers within the set of real numbers. I am curious what happens if we do make that distinction, in any way. Has anyone investigated this?)
Uncomputability as a phenomenon occurs throughout mathematics. Although theoretical computability may seem a very abstract subject, it does have concrete applications.
For example, the negative solution to Hilbert's 10th problem shows that there is no algorithm to determine whether a multivariate polynomial equation over the integers has an integer solution. This means that nobody is spending time trying to find such an algorithm, because we know that there is none. In this way, theoretical computability provides a "backstop" for other areas of algorithmic mathematics.
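To see the asymmetry, note that the problem is semi-decidable: a brute-force search will confirm a solution whenever one exists, but it can never halt with a definitive "no". A minimal Python sketch of that search (the function name and setup are mine, just for illustration):

```python
from itertools import count, product

def search_integer_solution(p, num_vars):
    """Enumerate integer tuples in growing boxes, looking for a zero of p.

    Returns a solution tuple if one exists; if the equation has no
    integer solution, this loop runs forever -- and by the negative
    answer to Hilbert's 10th problem, no algorithm can always detect
    that case and halt with "no solution".
    """
    for bound in count(0):  # search the box [-bound, bound]^num_vars
        for xs in product(range(-bound, bound + 1), repeat=num_vars):
            if p(*xs) == 0:
                return xs

# Example: x^2 + y^2 - 25 = 0 does have integer solutions, so the
# search terminates and returns one of them.
sol = search_integer_solution(lambda x, y: x**2 + y**2 - 25, 2)
print(sol)
```

Replacing the polynomial with one that has no integer solution (say, x^2 - 2) makes the call loop forever, and the undecidability result says no general-purpose test can screen such inputs out in advance.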
There is also a research field of "computable analysis", which studies how much of real analysis can be carried out with computable real numbers and computable functions. Of course there are restrictions, but by adding suitable hypotheses we recover versions of many classical theorems. For example, the intermediate value theorem continues to hold for computable continuous functions and computable real numbers: a computable continuous function that changes sign on an interval with computable endpoints has a computable root. There are also theorems about derivatives and integrals of computable continuous functions, such as this question on MathOverflow and this paper by Ning Zhong.
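To make the objects of that field concrete: one common way to present a computable real x (a sketch of the idea, not the exact formalism any one textbook uses) is as a program that, given n, outputs a rational within 2^-n of x. Here is such a program for √2, using only exact integer arithmetic:

```python
from fractions import Fraction
from math import isqrt

def sqrt2(n):
    """Return a rational within 2^-n of sqrt(2).

    This is one presentation of a computable real: a program mapping
    a requested precision n to a rational approximation.  Using
    math.isqrt keeps every step exact integer arithmetic, so the
    error bound is provable, not floating-point luck.
    """
    scale = 1 << (n + 1)          # work at precision 2^-(n+1)
    # isqrt(2 * scale^2) = floor(scale * sqrt(2)), so the quotient
    # below lies in [sqrt(2) - 2^-(n+1), sqrt(2)].
    return Fraction(isqrt(2 * scale * scale), scale)

approx = sqrt2(20)
print(float(approx))  # close to 1.41421356...
```

A real is then uncomputable exactly when no single program can answer every precision request like this, which is the distinction the question asks about.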