Consider the decimal expansion of a rational number. It either terminates, or eventually repeats a finite block of digits forever. Thus any rational number can be expressed with a finite amount of information.
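To make this concrete, here is a minimal Python sketch (names are my own) that performs long division and detects the repeating block: a remainder can only take finitely many values, so once one recurs the digits must cycle.

```python
def decimal_expansion(p, q):
    """Long division of p/q (0 < p < q): returns (pre-period digits, repeating block)."""
    digits = []
    seen = {}                      # remainder -> index where it first appeared
    r = p % q
    while r and r not in seen:
        seen[r] = len(digits)
        r *= 10
        digits.append(r // q)      # next decimal digit
        r %= q
    if r == 0:
        return digits, []          # expansion terminates
    start = seen[r]                # remainder repeated: digits cycle from here
    return digits[:start], digits[start:]
```

For example, `decimal_expansion(1, 7)` yields an empty pre-period and the block `[1, 4, 2, 8, 5, 7]` (i.e. $1/7 = 0.\overline{142857}$), while `decimal_expansion(1, 4)` yields `[2, 5]` with no repeating block.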
The decimal expansion of an irrational number, on the other hand, neither terminates nor repeats, and so cannot be expressed in a finite amount of information.
Is this related to the distinction between countable and uncountable infinities? My reasoning is that if a single irrational number takes a countably infinite amount of information to write in decimal form, then the set of all irrational numbers must be 'doubly' infinite, that is, uncountable. A single rational number, on the other hand, takes only a finite amount of information to write (as does a natural number), so the set of rationals is countably infinite.
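The countability of the rationals can be exhibited directly: walk the grid of pairs $(p, q)$ along diagonals $p + q = 2, 3, 4, \ldots$, skipping duplicates. A small illustrative sketch (function name is my own):

```python
from fractions import Fraction

def enumerate_rationals(n):
    """Return the first n positive rationals, walking diagonals p + q = 2, 3, ...
    of the (numerator, denominator) grid and skipping duplicates like 2/4 = 1/2."""
    out, seen = [], set()
    s = 2                          # current diagonal: p + q = s
    while len(out) < n:
        for p in range(1, s):
            f = Fraction(p, s - p)
            if f not in seen:
                seen.add(f)
                out.append(f)
                if len(out) == n:
                    break
        s += 1
    return out
```

Every positive rational appears at some finite position in this list, which is exactly what it means for the set to be countable.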
A problem I can see is that it can be argued that some irrational numbers can in fact be expressed in a finite amount of information (e.g. $\sqrt{2}$). But I can't help but feel there is some link here. (Forgive me if this is all a bit imprecise, I'm not a mathematician.)
What you are suggesting is very similar to some existing ideas, including computable numbers and constructible numbers, or more generally definable numbers. In fact, "expressed in some finite amount of information" is not a terrible way to informally describe a definable number. The hard part is working out what kind of language you can use to define your numbers. There's a classic paradox (the Berry paradox) about the set of positive integers that can be expressed in fewer than, say, twenty words, but then "the smallest positive integer that cannot be expressed in fewer than twenty words" clearly belongs to that set.
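Your $\sqrt{2}$ example fits the computable-number picture exactly: although its decimal expansion never repeats, a fixed finite program produces as many of its digits as you ask for. A sketch using Python's integer square root (the scaling trick here is mine):

```python
from math import isqrt

def sqrt2_digits(n):
    """First n decimal digits of sqrt(2) after the point.

    floor(sqrt(2 * 10**(2n))) equals sqrt(2) scaled up by 10**n and truncated,
    so its decimal representation is '1' followed by the first n digits."""
    return str(isqrt(2 * 10 ** (2 * n)))[1:]
```

So the number itself is infinite as a digit string, but its *description* — this short program — is finite, which is why $\sqrt{2}$ counts as definable.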
If we're more rigorous, and constrain ourselves to using formal languages to identify our sets of definable numbers, then we do recover the result you're discussing, but we usually approach it from the opposite direction. That is, we start from the knowledge that the set of all real numbers is uncountable (thanks, for example, to Cantor's diagonalisation argument). Since we can show that any set of definable numbers is at most countably infinite, the set of undefinable numbers must still be uncountably infinite: removing a countably infinite set from an uncountably infinite one does not reduce its size.
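The heart of the diagonalisation argument can be sketched in a few lines: given any claimed enumeration of digit strings, build a string that differs from the $i$-th entry in its $i$-th digit, so it cannot appear anywhere in the list. An illustrative toy version (finite, for demonstration only):

```python
def diagonal_escape(rows):
    """Given digit strings (rows of a claimed enumeration of reals in [0, 1)),
    build a string differing from row i at position i: each diagonal digit
    is changed to '5', or to '4' if it was already '5'."""
    return "".join("5" if row[i] != "5" else "4" for i, row in enumerate(rows))
```

Because the result disagrees with every row somewhere, no list — however it was produced — can contain it, which is why no enumeration of the reals can be complete.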
This also means that it doesn't matter how hard you try to expand your language: if we find a way to construct numbers outside a given set of definable numbers, and append that construction to the language, we still only capture a countably infinite set, and so there will always be more numbers out there that can't be captured.