Covering all rationals in (0,1) by open intervals of total length epsilon. Is there any real in (0,1) not in the covering?


It seems we can cover each rational in (0,1) by a set of open intervals of decreasing lengths whose sum is arbitrarily small. So it looks like there are a lot of holes. But on the other hand, the rationals are dense in (0,1)... Does that mean the original covering would cover the whole of (0,1)?

Some assumption must be wrong...

Thanks in advance

Edit: following the example of pi/4, using its decimal expansion. I am covering each finite decimal truncation of this particular number with an open interval. Each rational in the set of truncations of this particular number has an interval of a given length. I can only imagine that the lengths of these intervals (the length is a function of the position inside the enumeration of the rationals) get too small, leaving out digits of the decimal expansion. But this situation must happen no matter what base you use for the "decimal" expansion...

Edit 2: I list all rationals in the interval (0,1). The first gets an open interval of length epsilon/2. The second, epsilon/4. The third,... How can I show the diagonal element is not covered by the open intervals linked to all the rationals?
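Following Edit 2, here is a quick numerical sketch (Python; the function names are my own) of such a covering: enumerate the rationals in (0,1) by increasing denominator, and give the k-th one an open interval of length epsilon/2^k, so that the total length stays below epsilon.

```python
from fractions import Fraction
from itertools import count

def rationals_01():
    """Enumerate the rationals in (0,1) without repeats: p/q in lowest terms,
    ordered by increasing denominator q."""
    for q in count(2):
        for p in range(1, q):
            frac = Fraction(p, q)
            if frac.denominator == q:   # skip p/q not already in lowest terms
                yield frac

def cover(eps, n_terms):
    """First n_terms intervals of the cover: the k-th rational (k = 1, 2, ...)
    gets an open interval of length eps / 2**k centered on it."""
    intervals = []
    gen = rationals_01()
    for k in range(1, n_terms + 1):
        q = float(next(gen))
        half = eps / 2**(k + 1)         # half-length, so the full length is eps / 2**k
        intervals.append((q - half, q + half))
    return intervals

intervals = cover(eps=0.1, n_terms=200)
total = sum(r - l for l, r in intervals)
# total is eps * (1 - 2**-200), i.e. strictly less than eps = 0.1
```

Note that no matter how many terms we take, the total length never reaches epsilon, which is exactly what makes the question puzzling.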


On BEST ANSWER

In my opinion this is significantly harder to do "fully explicitly" than one might hope. This is an informal point of course,$^1$ but the difficulty essentially comes from the following. Suppose I have a family of open intervals whose diameters sum to $4\over 5$ and I tell you that the first of these intervals is $({1\over 4}, {3\over 4})$, which has diameter $1\over 2$. The remaining length is ${4\over 5}-{1\over 2}={3\over 10}$: not enough to cover both of the remaining "sides" $[0,{1\over 4}]$ and $[{3\over 4}, 1]$ (total length $1\over 2$), but enough to cover either one of them (length $1\over 4$). So right now we don't know which side to look on for an omitted point - we have to enumerate more elements of our putative cover until we've "used up" enough length, and this could take a while.

The intuitively-simplest diagonal constructions don't have this "delay" feature, and so getting this argument right is actually a bit messier than one might expect. We can either use a significantly messier argument, which is reasonably explicit but annoying, or go with a snappier but more abstract one.


My personal favorite approach is a kind of middle ground.

  • Show that no finite cover of a closed interval by open intervals can have a sum-of-diameters less than the length (in the obvious sense) of that closed interval. This is a nice induction exercise:

Clearly the result holds for covers with just one open interval. So suppose it holds for covers with $\le n$ open intervals, and $U_1,...,U_n,U_{n+1}$ are open intervals which together cover $[a,b]$. One of them contains $b$; WLOG, $b\in U_{n+1}$. Pick some $x\in U_{n+1}\cap \bigcup_{1\le i\le n}U_i$, and consider the sub-intervals $[a,x]$ and $[x,b]$ and their respective covers $\{U_1,...,U_n\}$ and $\{U_{n+1}\}$ ... Note that $(x-a)+(b-x)=b-a$.
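Filling in the elided bookkeeping - assuming $x$ is chosen so that $\{U_1,...,U_n\}$ covers $[a,x]$ and $[x,b]\subseteq U_{n+1}$ - the inductive hypothesis applied to $[a,x]$ and the one-interval case applied to $[x,b]$ give:

```latex
\begin{gather*}
\sum_{i=1}^{n}\operatorname{diam}(U_i)\ \ge\ x-a,
\qquad
\operatorname{diam}(U_{n+1})\ \ge\ b-x,\\
\text{hence}\quad
\sum_{i=1}^{n+1}\operatorname{diam}(U_i)\ \ge\ (x-a)+(b-x)\ =\ b-a.
\end{gather*}
```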

  • Now suppose $\mathcal{U}$ is a (possibly infinite) family of open intervals. We want to show that either $\mathcal{U}$ does not cover $[0,1]$, or "$\mathcal{U}$ has lots of length" - specifically, there are finitely many $U_1,...,U_n\in\mathcal{U}$ whose diameters sum to $\ge 1$.

  • Say that an interval $[a,b]\subseteq [0,1]$ is good iff it is not covered by any finitely many elements of $\mathcal{U}$. If $[0,1]$ is not good, then finitely many elements of $\mathcal{U}$ cover it, and by the previous bullet their diameters sum to $\ge 1$ - so we're done.

  • OK, so now assume $[0,1]$ is good - this is the situation you're describing. Since the union of two finite sets is finite, whenever we cut a good interval in half at least one of the two halves must also be good, so by "repeated halving" we can recursively build a decreasing sequence of closed intervals $[a_1,b_1]\supset [a_2,b_2]\supset [a_3,b_3]\supset ...$ such that each $[a_i,b_i]$ is good and $b_i-a_i=2^{-i}$.

  • But then $\bigcap_{i\in\mathbb{N}}[a_i,b_i]$ is a singleton $\{x\}$. Can you show that $x$ is not in fact covered by $\mathcal{U}$?

If $x\in\bigcup \mathcal{U}$, then $x\in U$ for some $U\in\mathcal{U}$. But then we can find some $N$ large enough so that $[x-2^{-N}, x+2^{-N}]\subseteq U$. But then $[a_N,b_N]\subseteq U$, contradicting the goodness of $[a_N,b_N]$.
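To see the halving argument in action, here is a sketch (Python; the names are my own) run against a finite family of intervals. For a finite family, "good" simply means "not covered by the family", which a greedy left-to-right sweep can test directly:

```python
def witness(a, b, intervals):
    """Greedy sweep over the closed interval [a, b]: return None if the open
    intervals cover it, otherwise return a point of [a, b] no interval contains."""
    cur = a
    while cur <= b:
        # among the intervals containing cur, find the one reaching furthest right
        best = max((r for l, r in intervals if l < cur < r), default=None)
        if best is None:
            return cur              # no interval contains cur: an uncovered point
        cur = best                  # everything up to best is covered; continue there
    return None

def uncovered_point(intervals, depth=40):
    """Repeated halving: return a point of [0, 1] missed by the intervals,
    or None if they cover [0, 1].  Mirrors the 'good interval' argument:
    if [a, b] is good (not covered), at least one of its halves is good too."""
    if witness(0.0, 1.0, intervals) is None:
        return None                 # [0, 1] is not good: fully covered
    a, b = 0.0, 1.0
    for _ in range(depth):
        m = (a + b) / 2
        if witness(a, m, intervals) is not None:
            b = m                   # left half is good: keep it
        else:
            a = m                   # left half is covered, so the right half is good
    return witness(a, b, intervals)  # an exact uncovered point in the good interval

family = [(-0.01, 0.3), (0.31, 1.01)]
p = uncovered_point(family)         # some point of [0, 1] lying in neither interval
```

For an infinite family this sweep is exactly what's unavailable "fully explicitly": deciding which half is good may require enumerating arbitrarily many elements of the cover first.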

In terms of "unwinding" this into an explicit construction, the "which half of this good interval is also good?" step is hiding the "wait and see until we've run out of length" issue above.

As an aside, note that this argument doesn't involve the assumption that the cover $\mathcal{U}$ is countable! It's also not a proof by contradiction: we didn't start out by assuming that $\mathcal{U}$ was in fact a cover of $[0,1]$; we showed a general property of any family of open intervals (namely that either it has a finite subfamily whose sum of diameters is $\ge 1$ or it fails to cover $[0,1]$).


$^1$Actually, it's less informal than it may look! We cannot be as "simple and explicit" as one might hope: in technical language, there is a computable family of open sets whose diameters sum to $<1$ but which cover every computable point in $[0,1]$. This is an "analytical" consequence of the "combinatorial" fact that while every infinite finitely-branching tree has an infinite path, we can whip up an infinite computable subtree of $2^{<\mathbb{N}}$ (that is, a tree of finite binary sequences) with no computable path.

This is in contrast with Cantor's diagonal argument, which is in fact appropriately computable. So there really is a fundamental complexity present in "length-counting" arguments like the above which is not present in "bare" diagonalization.