Proof that $[0,1]$ is compact


I tried coming up with a proof of compactness of $[0,1]$ in $\mathbb{R}$ and thought of the following method. Please let me know if it is correct, or how it could be improved.

For any open cover of $[0,1]$ there exists an $\epsilon>0$ such that $[0,\epsilon)$ is contained in one open set of the cover. Using the least upper bound property of the reals, there exists $l_1$ such that $[0,l_1)$ is contained in one open set $U_1$, and for any $m>l_1$, $[0,m)$ is not contained in any single open set.

Now there exists some $l_2$ such that $[l_1,l_2)$ is contained in one open set $U_2$, and for any $m>l_2$, $[l_1,m)$ is not contained in any single open set.

This way one forms an increasing sequence of real numbers $(0,l_1,l_2,\ldots)$; being bounded above, this sequence must converge to some $l$.

If $l\ne1$, pick an $\epsilon$-ball around $l$ such that $(l-\epsilon, l+\epsilon)$ is contained in one open set. Since $l$ is the limit of the sequence $(0,l_1,l_2,\ldots)$, $(l-\epsilon, l+\epsilon)$ contains all but finitely many of the $l_n$.

Let $l_m$ be the first entry in the sequence $(0,l_1,l_2,\ldots)$ which belongs to $(l-\epsilon, l+\epsilon)$, and

form a new increasing sequence $(0,l_1,l_2,\ldots,l_m,l,\ldots)$ by the same method as before. This way one gets an increasing sequence not bounded by any $l_n<1$.

So there exists an increasing sequence $(0,l_1,l_2,\ldots)$ getting arbitrarily close to $1$, together with the corresponding sequence of open sets $(U_1,U_2,\ldots)$.

Since there exists one open set containing $(1-\epsilon, 1]$, all but finitely many of the $l_n$ are contained in that open set. So finitely many open sets $(U_1,U_2,\ldots,U_n)$, together with the open set covering $(1-\epsilon, 1]$, cover $[0,1]$.



On BEST ANSWER

A proof along your lines is possible, but you have to be greedier when choosing the $U_k$. Your algorithm could stall long before the right end is reached, and you would have to restart with no guarantee of success.

We are given a family ${\cal U}$ of open sets $U\subset{\mathbb R}$ that together cover the interval $[0,1]$. Put $x_0:=0$ and choose recursively points $x_k\in\>]0,1]$ as follows:

Assume that $x_0$, $x_1$, $\ldots$, $x_m$ have been chosen.

  • If $x_m=1$, stop.
  • If $x_m<1$, put $$x_{m+1}:=\sup\bigl\{x\leq 1\bigm| \exists\, U\in{\cal U}: \ [x_m,x]\subset U\bigr\}>x_m\ .$$ Since there is an open $U\in{\cal U}$ with $x_m\in U$ we can be sure that $x_{m+1}>x_m$.
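
For a cover consisting of open intervals, the recursion above can be simulated numerically. The following sketch is my own illustration (the names `next_point` and `greedy_subcover` and the sample cover are invented, not part of the answer). For $U=(a,b)$, the condition $[x_m,x]\subset U$ means $a<x_m$ and $x<b$, so the supremum contributed by $U$ is $\min(b,1)$.

```python
def next_point(x, cover):
    """One greedy step: return x_{m+1} = sup{ t <= 1 : [x, t] is contained
    in some U of the cover }, together with a set U attaining that sup.
    Here each U is an open interval (a, b)."""
    best = None
    for (a, b) in cover:
        if a < x < b:  # U must contain x_m for [x_m, t] to fit inside U
            t = min(b, 1.0)
            if best is None or t > best[0]:
                best = (t, (a, b))
    if best is None:
        raise ValueError(f"no set of the cover contains {x}")
    return best

def greedy_subcover(cover):
    """Run x_0 = 0, x_{m+1} = next_point(x_m, cover) until x_m = 1.
    Returns the chain of points and the finite subcover chosen en route."""
    x, chain, chosen = 0.0, [0.0], []
    while x < 1.0:
        x, U = next_point(x, cover)
        chain.append(x)
        chosen.append(U)
    return chain, chosen

# A concrete open cover of [0, 1] by three intervals:
cover = [(-0.1, 0.4), (0.3, 0.8), (0.7, 1.1)]
chain, chosen = greedy_subcover(cover)
print(chain)   # [0.0, 0.4, 0.8, 1.0]
print(chosen)  # [(-0.1, 0.4), (0.3, 0.8), (0.7, 1.1)]
```

Note that the interval chosen at step $m$ covers $[x_m, x_{m+1})$ and the next one contains $x_{m+1}$ itself, so the chosen sets together cover $[0,1]$ — the numerical analogue of the backward argument given below.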

I claim that this process will stop at a finite $m$. If not, consider the point $\xi:=\lim_{m\to\infty} x_m\leq1$. There is an open $U\in{\cal U}$ containing $\xi$, and therewith a point $x_m\in U$ with $x_m<\xi$. The inequalities $x_m<x_{m+1}<\xi$ then violate the choice of $x_{m+1}$, since $U$ contains a full neighborhood of $\xi$ and hence $[x_m,x]\subset U$ for some $x>x_{m+1}$.

We therefore may assume $x_m=1$ for some $m\geq1$. There is a $U_m\in{\cal U}$ covering $x_m=1$, and therewith an interval $J:=\>]1-\delta,1]$, $\>\delta>0$. By definition of $x_m$ we can then find a $U_{m-1}\in{\cal U}$ covering $[x_{m-1},x]$ for some $x\in J$. This $U_{m-1}$ will also cover an interval $J':=\>]x_{m-1}-\delta',x_{m-1}]$, $\>\delta'>0$. By definition of $x_{m-1}$ we can then find a $U_{m-2}$, such that $\ldots$, etcetera. Proceeding in this way we obtain a finite sequence of open sets $U_k\in{\cal U}$ $\>(m\geq k\geq0)$ which together cover the interval $[0,1]$.


When you say

form a new increasing sequence $(0,l_1,l_2,\ldots,l_m,\ldots)$ similar to the previous method.

you're sweeping quite a lot under the carpet. What if it keeps going wrong and you have to keep backtracking? In general, arguments that simply state "continue this process to infinity" are likely to contain hidden subtleties, so I would require the details to be quite a bit more explicit before I would feel convinced by such a proof.

I think your approach can actually be made to work, but doing so rigorously would require you to speak about transfinite ordinals. Not something one would want to do for an elementary real-analysis exercise.

Instead it would be simpler to cut through all of the piecewise step-by-step fog and jump directly to the end. Define

$$ B = \{ x\in[0,1] : [0,x] \text{ is covered by finitely many sets of the cover}\} $$ then set $b=\sup B$ and argue that

  1. $b\in B$
  2. $b$ cannot be less than $1$.
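
The two steps can be fleshed out along these lines (a sketch, writing $\mathcal U$ for the given cover):

```latex
% Sketch of the two claims about b = sup B.
\begin{itemize}
  \item[(1)] Some $U\in\mathcal U$ contains $b$, hence an interval
    $(b-\varepsilon,\,b+\varepsilon)$ for some $\varepsilon>0$.
    Since $b=\sup B$, there is an $x\in B$ with $x>b-\varepsilon$;
    a finite subfamily covering $[0,x]$ together with $U$ covers
    $[0,b]$, so $b\in B$.
  \item[(2)] If $b<1$, the same finite subfamily together with $U$
    also covers $[0,\,b+\varepsilon/2]$ (shrinking $\varepsilon$ so
    that $b+\varepsilon\leq 1$), so $b+\varepsilon/2\in B$,
    contradicting $b=\sup B$.  Hence $b=1$, and $[0,1]$ is covered
    by finitely many sets of $\mathcal U$.
\end{itemize}
```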