I tried coming up with a proof of the compactness of $[0,1]$ in $\mathbb{R}$ and thought of the following method. Please let me know if it is correct, or how it could be fixed.
For any open cover of $[0,1]$, the point $0$ lies in some open set of the cover, so there exists an $\epsilon>0$ such that $[0,\epsilon)$ is contained in one open set of the cover. Using the least upper bound property of the reals, there exists $l_1$ such that $[0,l_1)$ is contained in one open set $U_1$, and for any $m>l_1$, $[0,m)$ is not contained in one open set.
Now there exists some $l_2>l_1$ such that $[l_1,l_2)$ is contained in one open set $U_2$, and for any $m>l_2$, $[l_1,m)$ is not contained in one open set.
In this way one forms an increasing sequence of real numbers $(0,l_1,l_2,\ldots)$. Being increasing and bounded above by $1$, this sequence must converge to some $l$.
If $l\ne1$, pick an $\epsilon$-ball around $l$ such that $(l-\epsilon,l+\epsilon)$ is contained in one open set of the cover. Since $l$ is the limit of the sequence $(0,l_1,l_2,\ldots)$, the interval $(l-\epsilon,l+\epsilon)$ contains all but finitely many of the $l_n$.
Let $l_m$ be the first entry of the sequence $(0,l_1,l_2,\ldots)$ which belongs to $(l-\epsilon,l+\epsilon)$, and form a new increasing sequence $(0,l_1,l_2,\ldots,l_m,l,\ldots)$ by continuing the previous method from $l$. In this way one gets an increasing sequence not bounded by any $l_n<1$.
So there exists an increasing sequence $(0,l_1,l_2,\ldots)$ getting arbitrarily close to $1$, together with its corresponding sequence of open sets $(U_1,U_2,\ldots)$.
Since there exists one open set containing $(1-\epsilon,1]$, all but finitely many of the $l_n$ are contained in that open set. So finitely many open sets $(U_1,U_2,\ldots,U_n)$, together with the open set covering $(1-\epsilon,1]$, cover $[0,1]$.
A proof along your lines is possible, but you have to be more greedy when choosing the $U_k$. Your algorithm could stop short long before the right end is reached, and you would have to restart with no guarantee of success.
We are given a family ${\cal U}$ of open sets $U\subset{\mathbb R}$ that together cover the interval $[0,1]$. Put $x_0:=0$ and choose recursively points $x_k\in\>]0,1]$ as follows:
Assume that $x_0$, $x_1$, $\ldots$, $x_m$ have been chosen. If $x_m=1$ the process stops; otherwise put $$x_{m+1}:=\sup\bigl\{x\in\>]x_m,1]\>\bigm|\>[x_m,x]\subset U\ \text{for some}\ U\in{\cal U}\bigr\}\ .$$
I claim that this process will stop at a finite $m$. If not, the $x_m$ form an increasing bounded sequence; consider the point $\xi:=\lim_{m\to\infty} x_m\leq1$. There is an open $U\in{\cal U}$ covering $\xi$, hence an interval $\>]\xi-\delta,\xi]\subset U$, and therewith a point $x_m$ with $\xi-\delta<x_m<\xi$. Then $[x_m,\xi]\subset U$, so the inequalities $x_m<x_{m+1}<\xi$ violate the choice of $x_{m+1}$.
We therefore may assume $x_m=1$ for some $m\geq1$. There is a $U_m\in{\cal U}$ covering $x_m=1$ and therewith an interval $J:=\>]1-\delta,1]$ for some $\delta>0$. By definition of $x_m$ we then can find a $U_{m-1}\in{\cal U}$ covering $[x_{m-1},x]$ for some $x\in J$. This $U_{m-1}$, being open and containing $x_{m-1}$, will also cover an interval $J':=\>]x_{m-1}-\delta',x_{m-1}]$, $\>\delta'>0$. By definition of $x_{m-1}$ we then can find a $U_{m-2}$, such that $\ldots$, etcetera. Proceeding in this way we obtain a finite sequence of open sets $U_k\in{\cal U}$ $\>(m\geq k\geq0)$ which together cover the interval $[0,1]$.
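For a finite family of concrete open intervals, the greedy recursion above can be simulated directly. This is only a sanity-check sketch (the function `greedy_subcover` and the sample cover are my own illustrative choices, not part of the proof): from the current point $x$, jump to the farthest right endpoint reachable by a single interval containing $x$, playing the role of $x_{m+1}$.

```python
def greedy_subcover(intervals, lo=0.0, hi=1.0):
    """Greedily extract a finite subcover of [lo, hi] from a finite
    family of open intervals (a, b): from the current point x, pick
    the interval containing x that reaches farthest to the right."""
    chosen, x = [], lo
    while True:
        # open intervals (a, b) with a < x < b
        containing = [iv for iv in intervals if iv[0] < x < iv[1]]
        if not containing:
            raise ValueError(f"no interval contains {x}: not a cover")
        best = max(containing, key=lambda iv: iv[1])  # be greedy
        chosen.append(best)
        x = best[1]   # the sup reached so far, like x_{m+1} in the proof
        if x > hi:    # [lo, hi] is now covered
            return chosen

cover = [(-0.1, 0.3), (0.2, 0.6), (0.25, 0.35), (0.5, 0.9), (0.8, 1.1)]
print(greedy_subcover(cover))  # four of the five intervals suffice
```

For a finite family the right endpoints strictly increase, so the loop terminates; for an infinite cover it need not, which is exactly the possibility the limit argument with $\xi$ rules out.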