I have trouble understanding the motivation of this proof (not the proof itself).
It wants to use the property that every sequence in the compact set $Y$ has at least one convergent subsequence to prove that if an infinite collection of open sets $V_a$ covers $Y$, then it definitely has a finite subcover.
It defines $$r(y):=\sup\{r : B(y,r)\subseteq V_a\text{ for some }a\}$$ for each $y\in Y$ ($r$ is a real number, and $B(y,r)$ is the open ball centered at $y$ with radius $r$), and defines $r_o:= \inf\{r(y):y\in Y\}$. It then shows that each of the cases $r_o=0$ and $r_o>0$ violates the sequential compactness property.
I can understand why $r_o$ cannot exist, but I don't follow:
Why did Terence decide to define $r(y)$ in the first place, and why does the fact that $r_o$ does not exist imply that $V_a$ must have a finite subcover?
To me the proof reads like: if $Y$ is sequentially compact, then $r(y)$ and $r_o$ do such-and-such, then the set is not sequentially compact, contradiction, so the finite subcover exists. But how do $r(y)$ and $r_o$ relate to the finite subcover?
I understand that sequential compactness is equivalent to compactness, but I don't understand how this proof relates to it.
Hope I made my question clear.
I did the best I could; it is not an easy subject to learn on your own, so I tried to prepare good exercises. If you have any questions, you can ask in the comments.
Let $Y$ be a compact set and $(V_a : a\in I)$ an infinite open cover of $Y$.
Before we start, I believe it will be good if you try to solve these two exercises on your own.
Exercise 1: Cover $[0,1]$ by infinitely many open sets and find a subcover (just to see what's going on).
Exercise 2: Cover $[0,1]$ by infinitely many open sets $(V_n : n\in \mathbb{N})$, so that for every $m\in\mathbb{N}$ you can remove $m$ sets from $(V_n : n\in\mathbb{N})$ and still have a cover. (This is not easy, but you should try anyway.)
Now let me introduce a procedure where in each step we collect a subset of $(V_a : a\in I)$, in the hope of getting a finite subcover. If you solved Exercise 2, you know that this method cannot work as is, but we will modify it until it can only fail in a very particular case that cannot happen. If you did not solve Exercise 2 (you should!), you can still continue reading, as I will give an example below.
Let $J = \emptyset$; in each step of the following process I will add a set from $(V_a : a\in I)$ to $J$. Again, our hope is that this procedure will end after finitely many steps.
Take $y_0\in Y$ arbitrary. Since the $V_a$ cover $Y$, there exists some $V_{a_0}$ containing $y_0$. Insert $V_{a_0}$ into $J$.
So far, $J$ covers every $y\in V_{a_0}$, including of course $y_0$. If $J$ already covers everything, stop. Otherwise, there must exist some $y_1$ that was not previously covered (namely $y_1\in Y\backslash V_{a_0}$). Now find some $V_{a_1}$ with $y_1\in V_{a_1}$ and add $V_{a_1}$ to $J$, so now $J=(V_{a_0},V_{a_1})$ and it covers $V_{a_0}\cup V_{a_1}$. If this is everything, stop. Otherwise choose $y_2 \not\in V_{a_0}\cup V_{a_1}$ and continue this way...
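If it helps to see the procedure as code, here is a minimal sketch of my own (not from Tao's proof). I replace $Y$ by a finite grid approximating $[0,1]$ and the $V_a$ by open intervals, so that the loop is guaranteed to stop; for a genuine infinite $Y$, whether it stops is exactly the point at issue.

```python
from fractions import Fraction as F

def greedy_subcover(points, cover):
    """At each step, pick an uncovered point y_k and add to J an
    arbitrary (here: the first) cover element containing it."""
    J = []
    def covered(y):
        return any(lo < y < hi for (lo, hi) in J)
    while True:
        uncovered = [y for y in points if not covered(y)]
        if not uncovered:
            return J  # J is a finite subcover
        y = uncovered[0]
        # any V_{a_k} containing y_k will do at this naive stage
        J.append(next((lo, hi) for (lo, hi) in cover if lo < y < hi))

# toy instance: a grid standing in for Y = [0, 1], covered by
# open intervals of radius 3/10 around every grid point
points = [F(i, 100) for i in range(101)]
cover = [(y - F(3, 10), y + F(3, 10)) for y in points]
J = greedy_subcover(points, cover)
assert all(any(lo < y < hi for (lo, hi) in J) for y in points)
```

Because the "first element containing $y$" is an arbitrary choice, the resulting $J$ can be much larger than necessary; the clever choice discussed below fixes this.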
In Exercise 2, you found an example where the procedure can go on forever. Let me give you another exercise that should help you understand the good situation (when the procedure ends after finitely many steps).
Exercise 3: Take $Y=[0,1]$ and the collection $(B_y(c) : y\in Y)$ where $c$ is some positive constant and $B_y(c)$ denote the open ball of radius $c$ around $y$. Apply the above procedure and prove that it must end after finitely many steps (how many?).
If the procedure ends after finitely many steps, then this means that the $J$ we get at the end of the process is a finite subcover. We should therefore study when this procedure never ends.
To see that this procedure might be infinite, suppose that I take $Y=[0,1]$ and start with the collection $(V_a : a\in I)$ of all open subsets of $[0,1]$. Of course it should be very easy to find a finite subcover (we can just take $[0,1]$ itself, which is open in $[0,1]$), but instead we are going to apply the procedure in a very stupid way, ensuring that it will fail. Being stupid, I start by covering $y_0=0$ by an open neighborhood that is half as large as it could be, namely by $B_{\frac{1}{2}}(0)$. Then I pick $y_1=\frac{1}{2}$, which is not in $B_{\frac{1}{2}}(0)$, and cover it by $B_{\frac{1}{4}}(\frac{1}{2})$. Then I take $y_2=\frac{3}{4}$ and cover it by $B_{\frac{1}{8}}(\frac{3}{4})$, and so on... This procedure will never end.
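The stupid run has a closed form worth checking: at step $n$ we cover $y_n = 1 - \frac{1}{2^n}$ by the ball of radius $\frac{1}{2^{n+1}}$, whose right endpoint is $1 - \frac{1}{2^{n+1}} < 1$. A quick verification of my own, in exact arithmetic:

```python
from fractions import Fraction as F

# Step n of the "stupid" run: cover y_n = 1 - 1/2^n by the ball of
# radius 1/2^(n+1).  The right endpoint of that ball is 1 - 1/2^(n+1),
# so the union of the chosen balls never reaches the point 1.
for n in range(5):
    y = 1 - F(1, 2**n)        # y_0 = 0, y_1 = 1/2, y_2 = 3/4, ...
    r = F(1, 2**(n + 1))      # radius 1/2, 1/4, 1/8, ...
    assert y + r == 1 - F(1, 2**(n + 1)) < 1
    print(n, y, r, y + r)
```

So no matter how many steps we take, $1$ stays uncovered and the procedure runs forever.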
So we do not want to be stupid. How can we do that? Perhaps it's best to choose the largest open neighborhood containing $y_0$, whatever that means (this is not precise, but we will get close to it).
This answers your question about $r(y)$. Since we do not like general open neighborhoods, but do like open balls, it makes sense to define
$$r(y) = \sup \{r : B(y,r)\subseteq V_a \text{ for some } a\}.$$
This $r(y)$ measures the maximal radius of a ball centered at $y$ that is included in some element of our cover $(V_a:a\in I)$. Using $r(y)$, we can now choose (essentially) the largest element of the cover which contains $y$. Remark: what I said above is not precise, because there is a technicality: since we work with a supremum, it is not guaranteed that $B(y,r(y))\subseteq V_a$ for some $a$, but for any $\varepsilon>0$, $B(y,r(y)-\varepsilon)$ is contained in some $V_a$.
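To make $r(y)$ concrete, here is a toy computation of my own (with open intervals on the real line rather than a general metric space): for an interval $(lo, hi)$ containing $y$, the largest ball $B(y,r)\subseteq(lo,hi)$ has $r=\min(y-lo,\,hi-y)$, so $r(y)$ is the maximum of this quantity over the cover.

```python
from fractions import Fraction as F

def r_of(y, cover):
    # r(y) = sup of radii r with B(y, r) inside some cover element;
    # for a finite cover by intervals the sup is just a max
    return max(min(y - lo, hi - y) for (lo, hi) in cover)

cover = [(F(-1, 10), F(6, 10)), (F(4, 10), F(11, 10))]  # covers [0, 1]
grid = [F(i, 100) for i in range(101)]
print(min(r_of(y, cover) for y in grid))  # 1/10
```

Here $\inf_y r(y) = \frac{1}{10} > 0$ (attained at $y=0$, $\frac12$, $1$): this is the good case in which the clever procedure below succeeds.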
Now it is good for you to try and apply the same procedure as before, but use $r(y)$ to choose the elements of $J$ in a clever way. If you solved Exercise 3, you will see that if the radii $r(y)$ are bounded below by a constant $c>0$, then you can apply the procedure effectively. In other words, if $\inf_y r(y)\geq c$ for some constant $c>0$, the problem is solved.
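Here is what "do not be stupid" looks like in the toy setting of Exercise 3 (again my own sketch, not Tao's proof): for each uncovered point, pick the cover element that contains the largest ball around that point.

```python
from fractions import Fraction as F

def clever_subcover(points, cover):
    """Greedy subcover that covers each uncovered point by the cover
    element containing the largest ball around that point."""
    J = []
    def covered(y):
        return any(lo < y < hi for (lo, hi) in J)
    while True:
        uncovered = [y for y in points if not covered(y)]
        if not uncovered:
            return J
        y = uncovered[0]
        # "largest ball around y": maximize min(y - lo, hi - y)
        J.append(max((V for V in cover if V[0] < y < V[1]),
                     key=lambda V: min(y - V[0], V[1] - y)))

points = [F(i, 100) for i in range(101)]                # grid for [0, 1]
cover = [(y - F(3, 10), y + F(3, 10)) for y in points]  # Exercise 3, c = 3/10
print(len(clever_subcover(points, cover)))              # 4
```

Each chosen ball has radius (nearly) $r(y)\geq c$, so consecutive uncovered points are at least $c$ apart and the loop stops after roughly $1/c$ steps, in contrast to the naive first-fit choice.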
In theory, we might still be in a bad situation similar to Exercise 2. Namely, even though we always pick the large balls, we could still end up choosing smaller and smaller balls (that is the case $\inf_y r(y) = 0$). Recall the example with $[0,1]$, where we ended up choosing balls with radii $\frac{1}{2^n}\rightarrow 0$; as you can see there, we did not cover everything, because $1$ is never covered. It is therefore left to show that if we are clever and always choose the largest of balls, this situation cannot happen.
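To see the bad case $\inf_y r(y) = 0$ numerically, one has to leave compactness. A toy example of mine: on the non-compact set $(0,1)$ with the cover $V_n = (\frac{1}{n+2}, 1)$, one gets $r(y) = \min(y, 1-y)$, which tends to $0$ as $y\to 0$. (The rest of Tao's proof shows exactly that for compact $Y$ this cannot happen.)

```python
from fractions import Fraction as F

def r_of(y, cover):
    # largest radius of a ball around y inside some cover element
    return max(min(y - lo, hi - y) for (lo, hi) in cover)

# truncation of the infinite cover V_n = (1/(n+2), 1) of (0, 1)
cover = [(F(1, n + 2), F(1)) for n in range(999)]
for k in range(1, 5):
    y = F(1, 4**k)
    print(y, r_of(y, cover))  # r(y) shrinks with y: inf r(y) = 0
```

Near $0$ the only available cover elements start barely to the left of $y$, so the largest usable ball is tiny; this is the situation the contradiction argument rules out for compact $Y$.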