"Suppose $X_1, X_2, \ldots$ is a sequence of independent and identically distributed random variables with mean $\mu$ and variance $\sigma^2$. Then for any $\epsilon \gt 0$,
$$\overline{X}_n \to \mu \space \text{as} \space n \to \infty,$$ where "$\to$" in this context means "converges in probability to"."
Proof: Apply Chebyshev's inequality to $\overline{X}_n$, which has mean $\mu$ and variance $\frac{\sigma^2}{n}$:
So for $r \gt 0$, ($r \in \mathbb{R}$),
$P(|\overline{X}_n - \mu|\geq \frac{r\sigma}{\sqrt{n}})\leq \frac{1}{r^2}$
$\iff \space 1 - P(|\overline{X}_n - \mu|\lt \frac{r\sigma}{\sqrt{n}}) \leq \frac{1}{r^2}$
$\iff \space P(|\overline{X}_n - \mu|\lt \frac{r\sigma}{\sqrt{n}}) \geq 1-\frac{1}{r^2}$
"It follows that, given $\delta \gt 0$, if $n> \frac{\sigma^2}{\delta\epsilon^2}$ then $1\geq P(|\overline{X}_n - \mu|\lt \epsilon) \gt 1 - \delta$ thus proving the result."
My question is about the last line:
What exactly is happening here?
How does this prove the result?
(It looks a bit like an $\epsilon$-$\delta$ limit proof: it seems the author is finding an $N$ such that for all $n > N$, $\overline{X}_n$ is close to $\mu$ with high probability...)
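To convince myself the quoted claim is at least numerically plausible, here is a quick simulation sketch. The Exponential(1) distribution and the particular values of $\epsilon$ and $\delta$ are my own illustrative choices, not from the book:

```python
import random

# Sanity check of the quoted claim (illustrative assumptions: Exponential(1)
# data, eps = 0.1, delta = 0.05): if n > sigma^2 / (delta * eps^2),
# then P(|Xbar_n - mu| < eps) should exceed 1 - delta.
random.seed(0)

mu, sigma2 = 1.0, 1.0                 # Exponential(1) has mean 1 and variance 1
eps, delta = 0.1, 0.05
n_min = sigma2 / (delta * eps ** 2)   # the threshold from the proof (= 2000 here)
n = int(2 * n_min)                    # any n beyond the threshold should do

trials = 500
hits = 0
for _ in range(trials):
    xbar = sum(random.expovariate(1.0) for _ in range(n)) / n
    hits += abs(xbar - mu) < eps      # count trials where the mean is eps-close
p_hat = hits / trials

print(f"n = {n}: empirical P(|Xbar_n - mu| < {eps}) = {p_hat:.3f} (claim: > {1 - delta})")
```

The empirical probability comfortably beats $1-\delta$, consistent with the book's statement.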
Alright, so this is my best shot so far. Recall the definition of convergence in probability:
$$\forall \epsilon>0, \quad \lim_{n \to \infty} P(|\overline{X}_n - \mu|< \epsilon) = 1$$
Also recall the fact you presented above. With it, fix some $\delta>0$ and take $n > \frac{\sigma^2}{\delta \epsilon^2}$. Then
$$ P \left (|\overline{X}_n - \mu|< \frac{r \sigma}{\sqrt{n}} \right ) \leq P \left (|\overline{X}_n - \mu|<\frac{r\sigma}{\sqrt{\frac{\sigma^2}{\delta \epsilon^2}}} \right )=P \left (|\overline{X}_n - \mu|< r\sqrt{\delta} \epsilon \right ) $$
Since $r$ is arbitrary, we may choose $\delta = \frac{1}{r^2}$, so that $r\sqrt{\delta}\,\epsilon = \epsilon$ and $1 - \frac{1}{r^2} = 1 - \delta$. Then, because of Chebyshev's inequality, we have
$$1- \delta \leq P \left (|\overline{X}_n - \mu|< \frac{r\sigma}{\sqrt{n}} \right ) \leq P \left (|\overline{X}_n - \mu|< \epsilon \right ) \leq 1$$
Which is equivalent to,
$$1 \geq P \left (|\overline{X}_n - \mu|< \epsilon \right ) \geq 1- \delta$$
Now $\delta$ is just any positive real number, and this bound holds for every $n > \frac{\sigma^2}{\delta \epsilon^2}$. So for each $\epsilon > 0$ the probability $P(|\overline{X}_n - \mu| < \epsilon)$ can be squeezed as close to $1$ as we like by taking $n$ large enough, i.e. $\lim_{n \to \infty} P(|\overline{X}_n - \mu| < \epsilon) = 1$, which is exactly the definition of convergence in probability.
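As a numerical sanity check on the limit statement, here is a small simulation sketch. The Uniform(0,1) distribution and $\epsilon = 0.05$ are my own illustrative choices:

```python
import random

# Sketch of the limit itself (illustrative assumptions: Uniform(0,1) samples,
# eps = 0.05): as n grows, the empirical P(|Xbar_n - mu| >= eps) should fall
# toward 0, staying consistent with the Chebyshev bound sigma^2 / (n * eps^2).
random.seed(1)

mu, sigma2 = 0.5, 1.0 / 12.0      # Uniform(0,1) has mean 1/2 and variance 1/12
eps = 0.05
trials = 400

results = []
for n in [10, 100, 1000, 10000]:
    misses = 0
    for _ in range(trials):
        xbar = sum(random.random() for _ in range(n)) / n
        misses += abs(xbar - mu) >= eps   # count trials where the mean is eps-far
    p_hat = misses / trials
    bound = min(1.0, sigma2 / (n * eps ** 2))
    results.append((n, p_hat, bound))
    print(f"n = {n:>5}: P(|Xbar_n - mu| >= {eps}) ~ {p_hat:.3f}, Chebyshev bound {bound:.3f}")
```

The empirical tail probability shrinks toward $0$ with $n$, which is the convergence-in-probability statement in action.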
I am not sure if this is what you had in mind; let me know if there is an assumption in your question I have missed. Note that there are other ways to prove the WLLN which I think are easier in the long run.