I don't really understand exactly what the difference between the weak and strong law of large numbers is. The weak law says
\begin{align*} \lim_{n \rightarrow \infty} \mathbb{P}[\mid \bar{X}_n - \mu \mid \leq \epsilon ] = 1, \end{align*}
while the strong law reads as
\begin{align*} \mathbb{P}[\lim_{n \rightarrow \infty} \bar{X}_n = \mu ] = 1 \end{align*}
Isn't this a very subtle difference? Since I can choose my $\epsilon$ arbitrarily small, I can write for $n \rightarrow \infty$
\begin{align*} \mid \bar{X}_n - \mu \mid \leq \epsilon \\ - \epsilon \leq \bar{X}_n - \mu \leq \epsilon \\ \mu - \epsilon \leq \bar{X}_n \leq \mu + \epsilon \end{align*}
which, of course, means that as $\epsilon \approx 0$ this should be the same as $\lim_{n \rightarrow \infty} \bar{X}_n = \mu$. So, in what sense are these conditions actually "different"?
Regarding the weak law I'd like to know if these are actually the same:
\begin{align*} \lim_{n \rightarrow \infty} \mathbb{P}[\mid \bar{X}_n - \mu \mid > \epsilon] = \mathbb{P}[ \mid \lim_{n \rightarrow \infty} \bar{X}_n - \mu \mid > \epsilon] \end{align*}
I ask because the weak law is always written like the l.h.s., but the strong law always has $\lim_{n \rightarrow \infty}$ inside the probability operator.
The weak law of large numbers refers to convergence in probability, whereas the strong law of large numbers refers to almost sure convergence.
We say that a sequence of random variables $\{Y_n\}_{n=1}^{\infty}$ converges in probability to a random variable $Y$ if, for all $\epsilon>0$, $\lim_n P(|Y_n-Y|>\epsilon)=0$.
We say that a sequence of random variables $\{Y_n\}_{n=1}^{\infty}$ converges almost surely to a random variable $Y$ if $\lim_n Y_n(\omega)=Y(\omega)$ for almost every $\omega$, that is, $P(\{\omega:\lim_nY_n(\omega)=Y(\omega)\})=1$.
Almost sure convergence implies convergence in probability, but the converse is not true (which is why the laws of large numbers are called strong and weak, respectively). To see that the converse fails, consider independent discrete random variables $Y_n$ satisfying $P(Y_n=1)=1/n$ and $P(Y_n=0)=1-1/n$. Given $0<\epsilon<1$, $P(|Y_n|\leq\epsilon)=P(Y_n=0)=1-1/n\rightarrow 1$, so $Y_n\rightarrow 0$ in probability. However, since $\sum_n P(Y_n=1)=\infty$ and the $Y_n$ are independent, the second Borel–Cantelli lemma tells us that, for almost every $\omega$, $Y_n(\omega)=1$ for infinitely many $n$. Hence the sequence $\{Y_n\}$ does not converge almost surely.
Concerning your reasoning: the fact that $\lim_nP(|\bar{X}_n-\mu|\leq\epsilon)=1$ does not imply that $|\bar{X}_n-\mu|\leq\epsilon$ for all large $n$. In the example above, you do not have $|Y_n|\leq\epsilon$ for every large $n$, since $Y_n=1$ for infinitely many $n$.
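The counterexample can also be seen numerically. Below is a minimal simulation sketch (the variable names and the cutoff $N$ are my own choices, not part of the example above): it draws one sample path of independent $Y_n$ with $P(Y_n=1)=1/n$, and shows that while $P(Y_n=1)\to 0$ (convergence in probability to $0$), ones keep appearing at arbitrarily large indices, so the path itself never settles at $0$.

```python
import random

random.seed(0)

N = 200_000
ones = []  # indices n where Y_n = 1 on this sample path

# Independent Y_n with P(Y_n = 1) = 1/n and P(Y_n = 0) = 1 - 1/n.
for n in range(1, N + 1):
    if random.random() < 1 / n:
        ones.append(n)

# Convergence in probability: P(|Y_n| > eps) = 1/n -> 0, so any single
# Y_n with large index is almost certainly 0 ...
print("fraction of ones among the last 10000 indices:",
      sum(1 for n in ones if n > N - 10_000) / 10_000)

# ... yet since sum_n 1/n diverges, the second Borel-Cantelli lemma says
# Y_n = 1 occurs for infinitely many n on almost every path, so the
# largest observed index with Y_n = 1 keeps growing as N grows.
print("number of indices with Y_n = 1:", len(ones))
print("largest such index:", max(ones))
```

The expected number of ones up to $N$ is $\sum_{n\le N} 1/n \approx \ln N$, so it grows without bound even though the per-index probability vanishes; that is exactly the gap between the two modes of convergence.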