I'm far from the only person to have trouble grasping the difference between convergence in probability ($\lim_{n\to\infty}\Pr(|X_{n}-Y|>\varepsilon)=0$ for every $\varepsilon>0$) and the stronger almost-sure convergence ($\Pr(\lim_{n\to\infty}X_{n}=Y)=1$). A common layman's explanation of the difference goes as follows: for a sequence $(X_{n})_{n\in\mathbb{N}_{>0}}$, almost-sure convergence suggests that (with probability 1) there is a finite $N$ such that $X_{n}=Y$ for all $n\ge N$, while convergence in probability allows a non-zero probability that $X_{n}$ deviates from $Y$ for every finite $n$, though this probability at least converges to zero.
The explanation of convergence in probability seems consistent with its formal statement. However, the explanation of almost-sure convergence seems stronger than the formal statement, which concerns only the limit. On the other hand, if almost-sure convergence does NOT imply that, then I fail to see how it differs from convergence in probability.
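To make my confusion concrete, here is a quick simulation sketch of the standard textbook counterexample (my own illustration, not part of the explanation I quoted): independent $X_n$ with $\Pr(X_n=1)=1/n$ and $\Pr(X_n=0)=1-1/n$, with $Y=0$. Then $\Pr(|X_n-Y|>\varepsilon)=1/n\to 0$, so $X_n\to 0$ in probability, yet by the second Borel-Cantelli lemma $X_n=1$ infinitely often with probability 1, so $X_n$ does not converge to $0$ almost surely.

```python
import numpy as np

# Standard counterexample: independent X_n with P(X_n = 1) = 1/n, Y = 0.
# Converges in probability but not almost surely.
rng = np.random.default_rng(0)
paths, n_max, N = 2000, 10_000, 1_000

n = np.arange(1, n_max + 1)
# X[i, j] is True iff sample path i deviates from Y = 0 at time j + 1.
X = rng.random((paths, n_max)) < 1.0 / n

# Convergence in probability: at the single large time n_max,
# a deviation is rare (probability 1/n_max).
p_deviate_at_n_max = X[:, -1].mean()

# Failure of almost-sure convergence: most paths still deviate SOMEWHERE
# beyond time N, even for large N. (For these parameters the exact
# probability of a deviation in [N, n_max] is 1 - (N - 1)/n_max = 0.9.)
p_deviate_after_N = X[:, N - 1:].any(axis=1).mean()

print(p_deviate_at_n_max)  # small, near 1/n_max
print(p_deviate_after_N)   # large, near 0.9
```

So at any fixed large time a deviation is very unlikely, yet almost every sample path keeps deviating eventually, which is exactly the distinction I'm trying to pin down.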