I understand the difference between convergence in probability and almost sure convergence, and after searching online I found multiple counterexamples showing that the WLLN does not necessarily imply the SLLN. However, I am still failing to find any difference in the conditions of the two laws, as can be seen here.
The conditions, as far as I can see, are the same for both laws: independent and identically distributed random variables with finite mean $\mu$. Thus if the conditions of one law are satisfied, the conditions of the other are satisfied as well, and vice versa.
What am I missing? I am not looking for counterexamples, as I have found plenty of those. Thanks in advance.
There is no difference in the conditions of the two laws in your citation.
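Written out under those shared hypotheses ($X_1, X_2, \dots$ i.i.d. with finite mean $\mu$, and $\bar X_n = \frac{1}{n}\sum_{i=1}^n X_i$), the two laws differ only in their conclusions, that is, in the mode of convergence of the sample mean:

$$\text{WLLN:}\quad \lim_{n\to\infty} P\left(\left|\bar X_n - \mu\right| > \varepsilon\right) = 0 \quad \text{for every } \varepsilon > 0,$$

$$\text{SLLN:}\quad P\left(\lim_{n\to\infty} \bar X_n = \mu\right) = 1.$$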
There is, however, a difference in how necessary those conditions are for each law. For instance, the WLLN can be proved from weaker hypotheses: Example 0.0.1 of this relaxes the existence of a first moment, exhibiting a distribution for which the WLLN holds and the SLLN does not.
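For concreteness, here is a standard example of that phenomenon (possibly the same one as in the linked notes; treat the details here as supplementary rather than a restatement of that source). Take $X_1, X_2, \dots$ i.i.d. and symmetric with tail

$$P(|X_1| > x) \sim \frac{c}{x \log x} \quad (x \to \infty).$$

Then $n\,P(|X_1| > n) \sim c/\log n \to 0$, so Feller's weak law applies, and by symmetry the truncated means vanish, giving $\bar X_n \to 0$ in probability. At the same time $E|X_1| = \int_0^\infty P(|X_1| > x)\,dx = \infty$, so by the converse to Kolmogorov's SLLN, $\limsup_n |\bar X_n| = \infty$ almost surely: the WLLN holds and the SLLN fails.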
So the WLLN can be proven for some distributions without a first moment; it is just much easier to prove it from the same hypotheses as the SLLN. It is a standard "game" to find a set of hypotheses that makes a proof easy, and then relax them as much as possible, checking whether the proof, possibly with elaboration or new ideas, still goes through.
One way to see that this sort of thing is going on: Suppose you are in a universe in which the WLLN is an axiom. Is the proof of the SLLN any shorter? No (or at least, not appreciably). Now suppose you are in a universe in which the SLLN is an axiom. Is the proof of the WLLN any shorter? Yes, absolutely. This tells you that the proof of the WLLN has been "sloppy" and given up too much of the promise of its hypotheses. A typical example is that the weaker proof uses a cruder estimate than the stronger proof. One consequence is that the weaker proof still goes through with weaker hypotheses, and this is the case for the WLLN.
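As a footnote to the "cruder estimate" point: the textbook Chebyshev-style proof of the WLLN needs a finite variance, but the law itself only needs a finite mean. A quick simulation (just a sketch; the distribution, shape parameter, seed, and sample size are my own choices, not anything from the question or the citation) shows the sample mean settling near $\mu$ for a Pareto-type distribution with finite mean but infinite variance:

```python
import numpy as np

rng = np.random.default_rng(0)

# numpy's pareto(a) draws from the Lomax distribution with density
# a / (1 + x)^(a + 1) on x >= 0.  With shape a = 1.5 the mean is
# 1 / (a - 1) = 2, but the variance is infinite (it requires a > 2),
# so the easy Chebyshev proof of the WLLN does not apply here.
a = 1.5
mu = 1.0 / (a - 1.0)  # theoretical mean = 2.0

n = 500_000
samples = rng.pareto(a, size=n)
sample_mean = samples.mean()

print(f"theoretical mean: {mu}")
print(f"sample mean (n={n}): {sample_mean:.3f}")
```

With infinite variance the concentration is slower than the usual $1/\sqrt{n}$ rate, but the running mean still approaches $2$, which is the behavior the laws (proved from the finite-mean hypothesis alone) guarantee.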