Let $X_1, X_2, \dots , X_n, \dots$ be independent such that
\begin{equation*}
\mathbb{P}\left( X_{n}=-n\right) =\mathbb{P}\left( X_{n}=n\right) =\dfrac{1}{%
2n^{2}}\ \text{ and}\ \mathbb{P}\left( X_{n}=0\right) =1-\dfrac{1}{n^{2}}.
\end{equation*}
In the solution of the exercise, convergence in probability is shown as follows:
fix $\epsilon> 0 $; then for every $n \geqslant \epsilon$, $\mathbb P(|X_n|< \epsilon)=\mathbb P(X_n=0)=1-\frac{1}{n^2} \longrightarrow 1$ as $n\longrightarrow \infty$.
Why did they study the convergence of the sequence $\{X_n\}$ to $0$?
I think I have understood convergence in probability, but I am having trouble putting it into practice. If you have any advice to give me, I would be very grateful.
It is indeed not always easy to find the limit in probability of a sequence. A good way to build intuition is the following: if $(A_n)_{n\geqslant 1}$ is a sequence of events such that $\mathbb P(A_n)\to 0$, then for any sequence of random variables $(R_n)$, the sequence $\left(R_n\mathbf{1}_{A_n}\right)_{n\geqslant 1}$ converges to $0$ in probability.
In other words, if $X_n$ involves sets whose probability will go to $0$, these ones will play no role in the convergence in probability of $(X_n)$.
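(To see why this principle holds, note that for any $\varepsilon>0$ the event $\{|R_n\mathbf{1}_{A_n}|\geqslant\varepsilon\}$ can only occur on $A_n$, so
\begin{equation*}
\mathbb{P}\left( \left| R_n\mathbf{1}_{A_n}\right| \geqslant \varepsilon \right) \leqslant \mathbb{P}\left( A_n\right) \longrightarrow 0,
\end{equation*}
which is exactly the definition of $R_n\mathbf{1}_{A_n}\to 0$ in probability.)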
In the context of the sequence given in the opening post, we can write $X_n=n\mathbf{1}_{\{X_n=n\}}-n\mathbf{1}_{\{X_n=-n\}}$, and the events $\{X_n=n\}$ and $\{X_n=-n\}$ in the indicator functions each have probability $\frac{1}{2n^2}$, which goes to $0$.
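As a sanity check, one can also simulate the distribution above and watch $\mathbb P(|X_n|\geqslant\varepsilon)$ shrink like $1/n^2$. This is a quick Monte Carlo sketch, not part of the exercise; the function name `sample_X` and the parameter choices (`eps`, `trials`) are mine:

```python
import random

def sample_X(n, rng):
    # Draw one sample of X_n: P(X_n = n) = P(X_n = -n) = 1/(2 n^2),
    # and P(X_n = 0) = 1 - 1/n^2.
    u = rng.random()
    if u < 1 / (2 * n**2):
        return n
    elif u < 1 / n**2:
        return -n
    return 0

rng = random.Random(0)
eps, trials = 0.5, 200_000
est = {}  # empirical estimates of P(|X_n| >= eps)
for n in (1, 2, 5, 10):
    hits = sum(abs(sample_X(n, rng)) >= eps for _ in range(trials))
    est[n] = hits / trials
    print(f"n={n:2d}  P(|X_n| >= {eps}) ~ {est[n]:.5f}  (exact: {1/n**2:.5f})")
```

With $\varepsilon=0.5$ fixed, $\{|X_n|\geqslant\varepsilon\}=\{X_n\neq 0\}$ for every $n\geqslant 1$, so the empirical frequencies should track $1/n^2$.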