I was reading the answer of this question :
Convergence of measure sequences bounded by a finite measure
But I did not understand some parts of the proof that $\mu: \mathcal{A}\longrightarrow[0,\infty]$, i.e., that $\mu$ is nonnegative.
First: I did not understand why the $\epsilon$ in the definition of the limit is the same as the $\varepsilon$ in "there exists some $\varepsilon>0$ such that $\mu(A)+\varepsilon=0$".
Second: I also did not understand why "Thus for all $n>N$ we would have $\mu_n(A)<0$".
Could anyone explain those parts to me, please? Also, is there an easier way to prove this?
Since the OP assumes $\mu(A)<0$ for some $A$, we can choose $\epsilon_0 := -\mu(A) > 0$, so that $\mu(A)+\epsilon_0=0$. (I write $\epsilon_0$ to avoid confusion with the $\epsilon$ in the limit definition.)
Since $\lim_{n \to \infty} \mu_n(A) = \mu(A)$, by definition, for any $\epsilon>0$ there exists $N \in \mathbb{N}$ s.t for all $n>N$ , $|\mu_n(A)-\mu(A)|< \epsilon$.
i.e., $$-\epsilon < \mu_n(A)-\mu(A)<\epsilon$$ for all $n>N$.
Since $\epsilon>0$ is arbitrary, this holds in particular for the $\epsilon_0$ chosen above. Hence, $$-\epsilon_0 < \mu_n(A)-\mu(A)<\epsilon_0$$ for all $n>N$.
From the rightmost inequality, we have $\mu_n(A) < \mu(A) + \epsilon_0 = 0$ for all $n>N$, which is a contradiction, since $\{\mu_n\}$ is a sequence of (nonnegative) measures.
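The whole contradiction can be condensed into one chain of inequalities; the following display is just a compact restatement of the steps above, with $\epsilon_0 := -\mu(A)$:

```latex
% Suppose, for contradiction, that \mu(A) < 0 and set \epsilon_0 := -\mu(A) > 0.
% By the definition of the limit \mu_n(A) \to \mu(A), there is N such that
% |\mu_n(A) - \mu(A)| < \epsilon_0 for all n > N. Then:
\[
  \mu_n(A) \;<\; \mu(A) + \epsilon_0 \;=\; \mu(A) - \mu(A) \;=\; 0
  \qquad \text{for all } n > N,
\]
% which contradicts \mu_n(A) \ge 0, since each \mu_n is a measure.
```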