I’m thinking about two problems:
In $L^2([0,1],dx)$:
If $f_n\rightarrow f$ in $L^2$, then $f_n\rightarrow f$ in measure.
If $f_n\rightarrow f$ in $L^2$, then $f_n\rightarrow f$ almost everywhere.
So I know the definition of convergence in measure: $\lim_{n\rightarrow \infty}\mu(\{x\in X: \lvert f_n(x)-f(x)\rvert\geq\epsilon\})=0$ for all $\epsilon>0.$
So can I prove the first one by Cauchy–Schwarz: $\int\lvert f_n-f\rvert= \lVert f_n-f\rVert_1\leq\lVert f_n-f\rVert_2\cdot\lVert 1\rVert_2\rightarrow0$? Therefore $\lvert f_n-f\rvert\rightarrow0$ (since it's nonnegative), and this means it converges in measure?
But how is this different from convergence almost everywhere? Thanks.
Your proof is incorrect: $f_n\geq 0$ and $\int f_n\to 0$ do not imply $f_n\to 0$ a.e.
Let $f_{n,k}$ be the characteristic function of $[(k-1)/n,\,k/n]$ for $k=1,\ldots,n$ and $n\in\mathbb{Z}_+$, and rearrange them into a single sequence: $$g_1=f_{1,1},\ g_2=f_{2,1},\ g_3=f_{2,2},\ g_4=f_{3,1},\ g_5=f_{3,2},\ \ldots$$
It is easy to see that $g_n\to 0$ in $L^2$ (the norm $\lVert g_n\rVert_2^2$ equals the length of the underlying interval, which shrinks to $0$), but $g_n(x)\not\to 0$ for every $x$: each point is covered by one of the intervals at every scale $1/n$, so $g_n(x)=1$ infinitely often.
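A quick numerical sketch of this "typewriter" sequence (the helper names here are hypothetical; the enumeration matches the ordering above):

```python
# Sketch of the typewriter sequence: g_m = f_{n,k}, the indicator of
# [(k-1)/n, k/n], flattened as g_1=f_{1,1}, g_2=f_{2,1}, g_3=f_{2,2}, ...

def typewriter(m):
    """Map the flat index m >= 1 to the pair (n, k) with g_m = f_{n,k}."""
    n = 1
    while m > n:
        m -= n  # peel off the full block of n intervals
        n += 1
    return n, m  # what remains of m is k

def g(m, x):
    """Evaluate g_m at x: the indicator of [(k-1)/n, k/n]."""
    n, k = typewriter(m)
    return 1.0 if (k - 1) / n <= x <= k / n else 0.0

def l2_norm_sq(m):
    """||g_m||_2^2 = length of the supporting interval = 1/n."""
    n, _ = typewriter(m)
    return 1.0 / n

# The L^2 norms shrink to 0 ...
print([l2_norm_sq(m) for m in (1, 3, 6, 10)])  # 1, 1/2, 1/3, 1/4
# ... yet at a fixed point x the sequence keeps returning to 1:
x = 0.37
hits = [m for m in range(1, 56) if g(m, x) == 1.0]
print(hits)  # one hit inside every block n = 1, ..., 10
```

Since every block $n$ contributes an index $m$ with $g_m(x)=1$, the pointwise limit fails at every $x$, even though the norms tend to $0$.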
The first implication (convergence in norm implies convergence in measure) is correct, and it follows easily from Chebyshev's inequality.
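Spelled out, the Chebyshev step is a one-liner:
$$\mu\big(\{x:\lvert f_n(x)-f(x)\rvert\geq\epsilon\}\big)\;\leq\;\frac{1}{\epsilon^2}\int_0^1\lvert f_n-f\rvert^2\,dx\;=\;\frac{\lVert f_n-f\rVert_2^2}{\epsilon^2}\;\longrightarrow\;0,$$
so the measure of the bad set vanishes for every $\epsilon>0$.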