Convergence of means implies convergence in mean / weak consistency of a sequence of regression function estimates


Let $(X_n)$ be a sequence of positive random variables, and suppose that the sequence of expectations converges to zero, i.e., $\lim_{n\rightarrow\infty}\mathbb{E}[X_n]=0$. Does this imply that $(X_n)$ converges to zero in mean, i.e., that $\lim_{n\rightarrow\infty}\mathbb{E}[|X_n|]=0$? Is that correct?
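Written out, the step I have in mind is simply that positivity makes the absolute value redundant:

$$\mathbb{E}[|X_n|]=\mathbb{E}[X_n]\xrightarrow[n\to\infty]{}0,$$

since $X_n\ge 0$ almost surely implies $|X_n|=X_n$ almost surely.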

My problem: some books call a sequence $\{m_n\}$ of regression function estimates "weakly consistent" for a distribution of $(X,Y)$ if $\lim_{n\rightarrow\infty}\mathbb{E}\left[\int (m_n(x)-m(x))^2\,\mu(dx)\right]=0$, where $m$ is the regression function and the integral is the $L_2$ error, but other books define weak consistency as convergence in probability of the $L_2$ error to $0$.

Because the $L_2$ error is a positive random variable (is that true?), is it correct to say, using the simple result above, that the first definition of weak consistency implies convergence in mean to zero of the $L_2$ error, which in turn implies its convergence in probability to zero (i.e., the second definition)?

Best Answer

The $L_2$ error is indeed a positive random variable, so the first definition of weak consistency amounts to convergence in mean of the $L_2$ error to zero, and by Markov's inequality, $\mathbb{P}(X_n \geq \varepsilon) \leq \mathbb{E}[X_n]/\varepsilon$, convergence in mean implies convergence in probability. Note that convergence in mean is not (in general) implied by almost sure convergence.
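A quick Monte Carlo sketch of the implication "convergence in mean $\Rightarrow$ convergence in probability". The error sequence here is a hypothetical stand-in, not an actual regression estimate: we take $X_n = Z/n$ with $Z \sim \mathrm{Exp}(1)$, so $X_n \ge 0$ and $\mathbb{E}[X_n] = 1/n \to 0$, and check the Markov bound numerically.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sequence: X_n = Z / n with Z ~ Exp(1) plays the
# role of the L2 error of the n-th estimate.  X_n >= 0 and
# E[X_n] = 1/n -> 0, i.e. convergence in mean to zero.
def sample_errors(n, size=100_000):
    return rng.exponential(scale=1.0, size=size) / n

eps = 0.1
for n in (1, 10, 100):
    x = sample_errors(n)
    mean = x.mean()              # Monte Carlo estimate of E[X_n]
    tail = (x > eps).mean()      # empirical P(X_n > eps)
    markov = mean / eps          # Markov bound: P(X_n > eps) <= E[X_n]/eps
    print(f"n={n:4d}  E[X_n]~{mean:.4f}  P(X_n>eps)~{tail:.5f}  bound~{markov:.4f}")
```

As $n$ grows, the Markov bound $\mathbb{E}[X_n]/\varepsilon$ goes to zero, forcing the tail probability to zero as well.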

However, the usual hypotheses in machine learning, such as the boundedness of the functions involved and the fact that we are working with finite measures (probabilities), ensure, via dominated convergence, that almost sure convergence implies convergence in mean.
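A numerical sketch of why boundedness matters, using the classical counterexample (the variable names and bound $M$ are illustrative choices): with $U \sim \mathrm{Uniform}(0,1)$, the sequence $X_n = n\,\mathbf{1}\{U < 1/n\}$ converges to $0$ almost surely, yet $\mathbb{E}[X_n] = 1$ for every $n$, so there is no convergence in mean; truncating at a fixed bound $M$ restores it.

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo estimates of E[X_n] for X_n = n * 1{U < 1/n}.
# X_n -> 0 a.s. (for each u > 0, eventually 1/n < u), but E[X_n] = 1.
u = rng.uniform(size=1_000_000)

def mean_Xn(n):
    return (n * (u < 1.0 / n)).mean()

def mean_bounded_Xn(n, M=5.0):
    # min(X_n, M) is dominated by the constant M, so dominated
    # convergence applies and E[min(X_n, M)] = M/n -> 0.
    return np.minimum(n * (u < 1.0 / n), M).mean()

ns = (10, 100, 1000)
unbounded = [mean_Xn(n) for n in ns]
bounded = [mean_bounded_Xn(n) for n in ns]
print("E[X_n]        ~", unbounded)   # stays near 1
print("E[min(X_n,M)] ~", bounded)     # tends to 0
```

The unbounded sequence shows almost sure convergence without convergence in mean; once the variables are uniformly bounded, the expectations do converge to zero.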