Let $X_1,\dots,X_{100}$ be a 'mathematical sample' from the $N(\mu, \sigma)$ distribution; in other words, they are independent and identically distributed random variables. Let
$$M\left(t\right)=\text{E}\left[\sum_{i=1}^{100}\textbf{1}_{\left[-t, t\right]}\left(X_i-\mu\right)\right]$$
where $\textbf{1}$ is the indicator function.
I want to show that $M\left(t\right)$ is the expected number of values in the sample that lie between $\mu-t$ and $\mu+t$.
By using:
$$\text{E}\left[X\right]=\mu=\int_{-\infty}^\infty xf\left(x\right)dx$$
We get the following:
$$\int_{-\infty}^\infty \sum_{i=1}^{100}\textbf{1}_{\left[-t, t\right]}\left(x-\mu\right) \frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{\left(x-\mu\right)^2}{2\sigma^2}} dx$$
but this integral seems pretty hard to evaluate because of the indicator function, and I am not sure this approach is even right.
Thanks in advance!
You're overcomplicating things. To show
$$M\left(t\right) = \text{E}\left[\text{number of } i \text{ such that } \mu - t \le X_i \le \mu + t\right],$$
it suffices to show that
$$\sum_{i=1}^{100}\mathbf{1}_{\left[-t, t\right]}\left(X_i - \mu\right) = \text{number of } i \text{ such that } \mu - t \le X_i \le \mu + t.$$
To do this, note that $\mathbf{1}_{[-t, t]}(X_i - \mu)$ is the indicator function for the event "the $i$th sample value lies between $\mu-t$ and $\mu+t$": it equals $1$ when $\mu - t \le X_i \le \mu + t$ and $0$ otherwise. Summing over $i$ therefore counts exactly how many of the $100$ sample values fall in $[\mu - t, \mu + t]$, and $M(t)$ is the expectation of that count. No integral computation is needed.
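As a sanity check (a sketch, not part of the argument): by linearity of expectation and identical distribution, $M(t) = 100 \cdot \text{P}(|X_1 - \mu| \le t) = 100\,(2\Phi(t/\sigma) - 1)$, where $\Phi$ is the standard normal CDF. The values of $\mu$, $\sigma$, and $t$ below are arbitrary choices for illustration; a quick Monte Carlo estimate of the expected count should agree with the closed form.

```python
import numpy as np
from math import erf, sqrt

# Illustrative parameters (not from the original problem).
mu, sigma, t, n = 5.0, 2.0, 1.5, 100

rng = np.random.default_rng(0)
# Many replications of the 100-observation sample.
samples = rng.normal(mu, sigma, size=(200_000, n))

# Per replication, count how many values land in [mu - t, mu + t];
# this is the sum of the indicators 1_{[-t, t]}(X_i - mu).
counts = np.sum(np.abs(samples - mu) <= t, axis=1)
mc_estimate = counts.mean()  # Monte Carlo estimate of M(t)

# Closed form: M(t) = n * P(|X - mu| <= t) = n * (2 * Phi(t / sigma) - 1).
phi = 0.5 * (1 + erf(t / (sigma * sqrt(2))))
closed_form = n * (2 * phi - 1)

print(mc_estimate, closed_form)
```

The two printed numbers should match to within Monte Carlo error, confirming that $M(t)$ really is the expected number of sample values within $t$ of $\mu$.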