Variance is defined as the expected squared distance of a random variable from its mean: $\operatorname{Var}(X) = E\left[\left(X - E[X]\right)^2\right]$, and the standard deviation is $\sigma(X) = \sqrt{\operatorname{Var}(X)}$.
But if $E[X]$ is the expected value of, say, the next draw of the random variable, then shouldn't the expected distance of $X$ itself from $E[X]$ be zero? Indeed, by linearity of expectation, $E\left[X - E[X]\right] = E[X] - E[X] = 0$.
So there seems to be a tension in describing variance/stdev as something like the "expected distance from the mean": variance and stdev are generally nonzero, whereas the expected *signed* deviation from the mean is exactly zero.
To flesh out this tension, suppose Player A and Player B play a game in which Player B repeatedly draws values of $X \sim N\left(\mu, \sigma^2\right)$, for some fixed $\mu$ and $\sigma^2$, and Player A must wager on the absolute distance $\left|X - \mu\right|$ of the next draw from $\mu$. If Player A's guess is close, they win money; if it is far off, Player B wins money. Suppose Player A is choosing among three strategies:
- Always wager that the distance will be $0$
- Always wager that the distance will be $\sigma^2$
- Always wager that the distance will be $\sigma$
Under which strategy would Player A make the most money?
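One can explore this empirically with a quick Monte Carlo sketch. The setup below is illustrative: I assume $\mu = 0$, $\sigma = 2$ (chosen so that $\sigma \neq \sigma^2$), and a payoff that shrinks with the error $\left|\text{wager} - |X - \mu|\right|$, so a lower mean absolute error means Player A does better.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 2.0                # hypothetical parameters, chosen so sigma != sigma**2
draws = rng.normal(mu, sigma, size=200_000)
dist = np.abs(draws - mu)           # |X - mu|, the quantity Player A wagers on

# Mean absolute error of each constant wager; lower is better for Player A
# under the assumed payoff rule.
mae = {w: float(np.abs(dist - w).mean()) for w in (0.0, sigma**2, sigma)}
for w, err in mae.items():
    print(f"wager {w:>4}: mean abs error {err:.3f}")
```

Note that for a normal distribution $E\left|X - \mu\right| = \sigma\sqrt{2/\pi} \approx 0.80\,\sigma$, so the typical distance from the mean is on the order of $\sigma$, not $0$; also, $\sigma^2$ is not even in the same units as $X$, so how well that wager does depends on the scale of $\sigma$.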