I am very new to probability theory, and a few things really confuse me.
Suppose $M$ is a finite subset of $\mathbb{N}$ and $X:\Omega \rightarrow M$ is a random variable.
My first question is: what is the image of $X^2$? It cannot be $M$ in general. Is it just the set of squared values of $M$, i.e. $\{x^2 : x \in M\}$?
My second question is: why is $E[X^2] = \sum_{x \in M} x^2 \cdot P(X=x)$ and not $\sum_{x \in M} x^2 \cdot P(X^2=x^2)$? The latter is what I get when following the definition of $E[X]$.
The definition of $E[X]$ for finite random variables that I use is:
$$E[X]=\sum_{x \in M} x \cdot P(X=x).$$
Actually, the expression $\mathbb{E}(X^2) = \sum_{x}x^2 \,\mathbb{P}(X = x)$ is not directly defined this way. It follows from the law of the unconscious statistician (LOTUS), which states that,
if $X$ is a r.v. and $g:\mathbb{R} \to \mathbb{R}$, then $$\mathbb{E}(g(X)) = \sum_{x}g(x) \mathbb{P}(X = x)$$ whenever this sum is absolutely convergent.
Taking $g(x) = x^2$, one sees that $$\mathbb{E}(X^2) = \mathbb{E}( g(X) ) = \sum_{x}g(x) \,\mathbb{P}(X = x)=\sum_{x}x^2 \,\mathbb{P}(X = x).$$
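As a sanity check, here is a small numerical sketch (the distribution on $M=\{1,2,3\}$ is a made-up example, not from the question) comparing the LOTUS sum with the expectation computed directly from the definition applied to the random variable $Y = X^2$, summing over the image $\{x^2 : x \in M\}$:

```python
# Made-up example distribution on M = {1, 2, 3}
M = [1, 2, 3]
p = {1: 0.5, 2: 0.3, 3: 0.2}  # P(X = x)

# LOTUS: E[X^2] = sum over x in M of x^2 * P(X = x)
lotus = sum(x**2 * p[x] for x in M)

# Direct definition applied to Y = X^2: sum over y in Im(X^2) of y * P(Y = y).
# Since squaring is injective on the naturals, P(X^2 = x^2) = P(X = x) here.
image = {x**2 for x in M}  # Im(X^2) = {1, 4, 9}
p_y = {y: sum(p[x] for x in M if x**2 == y) for y in image}
direct = sum(y * p_y[y] for y in image)

print(lotus, direct)  # both give 3.5
```

Note that on $\mathbb{N}$ the map $x \mapsto x^2$ is injective, so the two sums coincide term by term; on a set containing both $-1$ and $1$ one would have to be more careful, and LOTUS handles that case correctly as well.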
You can find more details in *Probability and Random Processes* by Grimmett and Stirzaker.