I'm looking at the proof of
$$E[X] = E[E[X\mid Y]]$$
But I'm having trouble seeing why, if we take $X, Y$ to be discrete random variables for example, we have that
$$E[E[X\mid Y]]= \sum_{y\mid P(Y=y)>0}E[X\mid Y=y] \cdot P(Y =y)$$
I know that
$E[X\mid Y]$ can be defined as the random variable satisfying $E[X\mid Y](\omega) = E[X\mid Y=y]$ whenever $Y(\omega) = y$,
but from that I lose a bit of intuition.
Does it mean that, since $E[X\mid Y]$ is a random variable whose value is determined by $y \in \operatorname{Im}(Y)$, to get its expected value we must simply sum $E[X\mid Y=y]$ over all such $y$, weighting each term by $P(Y=y)$?
Basically, if someone has a good intuitive explanation I'd be so happy!
Also, it is written that
$$\sum_{y\mid P(Y=y)>0}E[X\mid Y=y]\cdot P(Y =y) = \sum_{y\mid P(Y=y)>0} \frac{E[X\cdot \mathbb{1}_{(Y=y)}]}{P(Y=y)} P(Y=y)$$
(where $\mathbb{1}$ is the indicator function)
$$= \sum_{y\mid P(Y=y)>0}E[X\cdot\mathbb{1}_{(Y=y)}] = E\Bigl[X\cdot\sum_{y\mid P(Y=y)>0}\mathbb{1}_{(Y=y)}\Bigr] = E[X]$$
and I'm also having trouble following that part.
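For concreteness, the identity $E[X\mid Y=y] = E[X\cdot \mathbb{1}_{(Y=y)}]/P(Y=y)$ and the final equality $\sum_y E[X\cdot\mathbb{1}_{(Y=y)}] = E[X]$ can be checked exactly on a toy joint pmf (the pmf below is made up purely for illustration):

```python
from fractions import Fraction as F

# Made-up joint pmf for illustration: p[(x, y)] = P(X = x, Y = y)
p = {(1, 0): F(1, 4), (2, 0): F(1, 4), (1, 1): F(1, 6), (3, 1): F(1, 3)}
ys = {y for (_, y) in p}

def E_X_ind(y):
    # E[X * 1_{(Y=y)}]: X on the event {Y = y}, 0 elsewhere
    return sum(x * q for (x, yy), q in p.items() if yy == y)

E_X = sum(x * q for (x, _), q in p.items())

# Each term E[X | Y=y] * P(Y=y) equals E[X * 1_{(Y=y)}]; summing over y
# gives E[X], because the indicators 1_{(Y=y)} add up to the constant 1.
assert sum(E_X_ind(y) for y in ys) == E_X
print(E_X)  # 23/12
```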
Thank you all for everything!
For the first equation:
$$E[E[X\mid Y]]= \sum_{y\mid P(Y=y)>0}E[X\mid Y=y] \cdot P(Y =y)$$
It helps to think of $E[X\mid Y]$ as a function $f(Y)$, which is a random variable (see here). Then, as with any expected value, you sum over all possible values of the random variable, weighting each value by its probability:
$$E[f(Y)]= \sum_{y \mid P(Y=y)>0} f(y) \cdot P(Y =y)$$
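As a sketch of this viewpoint (the joint pmf is a hypothetical example), $f(y) = E[X\mid Y=y]$ is an ordinary function of $y$, and $E[E[X\mid Y]]$ is just the expectation of $f(Y)$:

```python
from fractions import Fraction as F

# Hypothetical joint pmf: p[(x, y)] = P(X = x, Y = y)
p = {(1, 0): F(1, 4), (2, 0): F(1, 4), (1, 1): F(1, 6), (3, 1): F(1, 3)}

def P_Y(y):
    # Marginal P(Y = y)
    return sum(q for (_, yy), q in p.items() if yy == y)

def f(y):
    # f(y) = E[X | Y = y] = sum_x x * P(X = x, Y = y) / P(Y = y)
    return sum(x * q for (x, yy), q in p.items() if yy == y) / P_Y(y)

# E[f(Y)]: iterate over the values of Y, weighting f(y) by P(Y = y)
E_f_Y = sum(f(y) * P_Y(y) for y in {y for (_, y) in p})

E_X = sum(x * q for (x, _), q in p.items())
assert E_f_Y == E_X  # both sides agree on this pmf
```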
For your doubt about the last equations, I will write the proof in another way; maybe it helps.
$$E[E[X\mid Y]] = \sum_{y \mid P(Y=y)>0} E[X\mid Y = y] \cdot P(Y = y) = \sum_{y \mid P(Y=y)>0} \Bigl( \sum_{x \in \operatorname{Im}(X)} x\, P(X = x \mid Y = y)\Bigr)\cdot P(Y = y)$$
$$= \sum_{y\mid P(Y=y)>0} \Bigl(\sum_{x \in \operatorname{Im}(X)} x\, \frac{P(X = x,\, Y = y)}{P(Y = y)}\Bigr)\cdot P(Y = y) = \sum_{x \in \operatorname{Im}(X)} x \sum_{y \mid P(Y=y)>0} P(X = x,\, Y = y)$$
$$= \sum_{x \in \operatorname{Im}(X)} x\, P(X = x) = E[X],$$
where the second-to-last equality interchanges the order of summation and the last one marginalizes out $Y$.
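The key step above is swapping the two sums, which is always valid for finite sums. A quick sketch verifying that both orders of summation agree on a toy pmf (the pmf is a made-up example):

```python
from fractions import Fraction as F

# Made-up joint pmf: p[(x, y)] = P(X = x, Y = y)
p = {(1, 0): F(1, 4), (2, 0): F(1, 4), (1, 1): F(1, 6), (3, 1): F(1, 3)}
xs = {x for (x, _) in p}
ys = {y for (_, y) in p}

def joint(x, y):
    return p.get((x, y), F(0))

# Sum over y first: sum_y (sum_x x * P(X=x, Y=y)) -- the order in the proof
by_y = sum(sum(x * joint(x, y) for x in xs) for y in ys)

# Sum over x first: sum_x x * (sum_y P(X=x, Y=y)), marginalizing out Y
by_x = sum(x * sum(joint(x, y) for y in ys) for x in xs)

assert by_y == by_x  # both orders give E[X]
```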