Let $X$ be a random variable and $A$ an event with $P(A)>0$. How do we define $E(X|A)$ and $E(X|\mathbb{1}_{A})$?
I know that $E(X|A)=\frac{1}{P(A)}E(X\mathbb{1}_A)$, so this is covered.
But how is $E(X|\mathbb{1}_A)$ defined? It should look like this (or something similar, if my memory serves me): $$E(X|A)P(\mathbb{1}_A)+E(X|A^c)P(\mathbb{1}_{A^c}).$$ Is this correct? And if so, why?
Also, as a side question: what is $P(\mathbb{1}_A)$, or how is it defined?
This was a question on a theoretical test in our probability class.
The most general way to define conditional expectation is through the $\sigma$-algebra definition: $E[X \mid \mathcal{G}]$ is the a.s.-unique random variable that is $\mathcal{G}$-measurable and satisfies $E[X 1_B]=E[E[X \mid \mathcal{G}] 1_B]$ for all $B \in \mathcal{G}$. One can then define conditional expectation with respect to a random variable by setting $E[X \mid Y] := E[X \mid \sigma(Y)]$, where $\sigma(Y)$ is the smallest $\sigma$-algebra with respect to which $Y$ is measurable. In the case $Y=1_A$, this definition gives
$$E[X \mid 1_A]=E[X \mid A] 1_A + E[X \mid A^c] 1_{A^c}.$$
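For completeness, here is a short check, writing $Z$ for the right-hand side, that $Z$ satisfies the two defining properties of $E[X \mid 1_A]$ (still assuming $0 < P(A) < 1$):

```latex
% Z := E[X \mid A]\,1_A + E[X \mid A^c]\,1_{A^c}.
% Measurability: Z takes at most two values and is constant on A and on A^c,
% hence Z is \sigma(1_A)-measurable, where \sigma(1_A) = \{\emptyset, A, A^c, \Omega\}.
% Defining identity on the generator B = A (using 1_A 1_{A^c} = 0 and E[1_A] = P(A)):
E[Z\,1_A] = E[X \mid A]\,E[1_A] = \frac{E[X\,1_A]}{P(A)}\,P(A) = E[X\,1_A].
% The case B = A^c is symmetric, and B = \emptyset, \Omega follow by additivity,
% so Z = E[X \mid 1_A] a.s.
```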
Assuming $0<P(A)<1$, both expectations on the right-hand side can be defined in the manner you mentioned. If either inequality fails, it is no longer obvious how to define one of the two terms; but in that case the "bad" term can simply be ignored, because it is relevant only on an event of probability zero (and conditional expectation is unique only up to a null set).
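As a numerical sanity check, one can verify both the two-term formula and the defining property on a toy finite probability space (the sample points, weights, and the choice of $X$ and $A$ below are all illustrative, not from the question):

```python
# Toy finite probability space: verify E[X | 1_A] = E[X|A] 1_A + E[X|A^c] 1_{A^c}
# by checking the defining property E[X 1_B] = E[Z 1_B] for every B in sigma(1_A).
# All concrete values here are arbitrary illustrative choices.

omega = [0, 1, 2, 3]                  # sample points
probs = [0.1, 0.2, 0.3, 0.4]          # P({w})
X     = {0: 5.0, 1: -1.0, 2: 2.0, 3: 7.0}
A     = {1, 3}                        # the event A, with 0 < P(A) < 1

def E(f):
    """Expectation of a function f of the sample point."""
    return sum(p * f(w) for w, p in zip(omega, probs))

P_A          = E(lambda w: 1.0 if w in A else 0.0)
E_X_given_A  = E(lambda w: X[w] * (w in A)) / P_A          # E[X|A] = E[X 1_A]/P(A)
E_X_given_Ac = E(lambda w: X[w] * (w not in A)) / (1 - P_A)

def Z(w):
    """Candidate for E[X | 1_A]: constant on A and constant on A^c."""
    return E_X_given_A if w in A else E_X_given_Ac

# Defining property on sigma(1_A) = {emptyset, A, A^c, Omega}:
for B in [set(), A, set(omega) - A, set(omega)]:
    lhs = E(lambda w: X[w] * (w in B))
    rhs = E(lambda w: Z(w) * (w in B))
    assert abs(lhs - rhs) < 1e-9
```

The loop passes for all four sets in $\sigma(1_A)$, and since $Z$ is constant on $A$ and on $A^c$ it is $\sigma(1_A)$-measurable, so $Z$ is (a version of) $E[X \mid 1_A]$.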