I am trying to understand the definition of conditional expectation with respect to a sub-$\sigma$-algebra. I think the top answer to the question "Why this weird definition of conditional expectation?" is similar to what I'm after, but it didn't quite make sense to me. The example given in my lecture notes is as follows.
Let $(\Omega,\mathcal A, \mathbb P)$ be a probability space, let $X$ be a random variable, and let $A\in \mathcal A$. Suppose $\mathcal F = \sigma(\{A\}) = \{\emptyset, A, A^{c}, \Omega\}$, a certain sub-$\sigma$-algebra of $\mathcal A$. We define the conditional expectation given $\mathcal F$ as the random variable on $\Omega$ given by
$\mathbb E[X|\mathcal F]:\Omega\rightarrow\mathbb R$, where $\mathbb E[X|\mathcal F](\omega)=$
\begin{cases} \mathbb E[X|A] & \text{if } \omega\in A \\ \mathbb E[X|A^{c}] & \text{if } \omega\in A^{c} \end{cases}
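To make the definition concrete, here is a small illustrative sketch (my own example, not from the notes): a fair die with $\Omega=\{1,\dots,6\}$, $X(\omega)=\omega$, and $A$ the event "the outcome is even". All names in the code are hypothetical.

```python
from fractions import Fraction

omega_space = range(1, 7)
p = {w: Fraction(1, 6) for w in omega_space}   # uniform probability measure
X = lambda w: w                                 # the random variable X(omega) = omega
A = {w for w in omega_space if w % 2 == 0}      # the event "outcome is even"
Ac = set(omega_space) - A

def cond_exp_given_event(X, event, p):
    """Elementary conditional expectation: E[X | B] = E[X * 1_B] / P(B)."""
    pB = sum(p[w] for w in event)
    return sum(X(w) * p[w] for w in event) / pB

def cond_exp_given_F(w):
    """The random variable E[X | sigma({A})]: constant on A and constant on A^c."""
    if w in A:
        return cond_exp_given_event(X, A, p)
    return cond_exp_given_event(X, Ac, p)

print([int(cond_exp_given_F(w)) for w in omega_space])  # -> [3, 4, 3, 4, 3, 4]
```

The output shows the point of the definition: $\mathbb E[X|\mathcal F]$ is a genuine function of $\omega$, but it cannot distinguish outcomes within $A$ (value 4) or within $A^{c}$ (value 3); it is the best guess of $X$ using only the information in $\mathcal F$. One can also check the tower property $\mathbb E[\mathbb E[X|\mathcal F]] = \mathbb E[X] = 7/2$ numerically.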
I would like to know what this function is telling us. Why is it useful to have a function that, when given an outcome $\omega$, only tells us the expectation of the random variable over the larger event containing $\omega$? If we already have the outcome, isn't the expected value of the random variable just $X(\omega)$ itself? Would it not make more sense for this function to have the domain $\mathcal A$?
EDIT: Is the idea that we don't know the exact $\omega\in\Omega$, only whether $\omega\in A$ or $\omega\in A^{c}$? If we knew the exact $\omega$, we would know $X(\omega)$, and therefore would not need to estimate $X$ at all?