Is there some sort of intuition or a good illustrative example for random variables being $\sigma$-algebra measurable? I understand the definition, but when looking at martingales, the meaning of random variables being measurable eludes me. So my question is mainly aimed at the case of martingales, where a sequence of random variables is adapted to some filtration.
In Interpretation of sigma algebra, the asker poses (among others) a similar question, but I don't think it contains an actual answer to this one.
Maybe this can help you understand the concept of conditional expectation, which lies behind your question.
Suppose you have a probability space $(\Omega, \mathcal P (\Omega), \mathbb{P})$, where $\mathcal P (\Omega)$ denotes the set of all subsets of $\Omega$ (evidently a $\sigma$-algebra), and $\mathbb{P}$ is a probability measure (in this case, a function from $\mathcal P (\Omega)$ to $[0,1]$).
Suppose you have a random variable (measurable function) $X:(\Omega, \mathcal P (\Omega)) \to (\mathbb{R}, \mathcal B (\mathbb R ))$, where $\mathcal B (\mathbb R )$ is the usual Borel $\sigma$-algebra.
Take as a sub-$\sigma$-algebra the trivial one, $\mathcal F = \{\emptyset, \Omega\}$. Suppose we only know the conditional expectation $\mathbb E(X | \mathcal F)$, but not $X$ itself. How much do we know about $X$? Well, $Y = \mathbb E(X | \mathcal F)$ is a random variable, $\mathcal F / \mathcal B (\mathbb R )$-measurable. From $Y$, we can determine only ONE thing (think about this!): $$\mathbb E(Y) = \mathbb E(\mathbb E(X | \mathcal F)) = \mathbb E X.$$ So, what is $\mathbb{E}(X | \mathcal F)$? It is the most simplified knowledge that we can have: what remains if we know the expectation of the random variable but nothing about its values on particular events (in $\mathcal P (\Omega)$).
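To make this concrete, here is a minimal sketch on a hypothetical finite example (a fair die, $\Omega = \{1,\dots,6\}$, $X(\omega) = \omega$; the names `omega`, `p`, `X`, `Y` are my own). Conditioning on the trivial $\sigma$-algebra collapses $X$ to the single constant $\mathbb E X$:

```python
from fractions import Fraction

# Hypothetical example: a fair die, Omega = {1,...,6}, X(w) = w.
omega = range(1, 7)
p = {w: Fraction(1, 6) for w in omega}   # P({w}) = 1/6
X = {w: w for w in omega}

# Conditional expectation given the trivial sigma-algebra {∅, Ω}:
# Y must be constant on Ω, and its value is forced to be E[X].
EX = sum(p[w] * X[w] for w in omega)
Y = {w: EX for w in omega}               # Y = E(X | {∅, Ω})

print(Y[1])  # Fraction(7, 2), i.e. 3.5 on every outcome
```

Knowing $Y$ here tells you $\mathbb E X = 7/2$ and nothing more.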
(In fact, $Y$ is constant; otherwise, it would not be $\mathcal F$-measurable.)
Suppose now that we enlarge this $\sigma$-algebra, say to $\mathcal F' = \{\emptyset, A, A^c, \Omega\}$, for some non-trivial set $A$. Again, suppose that we only know $\mathbb{E}(X | \mathcal F')$, not $X$. Then we can determine three things about the variable: $$\mathbb E(X 1_A), \, \mathbb E(X 1_{A^c}) \text{ and } \mathbb E (X).$$ Conclusion: a bigger $\sigma$-algebra means more knowledge about the random variable $X$ (which is the one we are interested in)!
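Continuing the same hypothetical die example (my own choice: $A = \{2,4,6\}$, the even outcomes), a sketch of this step: now $Y = \mathbb E(X|\mathcal F')$ takes one value on $A$ and another on $A^c$, and from $Y$ alone we can recover $\mathbb E(X 1_A)$, $\mathbb E(X 1_{A^c})$ and $\mathbb E(X)$:

```python
from fractions import Fraction

# Hypothetical example: fair die, A = even outcomes.
omega = range(1, 7)
p = {w: Fraction(1, 6) for w in omega}
X = {w: w for w in omega}
A = {2, 4, 6}
Ac = set(omega) - A

def cond_value(event):
    # the constant value of E(X | F') on an event of F':
    # E(X 1_event) / P(event)
    pe = sum(p[w] for w in event)
    return sum(p[w] * X[w] for w in event) / pe

Y = {w: cond_value(A) if w in A else cond_value(Ac) for w in omega}

# From Y (and P) we recover exactly the three quantities:
E_X_1A  = Y[2] * sum(p[w] for w in A)    # = E(X 1_A)     = 2
E_X_1Ac = Y[1] * sum(p[w] for w in Ac)   # = E(X 1_{A^c}) = 3/2
E_X     = E_X_1A + E_X_1Ac               # = E(X)         = 7/2
```

Note that $Y$ is constant on $A$ (value $4$) and on $A^c$ (value $3$): being $\mathcal F'$-measurable, it cannot distinguish outcomes inside $A$ from one another.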
Check that in the extreme case, when $\mathcal F'' =\mathcal P (\Omega)$, the knowledge of $\mathbb E (X|\mathcal F'')$ allows us to determine all the expected values $\mathbb E(X 1_{\{X=x\}})= x\mathbb P (X=x)$, because the events $\{X=x\}$ belong to $\mathcal F''$ (like every other subset). If $X$ only takes a finite number of different values (for instance, when $\Omega$ is finite), these expectations are enough to determine the probabilities of all the events $\{X=x\}$. (When $X$ is continuous, the above reasoning is not very useful, for the subsets $\{X=x\}$ have probability zero and the expectations above are zero too. Anyway, by the general properties of the conditional expectation, $\mathbb E(X|\mathcal F'') = X$, because $X$ is $\mathcal F''$-measurable. In this sense, we can say that the variable is recovered from its conditional expectation.)
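The extreme case can be sketched in the same hypothetical die setup: with $\mathcal F'' = \mathcal P(\Omega)$ every singleton $\{\omega\}$ is measurable, so conditioning averages $X$ over each singleton separately and returns $X$ itself:

```python
from fractions import Fraction

# Hypothetical example: fair die, full sigma-algebra P(Omega).
omega = range(1, 7)
p = {w: Fraction(1, 6) for w in omega}
X = {w: w for w in omega}

# Each singleton {w} is in F'', so E(X | F'')(w) = E(X 1_{w}) / P({w}) = X(w):
Y = {w: (p[w] * X[w]) / p[w] for w in omega}

print(Y == X)  # True: the variable is fully recovered
```

This is the sense in which the biggest $\sigma$-algebra gives complete knowledge of $X$.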