Let $(X, F, \mu)$ be a probability space and suppose that $A$ is a sub-sigma-algebra of $F$ such that $\forall S \in A: \mu(S)\in \{0, 1\}$. Let $f \in L^2(X,F,\mu)$ and consider the conditional expectation $g \equiv \mathbb{E}(f\mid A)$. In general (for any sub-sigma-algebra $A$ of $F$) we know that the conditional expectation satisfies, for every $S \in A$, $\mathbb{E}(1_S g) = \mathbb{E}(1_S f) = \int_X 1_S f\,d\mu = \int_S f\,d\mu$. I am trying to understand why, if $A$ consists only of sets of measure zero or one, it follows that $g = \int_X f\,d\mu$ (almost everywhere).
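As a side note, the defining identity above is easy to check concretely on a finite probability space. The following sketch (not from the post; the space, weights, partition, and values of $f$ are all made-up illustrative choices) builds $\mathbb{E}(f\mid A)$ for a sub-sigma-algebra generated by a two-block partition and verifies $\int_S g\,d\mu = \int_S f\,d\mu$ for every $S$ in that sigma-algebra:

```python
from fractions import Fraction as Fr

# Hypothetical finite probability space X = {0,...,5} with uniform weights.
mu = [Fr(1, 6)] * 6
f  = [3, 1, 4, 1, 5, 9]                      # an arbitrary f (finite, so in L^2)

# Sub-sigma-algebra A generated by the partition {0,1,2} | {3,4,5}.
blocks = [{0, 1, 2}, {3, 4, 5}]

# Conditional expectation g = E(f | A): constant on each block,
# equal to the mu-average of f over that block.
g = [Fr(0)] * 6
for B in blocks:
    avg = sum(f[x] * mu[x] for x in B) / sum(mu[x] for x in B)
    for x in B:
        g[x] = avg

def integral(h, S):
    """Integral of h over the set S with respect to mu."""
    return sum(h[x] * mu[x] for x in S)

# Defining property: the integrals of g and f agree on every S in A
# (here A = all unions of blocks, including the empty set and X).
for S in [set(), {0, 1, 2}, {3, 4, 5}, {0, 1, 2, 3, 4, 5}]:
    assert integral(g, S) == integral(f, S)
```

Exact `Fraction` arithmetic is used so the equalities hold literally rather than up to floating-point error.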
The context for this question is Lemma 3 of a blog post discussing Birkhoff's and von Neumann's ergodic theorems. In particular I am interested in understanding von Neumann's theorem, and while I am aware that the operator version of the theorem gives the $L^2$ version (since conditional expectation is an orthogonal projection), the details of why the trivial-ish sub-sigma-algebra $A$ results in $g \equiv \int_X f\,d\mu$ are not entirely clear to me. Intuitively it does make some sense: if we restrict ourselves to sets of measure 0 or 1, then every event in $A$ is already determined by the probability measure $\mu$, so conditioning on $A$ adds no information. But this heuristic does not constitute a proof.
$\int_X f\,d\mu$ is a constant, so it is measurable w.r.t. $A$. Call this constant $c$. By the defining property of conditional expectation, it remains to show that $\int_E f\,d\mu=\int_E c\,d\mu$ for every $E \in A$.
Case 1): $\mu(E)=0$. In this case both sides are clearly $0$.
Case 2): $\mu(E)=1$. In this case $\mu(X\setminus E)=0$, hence $\int_{X\setminus E} f\,d\mu=0$ and the left side equals $\int_X f\,d\mu = c$. The right side is $c\,\mu(E)=c$.
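The two cases can also be checked numerically. In the sketch below (again on a made-up finite space, not from the post), every atom has positive mass, so the only sets of measure $0$ or $1$ are $\emptyset$ and $X$; the constant $c = \int_X f\,d\mu$ then satisfies the defining property on both:

```python
from fractions import Fraction as Fr

# Hypothetical finite probability space with positive atom weights.
mu = [Fr(1, 4), Fr(1, 4), Fr(1, 2)]
f  = [2, 6, 1]

# With all atoms of positive mass, the sets of measure 0 or 1 are
# exactly the empty set and X, so A = {∅, X}.
trivial_A = [set(), {0, 1, 2}]

def integral(h, S):
    """Integral of h over the set S with respect to mu."""
    return sum(h[x] * mu[x] for x in S)

c = integral(f, {0, 1, 2})                   # c = ∫_X f dmu

# Defining property for the constant candidate g ≡ c:
# Case 1 (mu(E)=0) and Case 2 (mu(E)=1) of the proof above.
for E in trivial_A:
    assert integral(f, E) == integral([c] * 3, E)
```

The loop runs exactly the two cases of the proof: on $\emptyset$ both integrals are $0$, and on $X$ both equal $c$.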