I know it is not necessarily true: simple coin-tossing counterexamples can be constructed, as described in earlier posts such as Conditional expectation of independent random variables.
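For concreteness, here is a minimal brute-force sketch of one standard such counterexample: two independent fair tosses with $\mathcal{G} = \sigma(X+Y)$. (This particular choice of $\mathcal{G}$ is mine, for illustration; it need not match the construction in the linked post.)

```python
from itertools import product

# Two independent fair coin tosses on a four-point sample space.
outcomes = list(product([0, 1], repeat=2))  # each outcome has probability 1/4

def cond_exp(f, g):
    """E[f | sigma(g)] as a function on outcomes: average f over
    each level set {g = c} (finite sample space, so this is exact)."""
    level_sets = {}
    for w in outcomes:
        level_sets.setdefault(g(w), []).append(f(w))
    return lambda w: sum(level_sets[g(w)]) / len(level_sets[g(w)])

X = lambda w: w[0]
Y = lambda w: w[1]
S = lambda w: w[0] + w[1]   # generates G = sigma(X + Y)

ex_g = cond_exp(X, S)       # works out to (X + Y) / 2
ey_g = cond_exp(Y, S)       # identical to ex_g by symmetry

# Both conditional expectations equal (X + Y) / 2, a non-constant random
# variable; a non-constant variable is never independent of itself.
# Even the weaker product-of-expectations test fails:
lhs = sum(ex_g(w) * ey_g(w) for w in outcomes) / 4               # E[E[X|G] E[Y|G]] = 0.375
rhs = (sum(map(X, outcomes)) / 4) * (sum(map(Y, outcomes)) / 4)  # E[X] E[Y] = 0.25
print(lhs, rhs)
```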
I'd like to take the question further and seek a measure-theoretic way of reasoning about this: an adequate description of the sub-$\sigma$-algebras $\mathcal{G} \subset \mathcal{F}$ (with $\mathcal{F}$ being the $\sigma$-algebra on which the random variables are defined) detailing when we would have independence and when we wouldn't. I have tried to work from the definitions but have been unable to arrive at anything well-defined.
Consider the simple case $X = 1_{A}$ and $Y = 1_{B}$ (essentially to make the $E$ operator indistinguishable from the $P$ operator and to reason from the basic definition; moreover, by a standard MCT argument we could extend from this case to more general random variables). We then have
$$P(A \cap B) = E(1_{A\cap B}) = E[1_A \cdot 1_B] = E[X Y] = E[X]E[Y] = P(A)P(B) $$
For independence of $E[X|\mathcal{G}]$ and $E[Y|\mathcal{G}]$, a necessary condition (uncorrelatedness; full independence requires the product rule for all pairs of events from the two generated $\sigma$-algebras) is $$E[E[X|\mathcal{G}]] \cdot E[E[Y|\mathcal{G}]] = E[E[X|\mathcal{G}]\cdot E[Y|\mathcal{G}]]$$
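In the coin-toss counterexample sketched above (where $E[X|\mathcal{G}] = E[Y|\mathcal{G}] = (X+Y)/2$), even this weaker condition already fails:
$$E\big[E[X|\mathcal{G}]\, E[Y|\mathcal{G}]\big] = E\left[\tfrac{(X+Y)^2}{4}\right] = \tfrac{3}{8} \;\neq\; \tfrac{1}{4} = E[E[X|\mathcal{G}]] \cdot E[E[Y|\mathcal{G}]].$$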
A simple case where this relation holds is when $X$ and $Y$ are themselves $\mathcal{G}$-measurable, that is, $\sigma(X) \subset \mathcal{G}$ and $\sigma(Y) \subset \mathcal{G}$: then $E[X|\mathcal{G}] = X$ a.s. and $E[Y|\mathcal{G}] = Y$ a.s., so the conditional expectations are independent precisely because $X$ and $Y$ are.
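At the other extreme (an easy observation, but it anchors one of the containment cases asked about below): if $\mathcal{G}$ is independent of $\sigma(X)$ and of $\sigma(Y)$ (in particular if $\mathcal{G} = \{\emptyset, \Omega\}$), then
$$E[X|\mathcal{G}] = E[X] \text{ a.s.} \quad \text{and} \quad E[Y|\mathcal{G}] = E[Y] \text{ a.s.},$$
and almost-surely constant random variables are independent of everything, so independence of the conditional expectations holds trivially here as well.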
For the other cases (in terms of the containment relations between $\mathcal{G}$ and $\sigma(X)$ and $\sigma(Y)$, and when they have 'no relation'), I haven't been able to come up with clear enough descriptions. Apologies for the title of the question, given that it has already been answered; I'm just trying to get as many responses as possible.