Why does mean independence not imply independence? I considered
$$ \mathbb{E}(X\mid Y=y) = \mathbb{E}(X) \text{ for all } y\in \mathcal{Y} $$
This implies that
$$\int_\mathcal{X} x f_{X\mid Y} (x,y) \,dx = \int_{\mathcal{X}} x f_X (x) \,dx \text{ for all } y \in \mathcal{Y}$$
For the equality to hold for every $y$, the left-hand side must not depend on $y$ once the integration is carried out. This seems to suggest that $f_{X\mid Y} (x,y)$ has to be free of $y$. And if $f_{X\mid Y} (x,y)$ is free of $y$ and the equality holds, it seems we must have $f_{X\mid Y} (x,y) = f_X (x)$, i.e. independence.
I suspect the last step is flawed, if only because at best we get $f_{X\mid Y} (x,y) = f_X (x)$ almost everywhere. But what would be an elementary counterexample?
Let the pair $(X,Y)$ be uniformly distributed over a circle (say the unit circle centered at $(0,0)$). For each $y\in(-1,1)$, conditional on $Y=y$ the variable $X$ takes the two values $\pm\sqrt{1-y^2}$ with equal probability, so ${\mathbb E}(X\mid Y=y)=0={\mathbb E}(X)$. Yet $X$ and $Y$ are not independent: knowing $Y$ pins down $|X|$, since $X^2+Y^2=1$. Note also that the conditional distribution of $X$ given $Y=y$ does depend on $y$, which is where the argument above breaks down.
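For a quick numerical check of this counterexample (a sketch not in the original post; variable names are mine), one can sample the circle via a uniform angle and verify both properties: the conditional mean of $X$ is near zero in narrow bands of $Y$, yet $X^2$ and $Y^2$ are perfectly (negatively) correlated because $X^2 + Y^2 = 1$:

```python
import numpy as np

# Sample (X, Y) uniformly on the unit circle via a uniform angle.
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, size=1_000_000)
x, y = np.cos(theta), np.sin(theta)

# Mean independence: E(X | Y = y) is ~0 within narrow bands of y.
for lo, hi in [(-0.1, 0.1), (0.4, 0.6), (0.8, 0.9)]:
    band = (y > lo) & (y < hi)
    print(f"E(X | {lo} < Y < {hi}) ~ {x[band].mean():.4f}")

# Dependence: X^2 + Y^2 = 1, so Y determines |X|.
# Conditional on |Y| < 0.1 we must have |X| > sqrt(1 - 0.01) ~ 0.995,
# and corr(X^2, Y^2) is exactly -1 (a deterministic linear relation).
band = np.abs(y) < 0.1
print(np.abs(x[band]).min())
print(np.corrcoef(x**2, y**2)[0, 1])
```

The binned conditional means are all close to $0$, while the last line prints a correlation of $-1$, so $X$ and $Y$ are mean independent but clearly not independent.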