Suppose $$z \sim \mathcal{N}(0,I) $$ $$\epsilon \sim \mathcal{N}(0,\Gamma) $$ and $$x=\mu + A\cdot z +\epsilon$$
I understand that $x$ given $z$ is distributed $$x|z\sim \mathcal{N}(\mu + A\cdot z,\Gamma)$$ because when we condition on $z$, we treat it as a constant (it is not a random variable, as it is "given").
My question concerns calculating the distribution of $z|x$. We have $$x=\mu + A\cdot z +\epsilon \implies A^{-1}\cdot(x-\mu-\epsilon)=z.$$ But if I proceed as above, I get a wrong result: $$z=A^{-1}\cdot x -A^{-1}\cdot\mu -A^{-1}\cdot\epsilon$$ $$E(z|x)=E(A^{-1}\cdot x\mid x) -E(A^{-1}\cdot\mu\mid x) - E(A^{-1}\cdot\epsilon\mid x)$$ $$E(z|x)=A^{-1}\cdot x -A^{-1}\cdot\mu - 0$$ $$E(z|x)=A^{-1}\cdot (x-\mu) $$
Which is wrong, as the text I am reading states: $$E(z|x)=A^t(AA^t+\Gamma)^{-1}(x-\mu)$$
Where am I making a mistake? How can I derive the correct result?
This is a very good question. Your finding is related to the regression dilution phenomenon: https://en.wikipedia.org/wiki/Regression_dilution
In the setup, it is implicitly assumed that $z$ and $\epsilon$ are independent Gaussian random vectors. This forces $x$ and $\epsilon$ to be correlated: you cannot have both $x$ and $z$ independent of $\epsilon$. Therefore, even if you write $z=A^{-1} (x-\mu-\epsilon)$ (assuming $A$ is invertible at all), it does not follow that $E(z|x)=A^{-1}(x-\mu)$, because the correlation between $x$ and $\epsilon$ means that $E(\epsilon|x)$ is no longer $0$. Reading about regression dilution will deepen your understanding of this.
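To actually derive the result in your text, use the standard formula for conditioning in a jointly Gaussian vector. Since $z$ and $\epsilon$ are independent, $(z,x)$ is jointly Gaussian with
$$E(x)=\mu,\qquad \operatorname{Cov}(x)=AA^t+\Gamma,\qquad \operatorname{Cov}(z,x)=\operatorname{Cov}(z,\,\mu+Az+\epsilon)=\operatorname{Cov}(z)\,A^t=A^t.$$
The Gaussian conditioning formula then gives
$$E(z|x)=E(z)+\operatorname{Cov}(z,x)\operatorname{Cov}(x)^{-1}\bigl(x-E(x)\bigr)=A^t(AA^t+\Gamma)^{-1}(x-\mu),$$
which is exactly the stated expression. Note that this derivation never requires $A$ to be invertible, or even square.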
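You can also see this numerically. Below is a quick simulation in the scalar case (the values of $A=a$, $\Gamma=\gamma$, $\mu$ are arbitrary choices for illustration): the best linear predictor of $z$ from $x$ has coefficient $a/(a^2+\gamma)$, the scalar version of $A^t(AA^t+\Gamma)^{-1}$, not the naive $1/a$.

```python
import numpy as np

rng = np.random.default_rng(0)
a, gamma, mu = 2.0, 1.0, 0.5          # scalar model: x = mu + a*z + eps
n = 200_000

z = rng.standard_normal(n)            # z ~ N(0, 1)
eps = rng.normal(0.0, np.sqrt(gamma), n)  # eps ~ N(0, gamma), independent of z
x = mu + a * z + eps

# Empirical regression coefficient of z on x: Cov(z, x) / Var(x)
slope = np.cov(z, x)[0, 1] / np.var(x)

naive = 1.0 / a                # from z = (x - mu - eps)/a, wrongly taking E(eps|x) = 0
correct = a / (a**2 + gamma)   # scalar version of A^t (A A^t + Gamma)^{-1}

print(slope, naive, correct)
```

Here `correct` is $2/(4+1)=0.4$ while `naive` is $0.5$, and the empirical slope lands on the former.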