We have to derive the MAP estimate of a signal $x(m)$ observed in AWGN $n(m)$, so that $y(m) = x(m) + n(m)$, where neither process is assumed to be zero-mean. I have done this:
To find the MAP estimator, we have to find the $x$ that satisfies $\arg\max_x[p(y|x)p(x)]$, or equivalently $\arg\max_x[\log(p(y|x)) + \log(p(x))]$
Where:
$$p(x) = \frac{1}{\sqrt{2\pi \sigma_x^2}} e^{-\dfrac{(x-\mu_x)^2}{2\sigma_x^2}}$$
$$p(y|x) = \frac{1}{\sqrt{2\pi (\sigma_x^2 + \sigma_n^2)}} e^{-\dfrac{(y-\mu_x - \mu_n)^2}{2(\sigma_x^2 + \sigma_n^2)}}$$
Thus, taking into account that $y = x+n$:
$$\dfrac{\partial \log(p(y|x)p(x))}{\partial x} = 0 \rightarrow \dfrac{x-\mu_x}{\sigma_x^2} = - \dfrac{y-\mu_x-\mu_n}{\sigma_x^2 + \sigma_n^2} $$
And finally
$$x = \dfrac{\sigma_x^2}{\sigma_x^2 + \sigma_n^2}(\mu_n - y) + \mu_x \dfrac{2\sigma_x^2 + \sigma_n^2}{\sigma_x^2 + \sigma_n^2}$$
However, the solution should be:
$$x = \dfrac{\sigma_x^2}{\sigma_x^2 + \sigma_n^2}(y-\mu_n) + \dfrac{\sigma_n^2}{\sigma_x^2 + \sigma_n^2} \mu_x$$
And I am not even sure the probability density functions at the beginning are correct; they just seemed right to me. So I would appreciate it if you could explain how to derive $p(y|x)$ in particular.
I hope someone can help me. Thank you for your responses.
I think your description of the notation $y(m)$, $x(m)$ and $n(m)$ is a bit confusing. I will just suppose that $m$ refers to the number of independent samples you have.
Just as you mentioned, according to the MAP rule, the estimate of $x$ is $$\arg \max_x \left[\log(p(y|x)) + \log(p(x))\right],$$
where we assume that $x$ and $n$ are normally distributed as $\mathbf{N}(\mu_x,\sigma_x^2)$ and $\mathbf{N}(\mu_n,\sigma_n^2)$, respectively. The PDF of $x$ is then exactly what you wrote. However, the conditional PDF $p(y|x)$ is not what you used. Since the value of $x$ is given, the uncertainty in $x$ is removed, and the only remaining randomness in $y = x + n$ comes from the noise. Thus the conditional density is $$p(y|x)=\frac{1}{\sqrt{2\pi\sigma_n^2}}\exp{\left(-\frac{(y-x-\mu_n)^2}{2\sigma_n^2}\right)}$$

As I said at the beginning of this answer, $y$ is actually not one sample but $m$ independent samples, so the joint conditional density of all $m$ samples is $$p(y|x)=\prod_{i=1}^m\frac{1}{\sqrt{2\pi\sigma_n^2}}\exp{\left(-\frac{(y_i-x-\mu_n)^2}{2\sigma_n^2}\right)}$$

Substituting this conditional density into the MAP rule yields $$\arg\max_x \quad \mathrm{const.}+\sum_{i=1}^m\left(-\frac{(y_i-x-\mu_n)^2}{2\sigma_n^2}\right)+\left(-\frac{(x-\mu_x)^2}{2\sigma_x^2}\right)$$ where $\mathrm{const.}$ collects the terms produced by the logarithm that do not depend on $x$, so they can be ignored. The problem above is equivalent to $$\arg\min_{x}\quad\sum_{i=1}^m\left(\frac{(y_i-x-\mu_n)^2}{2\sigma_n^2}\right)+\left(\frac{(x-\mu_x)^2}{2\sigma_x^2}\right)$$

Taking the derivative with respect to $x$ and setting it equal to zero, one obtains $$\sum_{i=1}^m\frac{y_i-x-\mu_n}{\sigma_n^2}\times(-1)+\frac{x-\mu_x}{\sigma_x^2}=0$$ and therefore $$x^*=\frac{\sigma_x^2}{\sigma_n^2+m\sigma_x^2}\sum_{i=1}^{m}(y_i-\mu_n)+\frac{\sigma_n^2}{\sigma_n^2+m\sigma_x^2}\mu_x$$

For $m=1$ this reduces to exactly the answer you want. Hope this helps!
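As a quick sanity check, the closed-form $m=1$ estimator can be compared against a brute-force maximization of $\log p(y|x) + \log p(x)$ over a fine grid. The parameter values below are made up purely for illustration; only the formulas come from the derivation above.

```python
import numpy as np

# Hypothetical parameters (chosen only for this check), single sample (m = 1).
mu_x, sigma_x = 2.0, 1.5   # prior mean and std of x
mu_n, sigma_n = 0.5, 0.8   # noise mean and std
y = 3.2                    # one observed sample

# Closed-form MAP estimate for m = 1:
# x* = sigma_x^2/(sigma_x^2+sigma_n^2) (y - mu_n) + sigma_n^2/(sigma_x^2+sigma_n^2) mu_x
w = sigma_x**2 / (sigma_x**2 + sigma_n**2)
x_map = w * (y - mu_n) + (1.0 - w) * mu_x

# Brute force: maximize log p(y|x) + log p(x) (constants dropped) on a grid.
xs = np.linspace(x_map - 5.0, x_map + 5.0, 200001)
log_post = (-(y - xs - mu_n)**2 / (2 * sigma_n**2)
            - (xs - mu_x)**2 / (2 * sigma_x**2))
x_grid = xs[np.argmax(log_post)]

print(x_map, x_grid)           # the two values agree to grid resolution
assert abs(x_map - x_grid) < 1e-3
```

The grid argmax lands on the closed-form value (up to the grid spacing), confirming that the weighted combination of $(y-\mu_n)$ and $\mu_x$ is indeed the posterior mode.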