Consider the situation $r = a+n$, where $n \sim \mathcal{N}(0,\sigma_n^2)$. I am confused about the computation of $p_{a|r}(A)$ in this scenario.
Option 1, $r = a+n \implies a = r - n \implies p_{a|r}(A) = \mathcal{N}(R, \sigma_n^2)$
Option 2, from Bayes theorem gives, $p_{a|r}(A) = \dfrac{p_{r|a}(R)p_a(A)}{p_r(R)}$. Assuming $p_a(A) = \mathcal{N}(0, \sigma_a^2)$, we get $p_{a|r}(A) = \mathcal{N}\left(\dfrac{R}{1 + \frac{\sigma_n^2}{\sigma_a^2}}, \dfrac{\sigma_n^2}{1 + \frac{\sigma_n^2}{\sigma_a^2}}\right)$.
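A quick Monte Carlo check can show which option describes the actual conditional distribution of $a$ given $r$. This is a sketch, assuming $a$ and $n$ are independent; the values $\sigma_a = 2$, $\sigma_n = 1$, $R = 1.5$ are arbitrary illustrative choices, not from the question:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_a, sigma_n, R = 2.0, 1.0, 1.5
N = 2_000_000

# sample (a, n) independently and form r = a + n
a = rng.normal(0.0, sigma_a, N)
n = rng.normal(0.0, sigma_n, N)
r = a + n

# approximate conditioning on r = R by keeping a thin slice of samples
mask = np.abs(r - R) < 0.02
a_cond = a[mask]

# Option 1 prediction: mean R, variance sigma_n^2
# Option 2 prediction: mean R/(1 + sigma_n^2/sigma_a^2),
#                      variance sigma_n^2/(1 + sigma_n^2/sigma_a^2)
shrink = 1.0 + sigma_n**2 / sigma_a**2
mean2, var2 = R / shrink, sigma_n**2 / shrink

print(a_cond.mean(), mean2)  # empirical mean should match option 2, not R
print(a_cond.var(), var2)    # empirical variance should match option 2
```

With these numbers, option 2 predicts mean $1.2$ and variance $0.8$, while option 1 predicts $1.5$ and $1.0$; the empirical moments of the retained samples agree with option 2.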
1. My first question is: why do these two approaches give different answers? I understand that option 2 depends on the prior on $a$, but even then the likelihood $p_{r|a}(R)$ is computed using the same substitution as in option 1.
2. Option 2 converges to option 1 as $\sigma_a^2 \to \infty$. This can be seen with the Gaussian prior; however, is it also true for other prior distributions on $a$?
Confusion in MAP estimation
Asked by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail), 2026-03-26
The dependence structure of $a$, $n$, and $r$ is important! Typically in such "inverse problem" scenarios you assume that $n$ is independent of $a$, and therefore $$ p_{r|a} = \mathcal{N}(A, \sigma_n^2), $$ but since $r$ and $n$ are *not* independent (knowing $r$ gives some information about $n$), you cannot conclude from $a = r - n$ that $$ p_{a|r} = \mathcal{N}(R, \sigma_n^2). $$ (I have tried to adopt your notation, which is not very intuitive to me.) So, under the above independence assumption, option 1 is not correct, while option 2 is (for that prior).
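The dependence between $r$ and $n$ can be seen numerically: since $r = a + n$ with $a \perp n$, we have $\operatorname{Cov}(r, n) = \sigma_n^2 \neq 0$. A minimal sketch (illustrative values $\sigma_a = 2$, $\sigma_n = 1$, chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma_a, sigma_n = 2.0, 1.0
N = 1_000_000

a = rng.normal(0.0, sigma_a, N)
n = rng.normal(0.0, sigma_n, N)
r = a + n

# Cov(r, n) = Cov(a + n, n) = Var(n) = sigma_n^2, not 0:
# knowing r tells you something about n, so you cannot substitute
# a = r - n and treat n as if it were independent of r.
cov_rn = np.cov(r, n)[0, 1]
print(cov_rn)  # should be close to sigma_n^2 = 1.0
```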
As far as I know, any sufficiently "stretched out" prior has the property you describe in question 2, as long as it is strictly positive everywhere.
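To illustrate this for a non-Gaussian prior, here is a numerical sketch using a Laplace prior with scale $b$ (my own example of a strictly positive, stretched-out prior; the values $\sigma_n = 1$, $R = 1.5$ are arbitrary). As $b$ grows, the posterior mean and variance approach the option 1 values $R$ and $\sigma_n^2$:

```python
import numpy as np

sigma_n, R = 1.0, 1.5
A = np.linspace(-30.0, 30.0, 200_001)  # grid over values of a
dA = A[1] - A[0]

results = {}
for b in [1.0, 10.0, 100.0]:
    prior = np.exp(-np.abs(A) / b)                   # unnormalized Laplace(0, b) prior
    lik = np.exp(-0.5 * (R - A) ** 2 / sigma_n**2)   # p(R | a) as a function of a
    post = prior * lik
    post /= post.sum() * dA                          # normalize on the grid
    mean = (A * post).sum() * dA
    var = ((A - mean) ** 2 * post).sum() * dA
    results[b] = (mean, var)
    print(b, mean, var)  # as b grows, mean -> R and var -> sigma_n^2
```

For small $b$ the posterior mean is noticeably shrunk toward the prior mode at $0$; for large $b$ it is nearly $R$ with variance nearly $\sigma_n^2$.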