I need to calculate the MAP estimator of $ x $ in the following case:
$$ \left [ \begin{matrix} {y}_{1}\\ {y}_{2} \end{matrix} \right ] = \left [ \begin{matrix} x\\ x \end{matrix} \right ] + \left [ \begin{matrix} {n}\\ {n}^{2} \end{matrix} \right ] $$
Given the following distributions:
$$ x = \left\{\begin{matrix} 1 \;\;\; w.p. \;\; 0.5 \\ -1 \;\;\; w.p. \;\; 0.5 \end{matrix}\right. \; , \; n \sim Laplace\left ( a=0, b \right ) $$
Where w.p. stands for "With Probability".
The parameters of the Laplace distribution follow the Wikipedia page on the Laplace distribution, and they are known (just treat $ b $ as a known parameter).
Now, are there any tricks to calculate the Maximum Likelihood (and hence the MAP, since the prior on $ x $ is uniform) of something like that?
I couldn't go through with the direct calculation.
Note, though, that by computing $ {y}_{2} - {y}_{1} $ and solving a quadratic equation I can get two possible solutions for $ n $.
Still, I couldn't show that only one of them is the answer for sure (namely, that the event in which both solutions satisfy the equation above has probability 0).
Moreover, if $ {y}_{2} < 0 $ then $ x = -1 $ for sure, since $ {n}^{2} $ is non-negative.
Any assistance with that?
Thank You.
P.S. To explain my solution I'm attaching code (MATLAB):
% MAP with Laplacian Noise
% Model: y1 = x + n, y2 = x + (n ^ 2)
xRx = (2 * (rand(1, 1) > 0.5)) - 1; % Draw x uniformly from {-1, 1}
vYSamples = xRx + (GenerateLaplaceRandSamples(0, 1, 1, 1) .^ [1; 2]); % [y1; y2]
% y2 - y1 = (n ^ 2) - n, so n solves n^2 - n - (y2 - y1) = 0
noiseFunction = vYSamples(2) - vYSamples(1);
vNoiseSol = roots([1, -1, -noiseFunction]); % Two candidate noise values
% Four candidate measurement pairs: {x = -1, x = 1} x {two roots}
xOptionA = -1 + (vNoiseSol(1) .^ [1; 2]);
xOptionB = -1 + (vNoiseSol(2) .^ [1; 2]);
xOptionC = 1 + (vNoiseSol(1) .^ [1; 2]);
xOptionD = 1 + (vNoiseSol(2) .^ [1; 2]);
What I mean is that if I take the solutions of the quadratic equation, I have two options.
For $ x $ I also have 2 options, namely 4 options in total.
I try all of them, and only one of them matches the input $ {y}_{1}, {y}_{2} $.
Yet I can't prove that the event in which more than one option generates the measurements has zero probability.
What am I missing?
Or maybe it could be calculated by definition (calculating the ML function)?
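For what it's worth, here is a runnable Python sketch of the same four-option check (assuming $ b = 1 $ and a simple inverse-CDF Laplace sampler standing in for `GenerateLaplaceRandSamples`). Over many random draws, exactly one of the four candidates reproduces $ ({y}_{1}, {y}_{2}) $, which at least supports the zero-probability claim empirically:

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_sample(rng, b=1.0):
    # Inverse-CDF sampling of Laplace(0, b)
    u = rng.uniform(-0.5, 0.5)
    return -b * np.sign(u) * np.log(1.0 - 2.0 * abs(u))

def count_consistent_options(y1, y2, tol=1e-9):
    # Solve n^2 - n - (y2 - y1) = 0 for the two candidate noise values
    candidate_noises = np.roots([1.0, -1.0, -(y2 - y1)])
    count = 0
    for x in (-1.0, 1.0):
        for n in candidate_noises:
            if abs(np.imag(n)) > tol:
                continue  # complex root: not a valid noise realization
            n = np.real(n)
            # Keep the option only if it reproduces BOTH measurements
            if abs((x + n) - y1) < tol and abs((x + n * n) - y2) < tol:
                count += 1
    return count

counts = []
for _ in range(10000):
    x = 1.0 if rng.random() > 0.5 else -1.0
    n = laplace_sample(rng)
    counts.append(count_consistent_options(x + n, x + n * n))

print(min(counts), max(counts))  # empirically: 1 1
```

The degenerate cases (two options matching at once) would require $ n $ to land exactly on $ -1/2 $ or $ 3/2 $, which has probability zero for a continuous noise distribution.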
Edited: I've written the derivation below, but really this problem is either a dirty trick or is badly defined. Noticing that $(y_1-x)^2+x=y_2 $, and that $x^2=1$ with probability 1, we get
$$x=\frac{y_2-y_1^2-1}{1-2 y_1}$$
with probability 1. The Laplacian is not needed.
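A quick numerical sanity check of this closed form (a Python sketch; $ b = 1 $ is an arbitrary choice, since the formula is noise-distribution-free):

```python
import numpy as np

rng = np.random.default_rng(1)

def laplace_sample(rng, b=1.0):
    # Inverse-CDF sampling of Laplace(0, b)
    u = rng.uniform(-0.5, 0.5)
    return -b * np.sign(u) * np.log(1.0 - 2.0 * abs(u))

for _ in range(2000):
    x = 1.0 if rng.random() > 0.5 else -1.0
    n = laplace_sample(rng)
    y1, y2 = x + n, x + n * n
    # x = (y2 - y1^2 - 1) / (1 - 2*y1), valid whenever y1 != 1/2
    x_hat = (y2 - y1**2 - 1.0) / (1.0 - 2.0 * y1)
    assert abs(x_hat - x) < 1e-6

print("exact recovery verified")
```

The only excluded point is $ y_1 = 1/2 $, which occurs with probability zero.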
We want to maximize $P(x|{\bf y}) \propto P({\bf y}|x)p(x)$ as a function of $x$. As you noticed, ${\bf y}=(y_1,y_2)$ is linked via $y_2 - y_1 = n(n-1) \implies n=1/2\pm\sqrt{1/4+(y_2-y_1)}$. This requires $y_2-y_1\ge -1/4$. Also, in this region the equation $(y_1-x)^2+x=y_2$ has two solutions for $x$.
Inside this region, then, $ P({\bf y}|x)= P(n=y_1-x)=\frac{1}{2b}\exp(-|y_1-x|/b) $.
So, because the prior weighs the two points $ x = \pm1 $ equally, it is enough to evaluate the expression above at those two points and pick the maximum, i.e., $ x_{MAP} = \operatorname{sign}(y_1) $.
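A minimal Python check of this decision rule: evaluating the Laplace likelihood at the two candidate points and taking the argmax picks whichever of $ \pm1 $ is closer to $ y_1 $, i.e. $ \operatorname{sign}(y_1) $ for $ y_1 \neq 0 $ (the choice $ b = 1 $ is arbitrary, since the argmax does not depend on $ b $):

```python
import numpy as np

def laplace_pdf(n, b=1.0):
    # Laplace(0, b) density
    return np.exp(-abs(n) / b) / (2.0 * b)

def x_map(y1, b=1.0):
    # Evaluate the likelihood P(y1 | x) = laplace_pdf(y1 - x) at the
    # two equally weighted candidates x = -1, +1 and pick the argmax
    candidates = (-1.0, 1.0)
    return max(candidates, key=lambda x: laplace_pdf(y1 - x, b))

for y1 in (-2.3, -0.1, 0.4, 1.7):
    assert x_map(y1) == np.sign(y1)
```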