MAP Estimator with a Nonlinear Function of Laplacian Noise


I need to calculate the MAP estimator of $ x $ in the following case:

$$ \left [ \begin{matrix} {y}_{1}\\ {y}_{2} \end{matrix} \right ] = \left [ \begin{matrix} x\\ x \end{matrix} \right ] + \left [ \begin{matrix} {n}\\ {n}^{2} \end{matrix} \right ] $$

Given the following distributions:

$$ x = \left\{\begin{matrix} 1 \;\;\; w.p. \;\; 0.5 \\ -1 \;\;\; w.p. \;\; 0.5 \end{matrix}\right. \; , \; n \sim Laplace\left ( a=0, b \right ) $$

Where w.p. stands for "With Probability".
The parameters of the Laplace distribution follow the Wikipedia page on the Laplace distribution, and they are known (just treat $ b $ as a known parameter).

Now, are there any tricks to calculate the maximum likelihood estimate of something like that?
I couldn't get through the direct calculation.

Note, though, that by computing $ {y}_{2} - {y}_{1} $ and solving a quadratic equation I can get two possible solutions for $ n $.
Still, I couldn't show that only one of them is the answer for sure (namely, that the event that both solutions satisfy the equations above has probability 0).
Moreover, if $ {y}_{2} < 0 $ then $ x = -1 $ for sure, since $ {n}^{2} $ must be non-negative.

Any assistance with that?

Thank You.

P.S. To explain my solution, I am attaching MATLAB code:

% MAP with Laplacian Noise
%   y1 = x + n
%   y2 = x + (n ^ 2)

% Draw x = +/-1 with probability 0.5 each
xRx = (2 * (rand(1, 1) > 0.5)) - 1;

% GenerateLaplaceRandSamples is a custom helper returning Laplace(mu, b)
% samples; here mu = 0, b = 1, and a single sample is raised to the powers [1; 2]
vYSamples = xRx + (GenerateLaplaceRandSamples(0, 1, 1, 1) .^ [1; 2]);

% y2 - y1 = (n ^ 2) - n, so n solves n^2 - n - (y2 - y1) = 0
noiseFunction = vYSamples(2) - vYSamples(1);
vNoiseSol = roots([1, -1, -noiseFunction]);

% Four candidate measurement vectors: x in {-1, 1} times the two noise roots
xOptionA = -1 + (vNoiseSol(1) .^ [1; 2]);
xOptionB = -1 + (vNoiseSol(2) .^ [1; 2]);
xOptionC = 1 + (vNoiseSol(1) .^ [1; 2]);
xOptionD = 1 + (vNoiseSol(2) .^ [1; 2]);
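The helper `GenerateLaplaceRandSamples` is not a MATLAB built-in. As an assumption about what it does, here is a minimal inverse-CDF Laplace sampler sketched in Python with NumPy (function name is mine):

```python
import numpy as np

def laplace_samples(mu, b, size, rng):
    # Inverse-CDF method: for U ~ Uniform(-1/2, 1/2),
    # mu - b * sign(U) * ln(1 - 2|U|) ~ Laplace(mu, b)
    u = rng.uniform(-0.5, 0.5, size=size)
    return mu - b * np.sign(u) * np.log1p(-2.0 * np.abs(u))

rng = np.random.default_rng(0)
samples = laplace_samples(0.0, 1.0, 100_000, rng)
# Laplace(0, b) has mean 0 and variance 2 * b^2
```

(NumPy also ships `rng.laplace(loc, scale)`, which would replace the helper directly.)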

What I mean is that taking the solutions of the quadratic equation gives me two options.
For $ x $ I also have 2 options, so 4 options in total.
I try all of them, and only one of them matches the input $ {y}_{1}, {y}_{2} $.
Yet I can't prove that the event that more than one option generates the measurements has zero probability.
What am I missing?
Or maybe it could be calculated by definition (computing the likelihood function).
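The same four-candidate check can be sketched in Python with NumPy (a sketch, not the original MATLAB). A candidate $(x, n_c)$ reproduces both measurements exactly when $y_1 - n_c \in \{-1, 1\}$; since the two roots sum to 1, the wrong root matches only when the true $n$ hits one of finitely many special values, an event of probability 0 under the continuous Laplace density.

```python
import numpy as np

rng = np.random.default_rng(0)
b = 1.0  # known Laplace scale

# Simulate one measurement pair
x_true = rng.choice([-1, 1])
n = rng.laplace(0.0, b)
y1, y2 = x_true + n, x_true + n ** 2

# y2 - y1 = n^2 - n, so the candidate noises solve n^2 - n - (y2 - y1) = 0
n_candidates = np.roots([1.0, -1.0, -(y2 - y1)])

# Four (x, n) candidates; keep those reproducing both y1 and y2
matches = [(x, nc) for x in (-1, 1) for nc in n_candidates
           if np.isclose(x + nc, y1) and np.isclose(x + nc ** 2, y2)]
```

Almost surely `matches` contains exactly the true pair.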


Accepted answer:

Edit: I've written the derivation below, but really this problem is either a dirty trick or is too badly defined. Noticing that $(y_1-x)^2+x=y_2 $, and that $x^2=1$ with probability 1, we get

$$x=\frac{y_2-y_1^2-1}{1-2 y_1}$$

with probability 1 (the denominator vanishes only when $y_1 = 1/2$, an event of probability 0). The Laplace distribution is not needed.
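The closed form is an algebraic identity, so it can be checked numerically. A Python/NumPy sketch (the scale $b$ is arbitrary here because the formula never uses it):

```python
import numpy as np

rng = np.random.default_rng(1)
b = 2.0  # any known scale; the closed form does not depend on it

for _ in range(1000):
    x = rng.choice([-1, 1])
    n = rng.laplace(0.0, b)
    y1, y2 = x + n, x + n ** 2
    # From (y1 - x)^2 + x = y2 and x^2 = 1 (valid whenever y1 != 1/2)
    x_hat = (y2 - y1 ** 2 - 1.0) / (1.0 - 2.0 * y1)
    assert np.isclose(x_hat, x)
```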


We want to maximize $P(x|{\bf y}) \propto P({\bf y}|x)p(x)$ as a function of $x$. As you noticed, ${\bf y}=(y_1,y_2)$ is linked via $y_2 - y_1 = n(n-1) \implies n=1/2\pm\sqrt{1/4+(y_2-y_1)}$. This requires $y_2-y_1\ge -1/4$. Also, in this region the equation $(y_1-x)^2+x=y_2$ has two solutions for $x$.

Inside this region, $ P({\bf y}\mid x)= p_n(y_1-x)=\frac{1}{2b}\exp(-|y_1-x|/b) $.

So, because the prior weights the two points $ x = \pm1 $ equally, it is enough to evaluate the previous expression at those two points and pick the maximum, i.e., $ x_{MAP} = \operatorname{sign}(y_1) $.
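The comparison at $ x = \pm 1 $ can be sketched in plain Python (function name is mine): the scale $b$ cancels, leaving $|y_1 - 1|$ versus $|y_1 + 1|$, i.e. the sign of $y_1$.

```python
def x_map(y1, b=1.0):
    # Log Laplace likelihood of n = y1 - x, evaluated at the two prior atoms
    log_like = {x: -abs(y1 - x) / b for x in (-1, 1)}
    return max(log_like, key=log_like.get)

assert x_map(0.7) == 1    # |0.7 - 1| < |0.7 + 1|
assert x_map(-0.2) == -1  # |-0.2 + 1| < |-0.2 - 1|
```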

Second answer:

This is just a binary hypothesis test with uniform costs under both hypotheses and prior $\pi_{-1}=\pi_1=\frac{1}{2}$. In this case the rule is known as a minimum-probability-of-error rule, an ML rule (since the prior is uniform), or a MAP rule. (The link doesn't talk about costs, but you can find that formulation in many books, such as H. V. Poor's An Introduction to Signal Detection and Estimation, in the section on Bayesian hypothesis testing.)

$H_1 : Y \sim p_1$

$H_{-1} : Y \sim p_{-1}$

Let $p_1$ be the distribution of the vector $y$ when $x=1$ and $p_{-1}$ its distribution when $x=-1$. Then you estimate $x=1$ if $\frac{p_1(y)}{p_{-1}(y)} \geq 1$ and $x=-1$ if $\frac{p_1(y)}{p_{-1}(y)} < 1$ (the equality can be placed either way; it occurs on a set of probability measure 0 under both hypotheses).
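A minimal sketch of that likelihood-ratio rule in Python, assuming (as in the accepted answer's density expression) that the data enter only through the Laplace density of $n = y_1 - x$; names are mine:

```python
import math

def laplace_pdf(t, b=1.0):
    # Laplace(0, b) density
    return math.exp(-abs(t) / b) / (2.0 * b)

def decide(y1, b=1.0):
    # Uniform prior: estimate x = 1 iff the likelihood ratio is >= 1
    ratio = laplace_pdf(y1 - 1.0, b) / laplace_pdf(y1 + 1.0, b)
    return 1 if ratio >= 1.0 else -1
```

With uniform priors the threshold is 1; a non-uniform prior $\pi_1, \pi_{-1}$ would move it to $\pi_{-1}/\pi_1$.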