Unique solution to minimization of a non-linear objective function


I am trying to estimate the path of a random walk described by the following state-space model (SSM):
\begin{align}
x_{t+1} &= x_{t} + q_{t+1} \newline
y_{t+1} &= h(x_{t+1}) + r_{t+1}
\end{align}
where $h(x_{t+1}) = \sqrt{(x_{t+1}(1) - A(1))^2 + (x_{t+1}(2) - A(2))^2}$, $q_{t+1} \sim \mathcal{N}(0,Q)$ and $r_{t+1} \sim \mathcal{N}(0,R)$, with $Q$ and $R$ both equal to $0.1\,\mathcal{I}$.

We generate data for $L$ steps through the SSM above and then try to estimate $x_{t}$ at each time step by minimizing the following $l_p$-norm cost function via gradient descent:
\begin{align}
\lVert y_{t} - h(x_{t}) \rVert_{p}
\end{align}

However, minimizing this $l_{p}$ norm does not lead to the latent $x_{t}$ we were hoping to recover. In addition, we tried several regularizers, such as $l_1$- and $l_2$-norm penalties. We have observed that the cost function does decrease as gradient descent iterates, but the $x_t$ we ultimately reach is not the correct one. We suspect that we get stuck in a local minimum close to the initial guess at first, and then close to each subsequent estimate used to warm-start the next step.

Is there some form of regularization, or some alteration to our objective function, that would lead to a unique solution and recover the correct $x_{t}$?

We tried this exact procedure on a more minimalistic toy problem with $h(x_{t+1}) = \sqrt{(x_{t+1} - A)^2}$ and $p = 2$. There we recover the exact value of $x_{t}$ until $x_{t}$ gets very close to $A$, at which point the estimate becomes mirrored around $A$. Is there a way to direct the optimization towards the $x_{t}$ from which the incoming measurements $y_{t}$ were actually generated through the state-space model above?
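For concreteness, here is a minimal sketch of the 1-D toy problem in Python/NumPy. The anchor position `A = 5.0`, the learning rate, and the iteration count are my own assumed values (only the noise variances $Q = R = 0.1$ come from the description above); it reproduces the warm-started gradient descent on the $p = 2$ cost and makes the sign ambiguity visible, since $|x - A| = |(2A - x) - A|$ means the residual alone cannot distinguish $x$ from its mirror image $2A - x$:

```python
import numpy as np

rng = np.random.default_rng(0)

A = 5.0      # anchor position (assumed value, not from the post)
Q = R = 0.1  # process / measurement noise variances from the post
L = 50       # number of simulated steps

# Simulate the 1-D SSM: x_{t+1} = x_t + q_{t+1},  y_t = |x_t - A| + r_t
x_true = np.cumsum(rng.normal(0.0, np.sqrt(Q), L))
y = np.abs(x_true - A) + rng.normal(0.0, np.sqrt(R), L)

def estimate(y_t, x0, lr=0.1, iters=200):
    """Gradient descent on the p = 2 cost (y_t - |x - A|)^2."""
    x = x0
    for _ in range(iters):
        h = abs(x - A)
        if h == 0.0:          # gradient of |x - A| is undefined at the anchor
            break
        grad = -2.0 * (y_t - h) * np.sign(x - A)
        x -= lr * grad
    return x

# Warm-start each step from the previous estimate, as described above
x_hat = np.empty(L)
prev = 0.0
for t in range(L):
    x_hat[t] = estimate(y[t], prev)
    prev = x_hat[t]

# The descent drives the residual to ~0, i.e. |x_hat - A| ≈ y, but both
# x_hat and its mirror 2*A - x_hat achieve exactly the same cost.
```

When the true state is far from $A$, the warm start keeps the iterates on the correct branch; once $x_t$ approaches $A$, a single noisy measurement can push the estimate across the anchor, after which the descent converges to the mirrored solution.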