Error propagation and Gradient Descent


I was reading about error propagation (propagation of uncertainty on Wikipedia: https://en.wikipedia.org/wiki/Propagation_of_uncertainty).

My primary concern is estimating the error of the estimate $x$ obtained from a measurement $f$ of a nonlinear equation, say $f=g(x)$. I solve for $x$ iteratively using gradient descent, minimizing the loss function $\mathcal{L}=\|f-g(x)\|^2$:

$x^{t+1}=x^{t}-\lambda \nabla \mathcal{L}$
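For concreteness, here is a minimal sketch of the solver I have in mind (scalar case; the exponential $g$, the step size, and the iteration count are placeholders, not my actual model):

```python
import numpy as np

def g(x):
    return np.exp(x)  # placeholder nonlinearity

def g_prime(x):
    return np.exp(x)  # its derivative

def solve(f, x0=0.0, lam=0.01, n_iter=5000):
    """Minimize L = (f - g(x))^2 by gradient descent."""
    x = x0
    for _ in range(n_iter):
        # dL/dx = -2 (f - g(x)) g'(x)
        grad = -2.0 * (f - g(x)) * g_prime(x)
        x = x - lam * grad
    return x
```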

Now I am trying to work out how to express the error of $x$ in terms of the error of $f$, in some form like:

$\Delta x^* = h(\Delta f)$

But I am not sure how to do this, since there is no closed-form inverse relation as in linear regression (which I am able to handle). One guess is $\Delta x = \sqrt{(\partial f / \partial x)^{-1} \Delta f}$, but I am not sure this is correct. Even if it is, $\partial f / \partial x$ is itself a function of $x$, which makes things complicated.
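If I linearize $g$ around the solution $x^*$, the first-order rule from the Wikipedia page, $\Delta f \approx |g'(x^*)|\,\Delta x$, would suggest $\Delta x \approx \Delta f / |g'(x^*)|$ (no square root). A quick Monte Carlo sanity check of that linearization, again with a placeholder invertible $g$ so the re-solve is exact:

```python
import numpy as np

rng = np.random.default_rng(0)

def g(x):
    return np.exp(x)  # placeholder invertible nonlinearity

x_true = 1.0     # assumed true parameter
sigma_f = 0.05   # assumed measurement error on f

# First-order propagation: Delta f ≈ g'(x) Delta x  =>  sigma_x ≈ sigma_f / |g'(x)|
sigma_x_lin = sigma_f / abs(np.exp(x_true))

# Monte Carlo check: perturb f and re-solve for x
# (g is invertible here, so the solve is just log)
f_samples = g(x_true) + sigma_f * rng.standard_normal(100_000)
x_samples = np.log(f_samples)
sigma_x_mc = x_samples.std()
```

In this toy setup the two estimates agree to within a couple of percent, but I don't know whether the linearization is justified for my actual $g$.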

After that, I want to extend the formalism to a general case:

$f=g(x)+g(y)$

and I want to know the errors of both $x$ and $y$. I tried a calculation similar to the one above, but it also becomes a mess. Any ideas would be helpful.
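The furthest I got is the linearized version for the vector case, assuming $f$ is measured at several points $t_i$ (a placeholder model) and using the standard nonlinear-least-squares covariance $\sigma_f^2 (J^\top J)^{-1}$, where $J$ is the Jacobian of the model at the fit:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 20)  # assumed measurement points

def model(theta):
    x, y = theta
    return np.exp(x * t) + np.exp(y * t)  # placeholder for g(x) + g(y)

def jacobian(theta):
    x, y = theta
    # Columns: d(model)/dx and d(model)/dy
    return np.column_stack([t * np.exp(x * t), t * np.exp(y * t)])

theta_star = np.array([0.5, -0.3])  # assumed fitted parameters
sigma_f = 0.01                      # assumed per-point error on f

# First-order: Delta theta ≈ (J^T J)^{-1} J^T Delta f, so with
# independent errors of size sigma_f on each component of f:
J = jacobian(theta_star)
cov_theta = sigma_f**2 * np.linalg.inv(J.T @ J)
dx, dy = np.sqrt(np.diag(cov_theta))
```

One thing I notice: since both terms use the same $g$, the two columns of $J$ become nearly collinear as $x \to y$, and $(J^\top J)^{-1}$ blows up, which may be part of why my calculation turns into a mess.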