Gauss-Newton method, where did sigma inverse come from?


I'm studying the Gauss-Newton method from "slambook-en", chapter 5 on optimization (the book is made freely available online by the author, in case you need to see it). I've attached a picture of the example the author uses to illustrate the method. My question is about the sudden appearance of the sigma inverse in the final formulation. I understand how the general Gauss-Newton method approximates the Hessian matrix, but I'm struggling to see where the sigma inverse comes from. It is (or at least I think it is) related to the $w$ on the attached page 1, which is used to model the Gaussian noise, but that is the extent of my understanding. Any help or insight would be greatly appreciated!

[Attached images: page 1 and subsequent pages of the book's example; not reproduced here]


1 Answer


Yes, there is no point in including the $(\sigma^2)^{-1}$ when $\sigma^2$ is the same for all data points: it is a constant factor that rescales the objective without changing the minimizer.

However, it would make sense, and would constitute weighted least squares, if $\sigma^2$ were not the same across all data points ("equations"), i.e., $\sigma^2_i$, with the formulas then involving $(\sigma^2_i)^{-1}$.
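Concretely, if each measurement has independent noise $w_i \sim \mathcal{N}(0, \sigma_i^2)$, minimizing the negative log-likelihood gives the weighted objective
$$\min_\theta \sum_i \frac{1}{\sigma_i^2}\, r_i(\theta)^2, \qquad r_i(\theta) = y_i - f(x_i, \theta),$$
and linearizing $r_i(\theta + \Delta\theta) \approx r_i + J_i \Delta\theta$ yields the weighted Gauss-Newton normal equations
$$\Big(\sum_i \sigma_i^{-2}\, J_i^\top J_i\Big)\, \Delta\theta = -\sum_i \sigma_i^{-2}\, J_i^\top r_i,$$
which is where the $\sigma^{-2}$ factor enters. When all $\sigma_i$ are equal, it cancels from both sides, which is why it is pointless in the constant-noise case.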

More generally, $\Sigma^{-1}$ would be used (generalized least squares) if the covariance matrix of the data points were $\Sigma$. Using $(\sigma^2_i)^{-1}$ corresponds to the special case in which the covariance matrix, and hence its inverse, is diagonal.
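To make the weighting concrete, here is a minimal NumPy sketch of weighted Gauss-Newton for a model of the same form as the book's curve-fitting example, $y = \exp(ax^2 + bx + c) + w$ with per-point noise $w_i \sim \mathcal{N}(0, \sigma_i^2)$. The data generation, initial guess, and iteration limits are my own choices for illustration, not the book's code:

```python
import numpy as np

# Model: y = exp(a*x^2 + b*x + c) + w,  w_i ~ N(0, sigma_i^2)
# with a DIFFERENT sigma_i per point, so the (sigma_i^2)^{-1} weights matter.
rng = np.random.default_rng(0)
a_true, b_true, c_true = 1.0, 2.0, 1.0
N = 100
x = rng.uniform(0.0, 1.0, N)
sigma = rng.uniform(0.1, 0.4, N)            # per-point noise std (non-uniform)
y = np.exp(a_true * x**2 + b_true * x + c_true) + rng.normal(0.0, sigma)

theta = np.array([2.0, -1.0, 5.0])          # initial guess for (a, b, c)
W = np.diag(1.0 / sigma**2)                 # Sigma^{-1}: diagonal covariance case

for _ in range(50):
    f = np.exp(theta[0] * x**2 + theta[1] * x + theta[2])
    r = y - f                               # residuals
    # Jacobian of r w.r.t. (a, b, c): dr/da = -x^2*f, dr/db = -x*f, dr/dc = -f
    J = np.column_stack([-x**2 * f, -x * f, -f])
    # Weighted normal equations: (J^T Sigma^{-1} J) dtheta = -J^T Sigma^{-1} r
    H = J.T @ W @ J
    g = -J.T @ W @ r
    dtheta = np.linalg.solve(H, g)
    theta = theta + dtheta
    if np.linalg.norm(dtheta) < 1e-10:
        break

print(theta)   # estimated (a, b, c), close to the true (1, 2, 1)
```

If all `sigma` entries were equal, `W` would be a scalar multiple of the identity and would cancel out of `np.linalg.solve(H, g)` entirely, recovering the unweighted Gauss-Newton step.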