What is the purpose of the Jacobian in iterative error minimisation?


I'm not great at maths so go easy.

I'm following this video on how to solve a problem in which repeated guesses are made at the inputs $x_1,\dots,x_6$ of a function $f$, and the output vector $y = f(x_1,\dots,x_6)$ is used to determine the next iteration. The desired output $y_0$ is known.

To determine the next guess, a numerical Jacobian $J$ is computed by finite differences: its $i$-th column is $\big(f(x_1,\dots,x_i+\epsilon,\dots,x_6) - f(x_1,\dots,x_6)\big)/\epsilon$. With $dy = y_0 - y$, the step $dx = J^+\,dy$ gives the next iteration of guesses: $x_{n+1} = x_n + dx$.

Here $dx = J^+\,dy$ gives the value of $dx$ that is the least-squares solution of $J\,dx = dy$.
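To make sure I've understood the procedure, here is a minimal NumPy sketch of it (the function names and the test function are my own, not from the video; the video may use a different perturbation size or stopping rule):

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    """Forward-difference Jacobian: column i is (f(x + eps*e_i) - f(x)) / eps."""
    y = f(x)
    J = np.zeros((y.size, x.size))
    for i in range(x.size):
        x_pert = x.copy()
        x_pert[i] += eps
        J[:, i] = (f(x_pert) - y) / eps
    return J

def solve_iteratively(f, y0, x, n_iter=50):
    """Repeatedly take the least-squares step dx = J^+ (y0 - f(x))."""
    for _ in range(n_iter):
        J = numerical_jacobian(f, x)
        dy = y0 - f(x)                 # how far the output is from the target
        dx = np.linalg.pinv(J) @ dy    # least-squares solution of J dx = dy
        x = x + dx
    return x

# Toy 2-input example (my own, for illustration): find x with f(x) = y0.
f = lambda x: np.array([x[0]**2 + x[1], x[0] - x[1]])
y0 = np.array([7.0, -1.0])            # satisfied by x = [2, 3]
x_hat = solve_iteratively(f, y0, np.array([1.0, 1.0]))
```

Starting from the guess `[1, 1]`, the iteration homes in on a point whose output matches `y0`, which is the behaviour the video describes.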

So my confusion is about the Jacobian and why it's used here. Why does finding the $dx$ that minimises the least-squares residual of $J\,dx = dy$ give guesses that incrementally move $x$ towards the correct answer?

Also, would it be more accurate if those '$d$'s were written as partial derivatives?

Thanks, I'm trying to understand this so I can give an explanation of it to other engineers.