Optimizing a function using its first-order expansion


I want to solve the following problem: $$\min_{x\in \mathbb{R}^n}\|f(x)-y\|^2,$$ where $n \in \mathbb{N}$, $f:\mathbb{R}^n \to \mathbb{R}^n$, and $y\in \mathbb{R}^n$. In principle this can be done with Newton's method applied to the gradient of $\|f(x)-y\|^2$, starting from an initial point $x_0$. However, evaluating $f$ is expensive (the relation between $x$ and $f(x)$ is not explicit), so I use the following steps instead:

  • I linearize $f$ around $x_0$: $f(x) \approx f(x_0) + J_f(x_0)(x-x_0)$, where $J_f$ denotes the Jacobian of $f$, and I solve the problem: $$\min_{x\in \mathbb{R}^n}\|f(x_0) + J_f(x_0)(x-x_0)-y\|^2,$$ which gives $x_1 \in \mathbb{R}^n$.
  • $\cdots$
  • I linearize $f$ around $x_i$: $f(x) \approx f(x_i) + J_f(x_i)(x-x_i)$, and I solve the problem: $$\min_{x\in \mathbb{R}^n}\|f(x_i) + J_f(x_i)(x-x_i)-y\|^2,$$ which gives $x_{i+1} \in \mathbb{R}^n$.
  • $\cdots$
  • The iteration stops when a stopping criterion on the last iterate $x^*$ is satisfied, or when a maximum number of iterations is reached.
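For concreteness, the iteration above is essentially the Gauss-Newton method. Here is a minimal sketch of it in Python, under the assumption that $f$ is only available as a black-box function (the finite-difference Jacobian and the test function are my own illustrative choices, not part of the original question):

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    """Forward-difference approximation of the Jacobian J_f(x)."""
    fx = np.asarray(f(x), dtype=float)
    J = np.zeros((fx.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (np.asarray(f(xp), dtype=float) - fx) / eps
    return J

def gauss_newton(f, x0, y, tol=1e-8, max_iter=50):
    """Iterate x_{i+1} = argmin_x ||f(x_i) + J_f(x_i)(x - x_i) - y||^2."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = np.asarray(f(x), dtype=float) - y  # current residual f(x_i) - y
        J = numerical_jacobian(f, x)
        # The linearized subproblem min ||r + J dx||^2 is a linear
        # least-squares problem in the step dx = x - x_i.
        dx, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + dx
        if np.linalg.norm(dx) < tol:  # stopping criterion on the step
            break
    return x

# Illustrative black-box f with a known solution: f(x) = (x_0^2, x_1^2 + x_0).
f = lambda x: np.array([x[0]**2, x[1]**2 + x[0]])
y = np.array([4.0, 5.0])
x_star = gauss_newton(f, np.array([1.0, 1.0]), y)
```

Note that when the residual at the solution is zero (i.e. $y$ is in the range of $f$), Gauss-Newton typically converges fast near the solution, but in general it only finds a stationary point of $\|f(x)-y\|^2$, which need not be a global minimizer.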

Is the $x^*$ produced by this procedure a solution of the original problem? If so, how can that be proved?

Thanks in advance for your help.