Convergence of iterative solution of nonlinear system


I need to solve the following system of nonlinear equations $$A(x)x = B\, , \hspace{12mm} (1)$$ where $x, B \in \mathbb{R}^n$. Here $A$ is an $n \times n$ tridiagonal matrix with $x$-dependent diagonal elements, $$ A_{ii}(x) = \frac{c_i}{\|x\|} + d_i \hspace{8mm} \mathrm{with} \hspace{3mm} c_i, \hspace{0.5mm} d_i \in \mathbb{R} \hspace{8mm} \mathrm{and} \hspace{8mm} \|x\| =(x_1^2 + \cdots + x_n^2)^{1/2} \, ,$$ but whose other elements do not depend on $x$.

To this end I devised the following method. Pick a random $x_0 \in \mathbb{R}^n$, and iteratively solve $$A(x_j)x_{j+1} = B \, , \hspace{12mm} (2)$$ for $x_{j+1}$ until $\| x_{j+1} - x_j \|$ is below an acceptable threshold. Letting $x$ denote an exact solution to eq. (1) above, and defining error vectors as $x_j = x + \epsilon_j$, one observes roughly quadratic convergence $\| \epsilon_{j+1} \| \sim \mathcal{O}(\| \epsilon_j \|^2)$ in numerical experiments. However, plugging $x_j = x + \epsilon_j$ and $x_{j+1} = x + \epsilon_{j+1}$ into eq. (2) above and Taylor expanding the entries on the diagonal of $A$, one finds $$\epsilon_{j+1} = \frac{\epsilon_j \cdot x}{\| x \|^3} A(x_j)^{-1} \mathrm{diag}(c_1, \ldots, c_n) x \, .$$ Since this relation is linear in $\epsilon_j$, it does not readily explain the observed quadratic convergence. In practice, the matrix $A(x_j)^{-1} \mathrm{diag}(c_1, \ldots, c_n)$ has numerically small entries as computed through the iterations, but it is not obvious (at least to me) why this should be the case. Convergence is almost as fast for non-tridiagonal matrices $A(x)$. (Incidentally, system (1) arises from a finite-difference discretization of a second-order nonlinear ODE.)
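For concreteness, here is a minimal NumPy sketch of the iteration in eq. (2) on a toy instance of this $A(x)$. The helper names (`make_A`, `picard_solve`) and the specific random constants are illustrative choices, not from my actual problem; the off-diagonals and $d_i$ are picked so that $A(x)$ stays diagonally dominant and the iteration converges.

```python
import numpy as np

def make_A(x, c, d, off):
    """Tridiagonal A(x): diagonal entries c_i/||x|| + d_i, fixed off-diagonals."""
    A = np.diag(c / np.linalg.norm(x) + d)
    return A + np.diag(off, 1) + np.diag(off, -1)

def picard_solve(B, c, d, off, x0, tol=1e-12, max_iter=200):
    """Iterate A(x_j) x_{j+1} = B until successive iterates agree to tol."""
    x = x0
    for _ in range(max_iter):
        x_new = np.linalg.solve(make_A(x, c, d, off), B)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

rng = np.random.default_rng(0)
n = 5
c = rng.uniform(0.1, 0.5, n)         # coefficients of the 1/||x|| terms
d = rng.uniform(3.0, 4.0, n)         # keeps A(x) diagonally dominant
off = rng.uniform(-0.2, 0.2, n - 1)  # fixed sub-/super-diagonal entries
B = rng.uniform(-1.0, 1.0, n)

x = picard_solve(B, c, d, off, rng.standard_normal(n))
residual = np.linalg.norm(make_A(x, c, d, off) @ x - B)
```

At convergence, `x` satisfies eq. (1) up to the stopping tolerance, which the residual $\|A(x)x - B\|$ confirms.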

My questions are now:

  1. Can anyone on this forum explain the experimentally observed quadratic convergence $\| \epsilon_{j+1} \| \sim \mathcal{O}(\| \epsilon_j \|^2)$?
  2. Is the approach in eq. (2) a standard method, and does it have a name?

So, you are running a fixed-point iteration of the form $x^{(j+1)} = G(x^{(j)})$, with $G(x) = (A(x))^{-1}B$. Assuming $z$ is the fixed point, Taylor expansion gives you $$ G(x^{(j)}) = G(z) + J_G(z)(x^{(j)}-z) + O(\|x^{(j)}-z\|^2) $$ or, $$ x^{(j+1)}-z = J_G(z)(x^{(j)}-z) + O(\|x^{(j)}-z\|^2). $$

If you are observing quadratic convergence, this must be explained by having $J_G(z)=0$. Checking this equality would do the trick.


$J_G$ stands for the Jacobian matrix of $G$.
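One way to carry out this check numerically, sketched on a toy instance of the question's $A(x)$ (the helper names and random constants are illustrative assumptions): differentiating $G(x) = A(x)^{-1}B$ and using the question's own expansion gives the prediction $J_G(z) = A(z)^{-1}\,\mathrm{diag}(c_1,\ldots,c_n)\, z z^T / \|z\|^3$, a rank-one matrix. Comparing it against a central-difference Jacobian at the computed fixed point tests both this formula and the answer's criterion.

```python
import numpy as np

def make_A(x, c, d, off):
    """Tridiagonal A(x) with diagonal c_i/||x|| + d_i (setup from the question)."""
    A = np.diag(c / np.linalg.norm(x) + d)
    return A + np.diag(off, 1) + np.diag(off, -1)

def G(x, B, c, d, off):
    """One fixed-point step: solve A(x) x' = B for x'."""
    return np.linalg.solve(make_A(x, c, d, off), B)

rng = np.random.default_rng(1)
n = 4
c = rng.uniform(0.1, 0.5, n)
d = rng.uniform(3.0, 4.0, n)
off = rng.uniform(-0.2, 0.2, n - 1)
B = rng.uniform(-1.0, 1.0, n)

# Find the fixed point z by running the iteration to convergence.
z = rng.standard_normal(n)
for _ in range(200):
    z = G(z, B, c, d, off)

# Central-difference Jacobian of G at z.
h = 1e-7
J = np.empty((n, n))
for k in range(n):
    e = np.zeros(n)
    e[k] = h
    J[:, k] = (G(z + e, B, c, d, off) - G(z - e, B, c, d, off)) / (2 * h)

# Analytic prediction from the question's expansion:
# J_G(z) = A(z)^{-1} diag(c) z z^T / ||z||^3   (rank one).
v = np.linalg.solve(make_A(z, c, d, off), c * z)  # c * z == diag(c) @ z
J_pred = np.outer(v, z) / np.linalg.norm(z) ** 3

# Spectral radius of J_G(z): the asymptotic linear convergence rate.
rate = np.max(np.abs(np.linalg.eigvals(J)))
```

In this toy setup $J_G(z)$ comes out small but not exactly zero, so strictly the convergence is linear with a small rate; a spectral radius well below one would already make the iteration look very fast in practice.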