How to Find the Matrix in the Simple Iteration Method for Nonlinear Systems


I'm trying to write a C++ program to find the solutions of a nonlinear system using the simple iteration method described on page 120 here. It says: Given a system of nonlinear equations

$$\left\{\begin{array}{l} f_{1}\left(x_{1}, \ldots, x_{m}\right)=0 \\ f_{2}\left(x_{1}, \ldots, x_{m}\right)=0 \\ \vdots \\ f_{m}\left(x_{1}, \ldots, x_{m}\right)=0 \end{array}\right.$$

If we let $$\mathbf{F}=\left(\begin{array}{c} f_{1}(\mathbf{x}) \\ f_{2}(\mathbf{x}) \\ \vdots \\ f_{m}(\mathbf{x}) \end{array}\right): \mathbb{R}^{m} \rightarrow \mathbb{R}^{m}$$

Then we can rewrite the system as $\mathbf{F}(\mathbf{x}) = \mathbf{0}$ and recast it in fixed-point form $\mathbf{x} = \mathbf{G}(\mathbf{x})$, where $\mathbf{G}: \mathbb{R}^m \to \mathbb{R}^m$.

A solution $\boldsymbol{\alpha}$ with $\boldsymbol{\alpha}=\mathbf{G}(\boldsymbol{\alpha})$ is called a fixed point of $\mathbf{G}$. Example: $\mathbf{F}(\mathbf{x})=\mathbf{0}$, $\mathbf{x}=\mathbf{x}-A \mathbf{F}(\mathbf{x})=\mathbf{G}(\mathbf{x})$ for some nonsingular matrix $A \in \mathbb{R}^{m \times m}$.

Iteration: starting from an initial guess $\mathbf{x}_0$, $$ \mathbf{x}_{n+1}=\mathbf{G}\left(\mathbf{x}_{n}\right), \quad n=0,1,2, \ldots $$

An example is given on page 135.

Solve $\left\{\begin{array}{l}f_{1} \equiv 3 x_{1}^{2}+4 x_{2}^{2}-1=0 \\ f_{2} \equiv x_{2}^{3}-8 x_{1}^{3}-1=0\end{array}, \text { for } \boldsymbol{\alpha} \text { near }\left(x_{1}, x_{2}\right)=(-0.5, 0.25)\right.$

The iterative solution given is $$ \left[\begin{array}{c} x_{1, n+1} \\ x_{2, n+1} \end{array}\right]=\left[\begin{array}{c} x_{1, n} \\ x_{2, n} \end{array}\right]-\left[\begin{array}{cc} .016 & -.17 \\ .52 & -.26 \end{array}\right]\left[\begin{array}{c} 3 x_{1, n}^{2}+4 x_{2, n}^{2}-1 \\ x_{2, n}^{3}-8 x_{1, n}^{3}-1 \end{array}\right] $$

The notes don't explain how to find the matrix A. How can I find the matrix A?


There are 2 answers below.

Accepted answer:

They are using the inverse of the Jacobian of the system. We have

$$\mathbf{F}'(x_1, x_2) = \begin{bmatrix} \dfrac{\partial f_1}{\partial x_1} & \dfrac{\partial f_1}{\partial x_2} \\ \dfrac{\partial f_2}{\partial x_1} & \dfrac{\partial f_2}{\partial x_2} \end{bmatrix} = \begin{bmatrix} 6 x_1 & 8 x_2 \\ -24 x_1^2 & 3 x_2^2 \end{bmatrix}$$

Using the given starting point $(x_{1,0}, x_{2,0}) = (-0.5, 0.25)$, we have

$$\mathbf{F}'(-0.5, 0.25) = \begin{bmatrix} -3 & 2 \\ -6 & 0.1875 \end{bmatrix} \implies A = \left[\mathbf{F}'(-0.5, 0.25)\right]^{-1} = \begin{bmatrix} 0.0163934 & -0.174863 \\ 0.52459 & -0.262295 \end{bmatrix}$$

Second answer:

Check Quasi-Newton methods for more efficient techniques. Computing the full Jacobian (and solving the resulting linear system at every step) is expensive. Quasi-Newton methods build cheap approximations to the inverse of the Jacobian from previous function values, somewhat along the lines of the secant method in 1D, where it is the method of choice unless derivatives are very cheap or are computed as a byproduct of evaluating the function (as with Horner's rule for polynomials).