What numerical method can be recommended to solve an exponential system of equations with two unknowns?


Every time I work with chemical kinetics, I end up having to deal with a system of two equations in which the unknowns appear in the exponents, as follows:

Let's say:

$$\cases{4^x+7^y=53\\ 3^x+8^y=67}$$

Typically when I have a system of linear equations such as

$$\cases{ax+by=c\\ a_1x+b_1y=c_1}$$

I could use Gauss–Jordan elimination or any of the other matrix-based methods.

However, when the unknowns appear in the exponents, the system cannot be solved with the usual methods covered in algebra.

Therefore, what numerical method would be recommended to solve this with a good degree of accuracy?

The kind of method I'm looking for is one that doesn't require many iterations and can easily be applied each time such a situation (two unknowns, as in this context) arises.

I would appreciate an answer that uses my example and is as detailed as possible.

Can Maple be used on this with Newton–Raphson?

Since I use Maple, my hope is that an answer could indicate a short procedure or algorithm that can be entered in Maple to solve this kind of situation, other than using the command solve.


There are 3 answers below.


The vector version of Newton's method goes like this: suppose you have $n$ equations in $n$ unknowns $x_1, x_2, \dots, x_n$, namely $$f_i(x_1, x_2, \dots, x_n)=0$$ for $i=1, 2, \dots, n$. Then the linear approximation to each equation is $$\begin{align}f_i(r_1, r_2, \dots, r_n)&=f_i((r_1-x_1)+x_1,(r_2-x_2)+x_2,\dots,(r_n-x_n)+x_n)\\ &\approx f_i(x_1, x_2, \dots, x_n)+\sum_{j=1}^n(r_j-x_j)\frac{\partial}{\partial x_j}f_i(x_1, x_2, \dots, x_n)=0\end{align}$$ where $x_1=r_1,x_2=r_2,\dots,x_n=r_n$ is the true solution of the system of equations. If we construct the vector $\vec b$ with $b_i=f_i(x_1, x_2, \dots, x_n)$ and the matrix $M$ with $M_{ij}=\frac{\partial}{\partial x_j}f_i(x_1, x_2, \dots, x_n)$, then we can solve $$M(\vec x-\vec r)=M\vec y=\vec b$$ for $\vec y$. Our next approximation to the solution is then $x_j\leftarrow x_j-y_j$. Here is a Matlab example for your posted problem:

% kinetics.m
% Newton's method for the system 4^x + 7^y = 53, 3^x + 8^y = 67

x = [0.5;    % initial guess for x
    2.5];    % initial guess for y
err = 1;
while norm(err) > 1.0e-12
    % residual vector f(x)
    func = [4^x(1)+7^x(2)-53;
        3^x(1)+8^x(2)-67];
    % Jacobian, using d/dx a^x = a^x*log(a)
    jacobian = [4^x(1)*log(4) 7^x(2)*log(7);
        3^x(1)*log(3) 8^x(2)*log(8)];
    err = jacobian\func;   % Newton step: solve J*err = f
    fprintf('Error = %e\n', norm(err));
    x = x-err;
end
fprintf('Solution is x = %.16f, y = %.16f\n', x);

(Note the `norm(err)` in the loop condition: the original `abs(err) > 1.0e-12` applied to a vector makes `while` exit as soon as any single component drops below the tolerance, which is not what we want.)

Output is:

Error = 7.745216e-01
Error = 3.381383e+00
Error = 6.486683e-01
Error = 5.944094e-01
Error = 4.931151e-01
Error = 3.146575e-01
Error = 1.086841e-01
Error = 1.055246e-02
Error = 9.001149e-05
Error = 6.479704e-09
Error = 2.232400e-15
Solution is x = 1.0000000000000007, y = 2.0000000000000000

It took a while to get going, but once it gets close Newton's method is generally quadratically convergent.
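For readers without Matlab, the same iteration translates directly to Python with NumPy; this is an illustrative sketch, not part of the original answer:

```python
import numpy as np

def newton_system(f, jac, x0, tol=1e-12, max_iter=50):
    """Multivariate Newton's method: solve jac(x)*step = f(x), then x <- x - step."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(jac(x), f(x))
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

# The system 4^x + 7^y = 53, 3^x + 8^y = 67 written as residuals f(v) = 0.
f = lambda v: np.array([4**v[0] + 7**v[1] - 53,
                        3**v[0] + 8**v[1] - 67])
# Jacobian, using d/dx a^x = a^x * ln(a).
jac = lambda v: np.array([[4**v[0]*np.log(4), 7**v[1]*np.log(7)],
                          [3**v[0]*np.log(3), 8**v[1]*np.log(8)]])

root = newton_system(f, jac, [0.5, 2.5])
print(root)  # ≈ [1. 2.]
```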

EDIT: Let's do one Newton–Raphson iteration by hand. Start with $$\vec x=\begin{bmatrix}1.1193\\1.9988\end{bmatrix}$$ Evaluate the residual $$\vec f\left(\vec x\right)=\begin{bmatrix}4^{x_1}+7^{x_2}-53\\ 3^{x_1}+8^{x_2}-67\end{bmatrix}=\begin{bmatrix}4^{1.1193}+7^{1.9988}-53\\ 3^{1.1193}+8^{1.9988}-67\end{bmatrix} =\begin{bmatrix}0.605103\\0.260622\end{bmatrix}$$ and the Jacobian $$J=\begin{bmatrix}4^{x_1}\ln4&7^{x_2}\ln7\\ 3^{x_1}\ln3&8^{x_2}\ln8\end{bmatrix}= \begin{bmatrix}4^{1.1193}\ln4&7^{1.9988}\ln7\\ 3^{1.1193}\ln3&8^{1.9988}\ln8\end{bmatrix}= \begin{bmatrix}6.542462&95.12721\\ 3.75739&132.7526\end{bmatrix}$$

Now we have to solve $J\vec\epsilon=\vec f$. We form the augmented matrix and perform Gaussian elimination: $$M=\begin{bmatrix}6.542462&95.12721&0.605103\\ 3.75739&132.7526&0.260622\end{bmatrix}$$ Divide the first row by $M_{11}=6.542462$: $$M=\begin{bmatrix}1&14.53997&0.092489\\ 3.75739&132.7526&0.260622\end{bmatrix}$$ Subtract $M_{21}=3.75739$ times the first row from the second: $$M=\begin{bmatrix}1&14.53997&0.092489\\ 0&78.12024&-0.08689\end{bmatrix}$$ Divide the second row by $M_{22}=78.12024$: $$M=\begin{bmatrix}1&14.53997&0.092489\\ 0&1&-0.001112308\end{bmatrix}$$ Subtract $M_{12}=14.53997$ times the second row from the first: $$M=\begin{bmatrix}1&0&0.108661\\ 0&1&-0.001112308\end{bmatrix}$$ So our estimate of the error in our root is the last column: $$\vec\epsilon=\begin{bmatrix}0.108661\\-0.001112308\end{bmatrix}$$ Subtract this error estimate from the current value of $\vec x$ to get the next value: $$\vec x\leftarrow\vec x-\vec\epsilon=\begin{bmatrix}1.1193\\1.9988\end{bmatrix}- \begin{bmatrix}0.108661\\-0.001112308\end{bmatrix}=\begin{bmatrix}1.010639\\1.999912\end{bmatrix}$$ Repeat as necessary until the residuals and errors are small enough.
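The single hand iteration above can be checked numerically; a small Python sketch (the printed values should agree with the hand computation to the displayed precision):

```python
import numpy as np

# Current iterate from the worked example.
x = np.array([1.1193, 1.9988])

# Residual and Jacobian of the system at this iterate.
f = np.array([4**x[0] + 7**x[1] - 53,
              3**x[0] + 8**x[1] - 67])
J = np.array([[4**x[0]*np.log(4), 7**x[1]*np.log(7)],
              [3**x[0]*np.log(3), 8**x[1]*np.log(8)]])

# Solve J*eps = f (the same linear solve done by Gaussian elimination above).
eps = np.linalg.solve(J, f)
x_next = x - eps
print(x_next)  # ≈ [1.010639 1.999912]
```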


The Maple command fsolve will utilize Newton's method for such multivariate problems.

You can supply an initial point, or let it try to find one that converges (the default).

restart;

kernelopts(version);

    Maple 2019.2, X86 64 LINUX, Oct 30 2019, Build ID 1430966

eqs := { 4^x + 7^y = 53,
         3^x + 8^y = 67 }:

fsolve( eqs );

           {x = 1.000000000, y = 2.000000000}

If you set infolevel[fsolve]:=2: then it will also display details of the computation. Set that back to zero (default) to suppress such details.

You can also supply ranges for the variables, etc.

There are additional numeric root-finding facilities. It often helps not to use the symbolic solver solve for purely numeric root-finding.
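For readers without Maple, a roughly analogous call exists in Python as SciPy's `fsolve` (an illustrative translation of the session above, not part of the original answer; SciPy's routine uses a modified Powell hybrid method rather than pure Newton):

```python
from scipy.optimize import fsolve

def residuals(v):
    """The system as residuals: fsolve searches for residuals(v) == [0, 0]."""
    x, y = v
    return [4**x + 7**y - 53, 3**x + 8**y - 67]

# Unlike Maple's fsolve, SciPy requires an explicit initial guess.
sol = fsolve(residuals, [0.5, 2.5])
print(sol)  # ≈ [1. 2.]
```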


Assuming that we are in the real domain with all positive (larger than $1$) real coefficients, for two equations such as $$a_1^x+b_1^y=c_1 \tag 1$$ $$a_2^x+b_2^y=c_2 \tag 2$$ what you can do is to eliminate $y$ from the first equation $$a_1^x+b_1^y=c_1 \implies y=\frac{\log \left(c_1-a_1^x\right)}{\log (b_1)}\tag 3$$

We also know that $x$ has an upper bound which is

$$ x_{max}=\min \left(\frac{\log (c_1)}{\log (a_1)},\frac{\log (c_2)}{\log (a_2)}\right)$$

and this is a good piece of information for later.

Plug $(3)$ into $(2)$ to get a function of $x$ only. That is to say, you look for the zero of the function $$f(x)=a_2^x+b_2^{y(x)}-c_2$$ and now you can use Newton's method for a single variable, starting the iterations with, say, $x_0=\frac 12 x_{max}$.

Let us do it for your example for which the iterates are $$\left( \begin{array}{cc} n & x_n \\ 0 & 1.43198 \\ 1 & 1.11721 \\ 2 & 1.01019 \\ 3 & 1.00008 \\ 4 & 1.00000 \end{array} \right)$$ Once you have $x$, go back to $(3)$ to get $y=2$.

Let us make it more difficult with the same coefficients except that $c_1=500$ and $c_2=700$; the iterates are $$\left( \begin{array}{cc} n & x_n \\ 0 & 2.24145 \\ 1 & 3.33178 \\ 2 & 3.00573 \\ 3 & 2.90125 \\ 4 & 2.89252 \\ 5 & 2.89247 \end{array} \right)$$ and then $y=3.13363$.
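The elimination scheme above can be sketched in Python as follows (an illustrative sketch, not part of the original answer; the derivative in the 1-D Newton step is taken numerically to keep the code short, whereas the tables above use the exact derivative):

```python
import math

def solve_by_elimination(a1, b1, c1, a2, b2, c2, tol=1e-12):
    """Eliminate y via (3): y(x) = log(c1 - a1^x)/log(b1), then 1-D Newton on f(x)."""
    y_of_x = lambda x: math.log(c1 - a1**x) / math.log(b1)
    f = lambda x: a2**x + b2**y_of_x(x) - c2
    # Start at half the upper bound on x.
    x = 0.5 * min(math.log(c1)/math.log(a1), math.log(c2)/math.log(a2))
    for _ in range(100):
        h = 1e-7
        fprime = (f(x + h) - f(x - h)) / (2*h)  # central-difference derivative
        step = f(x) / fprime                    # Newton step
        x -= step
        if abs(step) < tol:
            break
    return x, y_of_x(x)

print(solve_by_elimination(4, 7, 53, 3, 8, 67))    # ≈ (1.0, 2.0)
print(solve_by_elimination(4, 7, 500, 3, 8, 700))  # ≈ (2.89247, 3.13363)
```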