Every time I have to work with chemical kinetics, I end up having to deal with a system of two equations in which the unknowns appear in the exponents, as follows:
Let's say:
$$\cases{4^x+7^y=53\\ 3^x+8^y=67}$$
Typically when I have a system of linear equations such as
$$\cases{ax+by=c\\ a_1x+b_1y=c_1}$$
I could use Gauss-Jordan elimination or any other matrix-based method.
However, when the unknowns appear in the exponents, the system cannot be solved with the standard methods covered in algebra.
So what numerical method would be recommended to solve this with a good degree of accuracy?
The kind of method I'm looking for is one that doesn't require many iterations and is easy to apply each time such a situation (two unknowns, as in this example) arises.
I would appreciate an answer that uses my example and is as detailed as possible.
Can Maple be used for this with Newton-Raphson?
Since I use Maple, my hope is that an answer could indicate a short sequence of commands or an algorithm that can be entered in Maple to solve this kind of system, other than just using the command solve.
The vector version of Newton's method goes like this: suppose you have $n$ equations in $n$ unknowns, where the unknowns are $x_1, x_2, \dots, x_n$ and the equations are $$f_i(x_1, x_2, \dots, x_n)=0$$ for $i=1, 2, \dots, n$. Then the linear approximation to each equation is $$\begin{align}f_i(r_1, r_2, \dots, r_n)&=f_i((r_1-x_1)+x_1,(r_2-x_2)+x_2,\dots,(r_n-x_n)+x_n)\\ &\approx f_i(x_1, x_2, \dots, x_n)+\sum_{j=1}^n(r_j-x_j)\frac{\partial}{\partial x_j}f_i(x_1, x_2, \dots, x_n)=0\end{align}$$ where $x_1=r_1,x_2=r_2,\dots,x_n=r_n$ is the true solution to the system of equations. If we construct the vector $\vec b$ where $b_i=f_i(x_1, x_2, \dots, x_n)$ and the matrix $M$ where $M_{ij}=\frac{\partial}{\partial x_j}f_i(x_1, x_2, \dots, x_n)$, then we can solve $$M(\vec x-\vec r)=M\vec y=\vec b$$ for $\vec y$. Then our next approximation to the solution is $x_j\leftarrow x_j-y_j$. Here is a Matlab example for your posted problem:
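The same scheme can also be sketched in a few lines of Python, in case the Matlab listing isn't handy. The starting guess, tolerance, and the use of Cramer's rule for the $2\times2$ linear solve are my own choices, not part of the original:

```python
import math

def newton_system(x, y, tol=1e-12, max_iter=50):
    """Solve 4^x + 7^y = 53, 3^x + 8^y = 67 by Newton's method."""
    for _ in range(max_iter):
        # Residual vector b_i = f_i(x, y)
        f1 = 4**x + 7**y - 53
        f2 = 3**x + 8**y - 67
        # Jacobian entries M_ij = d f_i / d x_j
        j11 = 4**x * math.log(4)
        j12 = 7**y * math.log(7)
        j21 = 3**x * math.log(3)
        j22 = 8**y * math.log(8)
        # Solve the 2x2 system M * (y1, y2) = (f1, f2) by Cramer's rule
        det = j11 * j22 - j12 * j21
        y1 = (f1 * j22 - f2 * j12) / det
        y2 = (j11 * f2 - j21 * f1) / det
        # Newton update: subtract the error estimate
        x, y = x - y1, y - y2
        if abs(y1) < tol and abs(y2) < tol:
            break
    return x, y

x, y = newton_system(2.0, 2.0)
print(x, y)  # converges to x = 1, y = 2 (check: 4 + 49 = 53, 3 + 64 = 67)
```

For a larger system you would replace the Cramer's rule step with a general linear solve of $M\vec y=\vec b$, but the structure of the loop is the same.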
It takes a while to get going, but once it gets close, Newton's method is generally quadratically convergent.
EDIT: Let's do one Newton-Raphson iteration by hand. Start with $$\vec x=\begin{bmatrix}1.1193\\1.9988\end{bmatrix}$$ Evaluate the residual $$\vec f\left(\vec x\right)=\begin{bmatrix}4^{x_1}+7^{x_2}-53\\ 3^{x_1}+8^{x_2}-67\end{bmatrix}=\begin{bmatrix}4^{1.1193}+7^{1.9988}-53\\ 3^{1.1193}+8^{1.9988}-67\end{bmatrix} =\begin{bmatrix}0.605103\\0.260622\end{bmatrix}$$ And the Jacobian $$J=\begin{bmatrix}4^{x_1}\ln4&7^{x_2}\ln7\\ 3^{x_1}\ln3&8^{x_2}\ln8\end{bmatrix}= \begin{bmatrix}4^{1.1193}\ln4&7^{1.9988}\ln7\\ 3^{1.1193}\ln3&8^{1.9988}\ln8\end{bmatrix}= \begin{bmatrix}6.542462&95.12721\\ 3.75739&132.7526\end{bmatrix}$$ Now we have to solve $J\vec\epsilon=\vec f$; we form the augmented matrix and perform Gaussian elimination. $$M=\begin{bmatrix}6.542462&95.12721&0.605103\\ 3.75739&132.7526&0.260622\end{bmatrix}$$ Divide the first row by $M_{11}=6.542462$ $$M=\begin{bmatrix}1&14.53997&0.092489\\ 3.75739&132.7526&0.260622\end{bmatrix}$$ Subtract $M_{21}=3.75739$ times the first row from the second $$M=\begin{bmatrix}1&14.53997&0.092489\\ 0&78.12024&-0.08689\end{bmatrix}$$ Divide the second row by $M_{22}=78.12024$ $$M=\begin{bmatrix}1&14.53997&0.092489\\ 0&1&-0.001112308\end{bmatrix}$$ Subtract $M_{12}=14.53997$ times the second row from the first $$M=\begin{bmatrix}1&0&0.108661\\ 0&1&-0.001112308\end{bmatrix}$$ So our estimate of the error in our root is the last column $$\vec\epsilon=\begin{bmatrix}0.108661\\-0.001112308\end{bmatrix}$$ Subtract our error estimate from the current value of $\vec x$ to get our next value of $\vec x$ $$\vec x\leftarrow\vec x-\vec\epsilon=\begin{bmatrix}1.1193\\1.9988\end{bmatrix}- \begin{bmatrix}0.108661\\-0.001112308\end{bmatrix}=\begin{bmatrix}1.010639\\1.999912\end{bmatrix}$$ And repeat as necessary until the residuals and errors are small enough.
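The single step above is easy to check numerically. Here is a short Python sketch of that one iteration (Cramer's rule replaces the row reduction, but for a $2\times2$ system the result is the same):

```python
import math

# Starting guess from the worked iteration above
x1, x2 = 1.1193, 1.9988

# Residual vector f(x)
f1 = 4**x1 + 7**x2 - 53
f2 = 3**x1 + 8**x2 - 67

# Jacobian entries J_ij
j11 = 4**x1 * math.log(4)
j12 = 7**x2 * math.log(7)
j21 = 3**x1 * math.log(3)
j22 = 8**x2 * math.log(8)

# Solve J * eps = f for the error estimate (2x2 system, Cramer's rule)
det = j11 * j22 - j12 * j21
e1 = (f1 * j22 - f2 * j12) / det
e2 = (j11 * f2 - j21 * f1) / det

# Subtract the error estimate to get the next iterate,
# which should land near (1.0106, 1.9999) as computed by hand
x1, x2 = x1 - e1, x2 - e2
print(x1, x2)
```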