Finding roots of a $3$-dimensional sinusoidal function


I have a system of three sinusoidal equations in three unknowns whose root I'm trying to find. I have gone down a rabbit hole with the Newton–Raphson, Brent–Dekker, and bisection methods of numerical root-finding, which has only made me more confused. I have also had trouble finding examples of these methods in multiple dimensions.

Here are the equations; the $k_n$ are all known constants, and I want the root in the range $\alpha,\beta,\gamma \in [-90°,90°]$:

$$ \begin{align} 0=\;&k_1\,\sin\left(\alpha \right)\,\sin\left(\beta \right)\,\sin\left(\gamma \right) \\\\ +& k_2\,\sin\left(\alpha \right)\,\sin\left(\beta \right)\,\cos\left(\gamma \right) \\\\ +& k_3\,\sin\left(\alpha \right)\,\sin\left(\gamma \right) \\\\ +& k_4\,\sin\left(\alpha \right)\,\cos\left(\gamma \right) \\\\ +& k_5\,\cos\left(\alpha \right)\,\sin\left(\beta \right)\,\sin\left(\gamma \right) \\\\ +& k_6\,\cos\left(\alpha \right)\,\sin\left(\beta \right)\,\cos\left(\gamma \right) \\\\ +& k_7\,\cos\left(\alpha \right)\,\sin\left(\gamma \right) \\\\ +& k_1\,\cos\left(\alpha \right)\,\cos\left(\gamma \right) \\\\ \\\\ 0=\;&k_8\,\sin\left(\alpha \right)\,\sin\left(\gamma \right) \\\\ +& k_9\,\sin\left(\alpha \right)\,\cos\left(\gamma \right) \\\\ +& k_{10}\,\cos\left(\alpha \right)\,\sin\left(\beta \right)\,\sin\left(\gamma \right) \\\\ +& k_8\,\cos\left(\alpha \right)\,\sin\left(\beta \right)\,\cos\left(\gamma \right) \\\\ +& k_{11}\,\cos\left(\beta \right)\,\sin\left(\gamma \right) \\\\ +& k_7\,\cos\left(\beta \right)\,\cos\left(\gamma \right) \\\\ \\\\ 0=\;&k_9\,\sin\left(\alpha \right)\,\sin\left(\beta \right)\,\sin\left(\gamma \right) \\\\ +& k_{12}\,\sin\left(\alpha \right)\,\sin\left(\beta \right)\,\cos\left(\gamma \right) \\\\ +& k_8\,\cos\left(\alpha \right)\,\sin\left(\gamma \right) \\\\ +& k_9\,\cos\left(\alpha \right)\,\cos\left(\gamma \right) \\\\ +& k_4\,\cos\left(\beta \right)\,\sin\left(\gamma \right) \\\\ +& k_{13}\,\cos\left(\beta \right)\,\cos\left(\gamma \right) \end{align} $$

My questions are:

  • What's the best way to solve for the three unknowns?
  • Are there good resources on using these methods in multiple dimensions?
  • And is there a good C++ library I should use?

There are 2 best solutions below


I have programmed in Matlab (see program below) a system similar to yours (but simpler), using Newton–Raphson:

$$V_{n+1}=V_n-J_n^{-1}\,F(V_n) \ \text{with} \ V_n=\pmatrix{\alpha_n\\ \beta_n \\ \gamma_n} \tag{1}$$

$J_n$ being the Jacobian matrix, provided the seed $V_0$ is not too far from a solution.
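For reference, iteration (1) with a forward-difference Jacobian can be sketched in plain Python. This is a minimal sketch, not production code: the constants are those of the simplified system used in the Matlab program further below, and the 3×3 linear solve is hand-rolled to keep the example self-contained.

```python
import math

def newton_system(F, v, h=1e-6, iters=20):
    """Multivariate Newton-Raphson with a forward-difference Jacobian."""
    n = len(v)
    for _ in range(iters):
        f0 = F(v)
        # Jacobian column i is (F(v + h*e_i) - F(v)) / h
        J = [[0.0] * n for _ in range(n)]
        for i in range(n):
            vp = list(v)
            vp[i] += h
            fi = F(vp)
            for r in range(n):
                J[r][i] = (fi[r] - f0[r]) / h
        step = solve(J, f0)                 # solve J * step = F(v)
        v = [v[r] - step[r] for r in range(n)]
    return v

def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# The simplified system from the Matlab program (same constants):
k1, k2, k3, k8, k9, k12 = 1.0, 0.5, 0.5, 0.2, 0.1, 0.9
def F(v):
    a, b, c = v
    return [k1*math.cos(a)*math.sin(b) + k2*math.cos(a)*math.cos(c) + k3*math.sin(a)*math.cos(c),
            k8*math.cos(a)*math.cos(c) + k9*math.sin(a)*math.cos(b) + k1*math.cos(b)*math.sin(c),
            k9*math.sin(a)*math.sin(b)*math.cos(c) + k12*math.cos(a)*math.sin(b)*math.cos(c)
              + k8*math.cos(a)*math.sin(c)]

root = newton_system(F, [-math.pi/6, math.pi/12, math.pi/12])
print(root, F(root))   # residual should be near [0, 0, 0] if the iteration converged
```

As with the Matlab version, convergence depends entirely on the seed; a diverging seed simply returns a non-root.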

In certain cases, such as the following one, where the initial value belongs to a certain domain:

$$V_0=(\alpha_0, \beta_0, \gamma_0)=(0,0,0)+(rand,rand,rand)\tag{2}$$

(the perturbations consist of adding random numbers drawn from $[0,0.1]$),

the iteration always converges to the same triple:

$$(\alpha, \beta, \gamma) \approx (-0.8019857432145809, 0.01683195562605918, -94055225964508) \tag{3}$$

One gets the same convergence, to the same solution as in (3), when one starts from a value of the following type:

$$V_0=(\alpha_0, \beta_0, \gamma_0)=(\underbrace{-\pi/6}_{-30°},\underbrace{\pi/12}_{+15°},\underbrace{\pi/12}_{+15°})+(rand,rand,rand)$$

Here is the Matlab program.

Please note that I haven't implemented the exact Jacobian matrix with analytic partial derivatives, but a finite-difference approximation of it.

h=0.000001; % small increment for finite differences
format long;
k1=1; k2=0.5; k3=0.5;
k8=0.2; k9=0.1; k12=0.9;
fx=@(a,b,c)(k1*cos(a)*sin(b)+k2*cos(a)*cos(c)+k3*sin(a)*cos(c));
fy=@(a,b,c)(k8*cos(a)*cos(c)+k9*sin(a)*cos(b)+k1*cos(b)*sin(c));
fz=@(a,b,c)(k9*sin(a)*sin(b)*cos(c)+k12*cos(a)*sin(b)*cos(c)+k8*cos(a)*sin(c));
% initial angular values -30°, +15°, +15°, slightly perturbed:
V=[-pi/6,pi/12,pi/12]'+0.1*rand(3,1);
for k=1:20
   a=V(1); b=V(2); c=V(3);
   % forward-difference Jacobian: column i is (F(V+h*e_i)-F(V))/h
   J1=[fx(a,b,c),fx(a,b,c),fx(a,b,c)
       fy(a,b,c),fy(a,b,c),fy(a,b,c)
       fz(a,b,c),fz(a,b,c),fz(a,b,c)];
   J2=[fx(a+h,b,c),fx(a,b+h,c),fx(a,b,c+h)
       fy(a+h,b,c),fy(a,b+h,c),fy(a,b,c+h)
       fz(a+h,b,c),fz(a,b+h,c),fz(a,b,c+h)];
   J=(J2-J1)/h;
   V=V-J\[fx(a,b,c),fy(a,b,c),fz(a,b,c)]' % Newton step (no semicolon: prints V)
end;
[fx(a,b,c),fy(a,b,c),fz(a,b,c)] % should be approx. [0,0,0]

(This program could be written in other ways; I have chosen this form because it is readable as such.)

But other cases may occur. Here is one that looks incoherent compared with the previous one, using this family of seeds:

$$V=(\pi/3,\pi/4,\pi/4)+ \ \text{same perturbation as above}$$

In this case, running the program several times gives an apparently different limit each time:

$$(\alpha, \beta, \gamma) \approx (-4.712388980384690, 1.570796326794897, 4.712388980384690)$$

$$(\alpha, \beta, \gamma) \approx (54.97787143782138,-1.570796326794897,-1.570796326794897)$$

$$(\alpha, \beta, \gamma) \approx (17.27875959474386,1.570796326794897,-1.570796326794897)$$

but a common feature appears: the results are in fact multiples of $\pi/2$ (I must say: with some weird exceptions that I haven't fully understood).

(For example, the first one is $(-3\pi/2, \pi/2, 3\pi/2)$, the second one is $(35\pi/2, -\pi/2, -\pi/2)$, etc.)
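This observation can be checked numerically by rounding each converged component to the nearest multiple of $\pi/2$ and inspecting the remainder. A small stdlib Python sketch:

```python
import math

def nearest_half_pi(x):
    """Return (n, err): n*pi/2 is the multiple of pi/2 closest to x, err the distance."""
    n = round(x / (math.pi / 2))
    return n, abs(x - n * math.pi / 2)

# the three limits reported above:
for x in (-4.712388980384690, 54.97787143782138, 17.27875959474386):
    print(nearest_half_pi(x))   # n = -3, 35, 11 respectively, each with tiny err
```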


Making

$$ \cases{ s_a = \sin\alpha\\ c_a = \cos\alpha\\ s_b = \sin\beta\\ c_b = \cos\beta\\ s_g = \sin\gamma\\ c_g = \cos\gamma } $$

and introducing the additional equations

$$ \cases{ s_a^2+c_a^2-1=0\\ s_b^2+c_b^2-1=0\\ s_g^2+c_g^2-1=0 } $$

the trigonometric system is transformed into a polynomial one. We then recast root-finding as a minimization problem: calling the six polynomial residuals $f_k(s_a,c_a,s_b,c_b,s_g,c_g)$, $k = 1,\dots,6$ (the three original equations plus the three Pythagorean constraints), we now have

$$ \{s_a,c_a,s_b,c_b,s_g,c_g\}^* = \arg\min_{s_a,c_a,s_b,c_b,s_g,c_g}\sum_k f_k^2(s_a,c_a,s_b,c_b,s_g,c_g) $$

and then

$$ \alpha^* = \arctan\left(\frac{s_a^*}{c_a^*}\right)\ \ \ \text{etc.} $$

(In practice the two-argument $\operatorname{atan2}(s_a^*, c_a^*)$ is safer, since it recovers the correct quadrant even when $c_a^* \le 0$.)
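The back-substitution step can be sketched in plain Python using the standard-library `math.atan2` (the sine/cosine pair below is a made-up example standing in for the minimizer's output):

```python
import math

def angle_from_sin_cos(s, c):
    """Recover the angle from its (sine, cosine) pair, quadrant included."""
    return math.atan2(s, c)

# round-trip check for an angle in (-pi, pi]:
alpha = -0.73
s, c = math.sin(alpha), math.cos(alpha)
assert abs(s * s + c * c - 1.0) < 1e-12   # Pythagorean constraint holds
print(angle_from_sin_cos(s, c))           # recovers alpha = -0.73
```

Note that the plain $\arctan(s/c)$ form would map, e.g., an angle of $2.5$ rad into $(-\pi/2, \pi/2)$, losing the quadrant.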

A MATHEMATICA script to handle such a problem:

parms = Thread[{k1, k2, k3, k4, k5, k6, k7, k8, k9, k10, k11, k12, k13} -> RandomReal[{-2, 2}, 13]];
equs = {k1 sa sb sg + k2 sa sb cg + k3 sa sg + k4 sa cg + k5 ca sb sg + k6 ca sb cg + k7 ca sg + k1 ca cg, 
        k8 sa sg + k9 sa cg + k10 ca sb sg + k8 ca sb cg + k11 cb sg + k7 cb cg, 
        k9 sa sb sg + k12 sa sb cg + k8 ca sg + k9 ca cg + k4 cb sg + k13 cb cg, 
        ca^2 + sa^2 - 1, 
        cb^2 + sb^2 - 1, 
        cg^2 + sg^2 - 1};

equs0 = equs /. parms

NMinimize[equs0.equs0, {ca, sa, cb, sb, cg, sg}, Method -> "DifferentialEvolution"]