I am stuck with the integral $$ I(\epsilon,\lambda)=\frac1\epsilon\int\limits_0^{\frac\pi2} \frac{1+\left(\lambda\epsilon^4-1\right)\cos^2\phi}{\sqrt{1+\left(\epsilon^4-1\right)\cos^2\phi\,}} \,\text{d}\phi \,. $$ (Note: in a first version of this question, I unfortunately forgot the square root in the denominator.)
Here, $0<\epsilon\leq1$ and $\lambda\geq 1$ are (real) parameters. Wolfram Alpha gives me the solution (if I didn't mistype something) $$ \frac{\epsilon^5}{\epsilon^4-1}\left[ \left(1-\lambda\right)K\!\left(1-\frac{1}{\epsilon^4}\right) + \left(\lambda\epsilon^4 -1\right)E\!\left(1-\frac{1}{\epsilon^4}\right)\right]$$ with the complete elliptic integrals of the first and second kind. However, I have little experience with those, and I can't seem to get gnuplot to plot the result. (Possibly related to the argument conventions for elliptic integrals, parameter $m$ vs. modulus $k$, with $m=k^2$?)
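The convention question is easy to settle numerically. As a sketch (using Python/scipy rather than gnuplot): `scipy.special.ellipk` takes the *parameter* $m=k^2$, same as Mathematica/Wolfram Alpha, so if a plotting tool expects the modulus $k$ instead, every argument needs a square root first. A quick check against the defining integral:

```python
# Check which argument convention scipy's elliptic integrals use:
# ellipk(m) takes the *parameter* m = k^2 (as in Mathematica), not the modulus k.
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipk

m = 0.5
# Defining integral: K(m) = int_0^{pi/2} dphi / sqrt(1 - m sin^2 phi)
K_quad, _ = quad(lambda t: 1.0 / np.sqrt(1.0 - m * np.sin(t) ** 2), 0.0, np.pi / 2)
print(K_quad, ellipk(m))   # these agree -> scipy uses the parameter m
print(ellipk(np.sqrt(m)))  # what you would get by passing the modulus k instead
```

If gnuplot's `EllipticK` indeed takes the modulus $k$ (worth checking in its documentation), the Wolfram Alpha result must be fed `sqrt(...)` of the arguments shown above.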
Anyway, I don't particularly need specific values (which are presumably not expressible as elementary functions), but rather, I'm interested in the extrema of $I(\epsilon,\lambda)$:
- Specifically, I would like to find the extremum of $I(\epsilon,\lambda)$ as a function of $\epsilon$ with $\lambda$ treated as a parameter (i.e. the $\epsilon$ for which $\partial/\partial \epsilon \,I(\epsilon,\lambda)=0$).
- I have a hunch that for $\lambda=1$, the extremum should be at $\epsilon=1$ and it should be a minimum (by the symmetry of the underlying problem). However, even that eludes me right now.
How can I make progress here?
What I have found is that $$I(\epsilon,\lambda)=\frac{\left(\lambda \epsilon ^4-1\right) E\left(1-\epsilon ^4\right)-(\lambda -1) \epsilon ^4 K\left(1-\epsilon ^4\right)}{\epsilon \left(\epsilon ^4-1\right)}$$
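For $0<\epsilon<1$ the argument $1-\epsilon^4$ lies in $(0,1)$, so this form is easy to evaluate with scipy (whose `ellipk`/`ellipe` take the parameter $m$). A sanity check of the closed form against direct quadrature, at arbitrarily chosen sample points:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipe, ellipk

def I_quad(eps, lam):
    """Direct quadrature of the defining integral."""
    f = lambda p: (1 + (lam * eps**4 - 1) * np.cos(p) ** 2) / np.sqrt(
        1 + (eps**4 - 1) * np.cos(p) ** 2
    )
    return quad(f, 0.0, np.pi / 2)[0] / eps

def I_closed(eps, lam):
    """Closed form in terms of E(m), K(m) with parameter m = 1 - eps^4."""
    m = 1 - eps**4
    return ((lam * eps**4 - 1) * ellipe(m) - (lam - 1) * eps**4 * ellipk(m)) / (
        eps * (eps**4 - 1)
    )

print(I_quad(0.7, 2.0), I_closed(0.7, 2.0))  # should agree to quadrature accuracy
```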
The partial derivative is
$$\epsilon ^2 \left(\epsilon ^4-1\right)^2\frac{\partial{I(\epsilon,\lambda)} }{\partial\epsilon}=$$ $$\epsilon ^4 K\left(1-\epsilon ^4\right) \left(3 \lambda +(\lambda -3) \epsilon ^4-1\right)+$$ $$\left(\epsilon ^4 \left(\lambda \left(\epsilon ^4-5\right)+5\right)-1\right) E\left(1-\epsilon ^4\right)$$
So, to find the minimum of $I(\epsilon,\lambda)$, we need to solve for $\epsilon$ the equation $$F(\epsilon)=\epsilon ^4 K\left(1-\epsilon ^4\right) \left(3 \lambda +(\lambda -3) \epsilon ^4-1\right)+\left(\epsilon ^4 \left(\lambda \left(\epsilon ^4-5\right)+5\right)-1\right) E\left(1-\epsilon ^4\right)=0\,,$$ which admits no explicit solution.
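Numerically, though, the root is easy to bracket and find. For $\lambda=2$, say, $F$ is negative for small $\epsilon$ (the expansion below starts at $-1$) and positive just below $\epsilon=1$ (where $I$ is increasing), so a sketch with scipy's `brentq` works:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import ellipe, ellipk

def F(eps, lam):
    """F(eps) from above; its zero gives the minimizing eps."""
    e4 = eps**4
    m = 1 - e4
    return e4 * ellipk(m) * (3 * lam + (lam - 3) * e4 - 1) + (
        e4 * (lam * (e4 - 5) + 5) - 1
    ) * ellipe(m)

lam = 2.0
# F < 0 for small eps and F > 0 just below eps = 1, so [0.2, 0.99] brackets the root
root = brentq(lambda e: F(e, lam), 0.2, 0.99)
print(root)  # ~0.713646
```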
However, for small values of $\epsilon$, an expansion gives $$F(\epsilon)=-1+\epsilon ^4 \left(\lambda (6\log (2)-5)+3(1-2 \lambda ) \log (\epsilon )+\frac{21}{4}-3\log (2)\right)+\frac{3}{64} \epsilon ^8 (32 \lambda -8 (4 \lambda +5) \log (2)+8 (4 \lambda +5) \log (\epsilon )-17)+O\left(\epsilon ^{12}\right)$$ Neglecting the $\epsilon^8$ term and letting $\epsilon^4=t$, this reduces to an equation which can be solved in terms of the Lambert $W$ function, giving the underestimate
$$\epsilon_* =\sqrt[4]{-\frac{4}{3 (2 \lambda -1) W_{-1}\left(-\frac{1}{12 (2\lambda -1)}e^{\frac{20 \lambda-21 }{3(2 \lambda-1) }}\right)}}$$ which becomes increasingly accurate as $\lambda$ increases.
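This estimate is a one-liner in scipy; `lambertw(..., k=-1)` selects the $W_{-1}$ branch (it returns a complex value whose imaginary part vanishes here):

```python
import numpy as np
from scipy.special import lambertw

def eps_star(lam):
    """Small-eps underestimate of the minimizing eps, via the W_{-1} branch."""
    a = 3.0 * (2.0 * lam - 1.0)  # a = 3(2*lam - 1), so 12(2*lam - 1) = 4a
    w = lambertw(-np.exp((20.0 * lam - 21.0) / a) / (4.0 * a), k=-1).real
    return (-4.0 / (a * w)) ** 0.25

print(eps_star(1.0), eps_star(2.0))  # ~0.747537, ~0.662338
```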
Using $\epsilon_*$ as the starting point of Newton's method for solving $F(\epsilon)=0$, by Darboux's theorem we should expect one overshoot of the solution as soon as $\lambda >4$.
Now, it is probably time for some calculations: $$\left( \begin{array}{cccc} \lambda & \epsilon_* & I_{\text{min}} & \epsilon_{\text{min}} \\ 1.0 & 0.747537 & 1.57080& 1.000000 \\ 1.5 & 0.707936 & 1.90443& 0.818125 \\ 2.0 & 0.662338 & 2.15824& 0.713646 \\ 2.5 & 0.618508 & 2.36458& 0.645006 \\ 3.0 & 0.580530 & 2.53956& 0.595829 \\ 3.5 & 0.548739 & 2.69225& 0.558447 \\ 4.0 & 0.522167 & 2.82824& 0.528801 \\ 4.5 & 0.499737 & 2.95121& 0.504535 \\ 5.0 & 0.480557 & 3.06372& 0.484183 \\ 5.5 & 0.463945 & 3.16765& 0.466783 \\ 6.0 & 0.449389 & 3.26436& 0.451672 \\ 6.5 & 0.436501 & 3.35494& 0.438381 \\ 7.0 & 0.424986 & 3.44022& 0.426562 \\ 7.5 & 0.414615 & 3.52087& 0.415958 \\ 8.0 & 0.405208 & 3.59745& 0.406368 \\ 8.5 & 0.396623 & 3.67040& 0.397636 \\ 9.0 & 0.388744 & 3.74010& 0.389637 \\ 9.5 & 0.381477 & 3.80688& 0.382272 \\ 10.0 & 0.374746 & 3.87101& 0.375458 \end{array} \right)$$
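The $(\epsilon_{\text{min}}, I_{\text{min}})$ columns can be cross-checked without any root-finding by minimizing the closed form of $I$ directly, e.g. with scipy's bounded scalar minimizer ($\lambda=2$ shown):

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import ellipe, ellipk

def I_closed(eps, lam):
    """Closed form of I(eps, lam) with parameter m = 1 - eps^4."""
    m = 1 - eps**4
    return ((lam * eps**4 - 1) * ellipe(m) - (lam - 1) * eps**4 * ellipk(m)) / (
        eps * (eps**4 - 1)
    )

lam = 2.0
# minimize over the open interval (avoiding the removable singularity at eps = 1)
res = minimize_scalar(lambda e: I_closed(e, lam), bounds=(0.05, 0.999), method="bounded")
print(res.x, res.fun)  # ~0.713646 and ~2.15824, matching the lambda = 2.0 row
```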
You could easily make an empirical curve fit of the exact values of $\epsilon$ to generate better starting estimates for the iterative methods. $$\epsilon_0=1-\frac{0.457127 (\lambda-1)}{1+0.628628 (\lambda-1)}$$ could be a good starting point (at least for $1 \leq \lambda \leq 10$, for which $R^2=0.999948$).
Even with the poor estimate $\epsilon_0$ when $\lambda$ is small, Newton's, Halley's or Householder's methods pose no problem. For example, for $\lambda=2$, the iterates are $$\left( \begin{array}{cccc} n & \text{Newton} & \text{Halley} & \text{Householder} \\ 0 & 0.662338 & 0.662338 & 0.662338 \\ 1 & 0.708735 & 0.712290 & 0.713415 \\ 2 & 0.713573 & 0.713645 & 0.713646 \\ 3 & 0.713645 & 0.713646 & \\ 4 & 0.713646 & & \end{array} \right)$$
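The Newton column is easy to replicate; a minimal sketch using a central-difference derivative of $F$ instead of the (lengthy) analytic one, started from the Lambert-$W$ estimate for $\lambda=2$:

```python
import numpy as np
from scipy.special import ellipe, ellipk

def F(eps, lam):
    """The function whose zero gives the minimizing eps."""
    e4 = eps**4
    m = 1 - e4
    return e4 * ellipk(m) * (3 * lam + (lam - 3) * e4 - 1) + (
        e4 * (lam * (e4 - 5) + 5) - 1
    ) * ellipe(m)

def newton(lam, eps0, tol=1e-10, h=1e-7):
    """Newton iteration on F with a central-difference derivative."""
    eps = eps0
    for _ in range(50):
        df = (F(eps + h, lam) - F(eps - h, lam)) / (2 * h)
        step = F(eps, lam) / df
        eps -= step
        if abs(step) < tol:
            break
    return eps

print(newton(2.0, 0.662338))  # ~0.713646, reached in a handful of iterations
```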
It seems that something as simple as $$\epsilon_* \sim \frac{0.415681 \lambda+2.10534}{1.54228 \lambda+1}\qquad \qquad (R^2=0.999957)$$ already allows very fast convergence of Newton's method.