Golden Section Search termination condition


From textbooks I found that the tolerance for the Golden Section Search method should be set to $\sqrt{\epsilon}$, where $\epsilon$ is the machine epsilon. This can be derived from a Taylor series. In other words, setting the tolerance below $\sqrt{\epsilon}$ is useless, since the computer cannot distinguish function values at points that close together.
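For reference, the Taylor argument behind that rule can be sketched as follows, for a generic smooth $f$ with its minimum at $x^*$ (so $f'(x^*) = 0$):

$$f(x^* + h) = f(x^*) + \tfrac{1}{2} f''(x^*)\, h^2 + O(h^3),$$

so the computed value of $f(x^* + h)$ rounds to the same double as $f(x^*)$ whenever $\tfrac{1}{2} f''(x^*)\, h^2 \lesssim \epsilon\, |f(x^*)|$, i.e. for all $|h| \lesssim \sqrt{2\epsilon\, |f(x^*)| / f''(x^*)}$, which is of order $\sqrt{\epsilon}$ when the factor under the root is of order one.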

In my program I used the double format ($\epsilon \approx 2.2 \times 10^{-16}$), meaning a tolerance of $\approx 10^{-8}$. However, when I tried tolerance levels of $10^{-11}$ and $10^{-10}$, I got two abscissas $x_1$ and $x_2$ for the minimum of the same function, one for each tolerance. But $|x_1 - x_2|$ turned out to be about $10^{-12}$, which is far below $\sqrt{\epsilon}$. Could you please explain why that is? According to the textbooks, the difference should be on the order of $\sqrt{\epsilon} \approx 10^{-8}$. Thanks in advance. Here is my full program.

```cpp
#include <iostream>
#include <iomanip>
#include <cmath>

using namespace std;

double function(double x) {
    const double bohrRadius = 5.29e-11;
    // note: the original pow(1/bohrRadius, 2/3) used integer division,
    // so the exponent evaluated to 0; fixed to a floating-point exponent
    // (C is a constant factor and does not move the minimum)
    const double C = 1 / (81 * sqrt(3 * M_PI)) * pow(1 / bohrRadius, 2.0 / 3.0);
    return C * (27 - 18 * x + 2 * x * x) * exp(-x / 3);
}

// Bracket a minimum: step downhill from x0 with growing steps until the
// function starts increasing, then report the enclosing interval.
void bracket(double (*func)(double x), double x0, double step,
             double &xleft, double &xright) {
    const double factor = 1.5;
    double f0 = func(x0);
    double x1 = x0 + step;
    double f1 = func(x1);
    if (f1 > f0) {              // wrong direction: reverse the step
        step = -step;
        x1 = x0 + step;
        f1 = func(x1);
        if (f1 > f0) {          // uphill both ways: x0 is already bracketed
            xleft = x1;
            xright = x0 - step;
            return;
        }
    }
    for (;;) {
        step = factor * step;
        double x2 = x1 + step;
        double f2 = func(x2);
        if (f2 > f1) {          // function turned upward: bracket found
            xleft = x2;
            xright = x0;
            return;
        }
        x0 = x1;  x1 = x2;
        f0 = f1;  f1 = f2;
    }
}

// Golden section search for the minimum of func on [a, b].
double findExtremum(double (*func)(double x), double a, double b, double tol) {
    const double R = 0.618033989;   // 1/phi
    double x1 = b - R * (b - a);
    double x2 = a + R * (b - a);
    double f1 = func(x1);
    double f2 = func(x2);
    do {
        if (f1 > f2) {          // minimum lies in [x1, b]
            a = x1;
            x1 = x2;  f1 = f2;
            x2 = a + R * (b - a);
            f2 = func(x2);
        } else {                // minimum lies in [a, x2]
            b = x2;
            x2 = x1;  f2 = f1;
            x1 = b - R * (b - a);
            f1 = func(x1);
        }
    } while (fabs(x1 - x2) > tol);
    return (x1 + x2) / 2;
}

int main() {
    double xleft = 0, xright = 0;
    double step = 1;
    double initialGuess = 2;
    bracket(function, initialGuess, step, xleft, xright);
    if (xleft > xright) swap(xleft, xright);   // ensure xleft < xright
    double Xmin1 = findExtremum(function, xleft, xright, 1e-10);
    double Xmin2 = findExtremum(function, xleft, xright, 1e-11);
    cout << Xmin1 - Xmin2 << endl;
}
```


Best answer

If you are trying to find the minimum, the point isn't that the $x$ values will be too close, but that the $y$ values will be. Let $f(x) = 1 + (x - \pi)^2$. If we try to minimize this, the minimum is at $(\pi, 1)$, but for $x$ values between $\pi - \sqrt{\epsilon}$ and $\pi + \sqrt{\epsilon}$ the function value is indistinguishable from $1$. In your problem, try evaluating $f(x_1)$ and $f(x_2)$ and see how close they are. This also gives a clue why we say "of order $\sqrt{\epsilon}$": the scale of the function matters. If we had $f(x) = 1 + 10^{-10}(x - \pi)^2$, the uncertainty in $x$ would be much larger. If we had $f(x) = (x - \pi)^2$, we could find the minimum to great accuracy.