Let $a<x<y<b$, and suppose that the continuous function $f$ is unimodal (has exactly one weak local maximum) on $[a, b]$. If $f(x) \geq f(y)$, then for any $z \in (y,b]$ we must have $f(y) > f(z)$: otherwise $f$ would have a local minimum in $(x, z)$, and hence distinct local maxima on either side of $y$. Therefore $f$ attains its maximum on the interval $[a, y]$. Likewise, if $f(x) < f(y)$, then $f$ attains its maximum on $[x, b]$.
This lends itself to a derivative-free, linearly convergent optimization algorithm. Suppose we wish to maximize $f$ over the interval $[a, b]$. If we pick any $x<y$ in $(a, b)$, then after evaluating $f(x)$ and $f(y)$ we can use the above to confine our interest to either $[a, y]$ or $[x, b]$. Continuing recursively, we get a nested, shrinking sequence of intervals, each containing a maximizer of $f$. If we always choose $x=2a/3 + b/3$ and $y=a/3+2b/3$, then each interval is $2/3$ the length of its predecessor. Hence the midpoints of the intervals converge linearly to the maximizer of $f$ with rate $2/3$.
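Here is a minimal sketch of the procedure I have in mind (the function and parameter names are my own):

```python
def ternary_search_max(f, a, b, tol=1e-9):
    """Maximize a unimodal f on [a, b] by discarding a third
    of the bracket per iteration (bracket shrinks with rate 2/3)."""
    while b - a > tol:
        x = (2 * a + b) / 3
        y = (a + 2 * b) / 3
        if f(x) >= f(y):
            b = y   # maximum lies in [a, y]
        else:
            a = x   # maximum lies in [x, b]
    return (a + b) / 2

# Example: f(t) = -(t - 2)^2 is unimodal on [0, 5] with maximizer t = 2.
print(ternary_search_max(lambda t: -(t - 2) ** 2, 0, 5))  # ≈ 2.0
```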
This algorithm reminds me of the bisection method for root finding. Does it have a name?