General Results for Global Minimizing Only If $f$ is a Convex Function on $D$?


From a textbook:

> All local minimizers of a function $f$ are candidates for global minimizing, but obviously, an arbitrary function may not realise a global minimum in an open set $D$. For instance, look at $f(x) = −x^2$ subject to $x ∈ D = (−1, 1)$.

I don't really get the message of it. If we allow $x$ to range over all real numbers, I understand that no global minimum is attained; we can easily see that by plotting $f(x) = −x^2$.

Does $x ∈ D = (−1, 1)$ mean that $x$ is allowed to be, among other values, $-1$ and $1$? Then we would have two minima, right? Or does it mean that values such as $0.999999$ and $-0.999999$ are allowed, but the boundary points $1$ and $-1$ are excluded? Then again it would be clear, because we can always find numbers arbitrarily close to $1$ and $-1$.

The text then goes on to say that general results for global minimization exist only in the case where $f$ is a convex function on $D$, and that convexity of $f$ is defined by the inequality $f(tx + (1-t)y) ≤ t f(x) + (1-t) f(y)$ for all $x, y ∈ D$ and all $t ∈ [0,1]$. All points $tx + (1-t)y$ must lie in $D$; hence $D$ must be a convex set.

I cannot really follow this argumentation. Can someone explain the connection between the example and why D must be a convex set?


Best answer:

The point of the example $-x^2$ is precisely that the endpoints are excluded. You are correct in reasoning that we can take $0.9$, $0.99$, $0.999$, and so on, getting as close as we like to the point $1$; the values $f(x)$ then get arbitrarily close to $-1$, but no point in $D$ actually attains that value.
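A small numerical sketch of this (my own illustration, not from the answer): evaluating $f(x) = -x^2$ along the sequence $0.9, 0.99, 0.999, \dots$ inside $D = (-1, 1)$ shows the values decreasing toward $-1$ without ever reaching it.

```python
def f(x):
    return -x**2

# Points 0.9, 0.99, 0.999, ... all lie strictly inside D = (-1, 1).
points = [1 - 10**(-k) for k in range(1, 6)]
values = [f(x) for x in points]

# The values decrease toward -1, but -1 itself is never attained,
# because x = 1 is not in the open interval D.
for x, y in zip(points, values):
    print(f"f({x}) = {y}")
```

Each value is strictly greater than $-1$, yet the gap shrinks as fast as we like: the infimum is $-1$, but it is not a minimum on $D$.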

The point of the convexity requirement on $D$ is that for convex functions on convex domains, every local minimizer is a global minimizer. If the set $D$ is not convex, we cannot ensure this anymore: consider $f(x)=\begin{cases}x^2 & x\in [-1,1]\\ (x-5)^2-1 & x\in [4,6]\end{cases}$, which has local minima at $x=0$ and $x=5$, but $x=0$ is not a global minimum.
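The piecewise example above can be checked directly (a sketch of the answer's example; the function and domain are taken from the answer itself):

```python
def f(x):
    # The answer's example on the non-convex domain D = [-1, 1] ∪ [4, 6].
    if -1 <= x <= 1:
        return x**2
    if 4 <= x <= 6:
        return (x - 5)**2 - 1
    raise ValueError("x is outside the domain D")

print(f(0))  # local minimum value on [-1, 1]: 0
print(f(5))  # global minimum value on D: -1
```

Near $x=0$, every nearby point of $D$ gives a larger value, so $0$ is a genuine local minimizer; yet $f(5) = -1 < 0 = f(0)$, so it is not global.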

It is also awkward to talk about the convexity of a function on a set that is not convex, because the defining inequality evaluates $f$ at convex combinations $tx + (1-t)y$, and on a non-convex set such points need not lie in the domain at all.

Second answer:

The point is that for any $x_\alpha \in D$ you can always find a point nearer to $\pm 1$ at which $f$ takes a smaller value than at every $x$ with $-|x_\alpha| < x < |x_\alpha|$ ("further inside the interval"), because the boundary points do not lie within the interval. For example, you can always move closer to a boundary point by halving the distance you already are from it, and this process never reaches the boundary point, so the minimum is never realised. Note that even if the boundary points of $D$ were included in $D$, the point giving the global minimum here would be non-unique: both $x = 1$ and $x = -1$ give $f(x) = -1$.

The definition of convexity says that if you draw a line segment (chord) between two function values of a convex function, at $x \in D$ and $y \in D$, then the function value at any point between them (located by $t$) is guaranteed to lie "below" the chord, i.e. it is at most the linearly interpolated value $t f(x) + (1-t) f(y)$.
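As a quick numerical sanity check of this chord picture (my own sketch, using the convex function $f(x) = x^2$ and two arbitrarily chosen sample points), one can compare the function value at each mixed point against the linearly interpolated value:

```python
def f(x):
    return x**2  # a convex function

x, y = -0.5, 2.0  # two arbitrary sample points in a convex set D
for i in range(11):
    t = i / 10
    lhs = f(t * x + (1 - t) * y)       # function value at the mixed point
    rhs = t * f(x) + (1 - t) * f(y)    # chord (linearly interpolated) value
    assert lhs <= rhs + 1e-12          # the graph lies on or below the chord
print("convexity inequality holds at all sampled t")
```

Sampling $t$ does not prove convexity, of course; it only illustrates the inequality $f(tx + (1-t)y) \le t f(x) + (1-t) f(y)$ that the text uses as the definition.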