From a textbook:
All local minimizers of a function f are candidates for being global minimizers, but obviously, an arbitrary function need not attain a global minimum on an open set D. For instance, consider $f(x) = -x^2$ subject to $x ∈ D = (−1, 1)$.
I don't really get the message here. If we allow x to range over all real numbers, I understand that f attains no global minimum; we can easily see that by plotting $f(x) = −x^2$.
Does $x ∈ D = (−1, 1)$ mean that x may, among other values, be −1 and 1? Then we would have two minima, right? Or does it mean that x may be, for example, 0.999999 and −0.999999, but the boundary points −1 and 1 are excluded? Then again it would be clear, because we can always find numbers arbitrarily close to −1 and 1.
The text then goes on to say that there are only general results for global minimization in the case where f is a convex function on D, and that we define convexity of the function f by the inequality $f(tx + (1-t) y) ≤ t f(x) + (1-t) f(y)$ for all $x, y ∈ D$ and all $t ∈ [0,1]$. All points $tx + (1-t)y$ should lie in D; hence D must be a convex set.
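To make the inequality concrete, here is a small sketch that spot-checks it numerically on a grid of sample points (this only tests sampled pairs, so it can refute convexity but not prove it; the function and grid are my own illustrative choices, not from the textbook):

```python
# Numerically spot-check the convexity inequality
#   f(t*x + (1-t)*y) <= t*f(x) + (1-t)*f(y)
# for all sampled x, y in the domain and t in [0, 1].

def is_convex_on_grid(f, points, ts):
    """Return False as soon as one sampled triple violates the
    convexity inequality; True if all sampled triples satisfy it."""
    for x in points:
        for y in points:
            for t in ts:
                lhs = f(t * x + (1 - t) * y)
                rhs = t * f(x) + (1 - t) * f(y)
                if lhs > rhs + 1e-12:  # tolerance for float rounding
                    return False
    return True

points = [i / 10 for i in range(-10, 11)]  # grid on [-1, 1]
ts = [i / 10 for i in range(11)]           # t in [0, 1]

print(is_convex_on_grid(lambda x: x**2, points, ts))   # True: x^2 is convex
print(is_convex_on_grid(lambda x: -x**2, points, ts))  # False: -x^2 is concave
```

For $-x^2$ the violation is easy to see by hand: with $x = -1$, $y = 1$, $t = 1/2$, the left side is $f(0) = 0$ while the right side is $-1$.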
I cannot really follow this argument. Can someone explain the connection between the example and why D must be a convex set?
The point of the example $-x^2$ is precisely that the endpoints are excluded. You are correct in reasoning that we can take 0.9, 0.99, 0.999, etc., and get as close to 1 as we like; the function values approach $-1$, but that infimum is never attained at any point of $(-1, 1)$, so there is no global minimizer.
The point of the example with $D$ is that for convex functions on convex domains we can ensure that local optimality is equivalent to global optimality. If the set $D$ is not convex, we cannot ensure this anymore: consider $f(x)=\begin{cases}x^2 & x\in [-1,1]\\ (x-5)^2-1 & x\in [4,6]\end{cases}$, which has local minima at $x=0$ and $x=5$, but $x=0$ is not a global minimum.
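The piecewise example above can be sketched directly (a minimal illustration; the function definition follows the answer, and the domain $[-1,1]\cup[4,6]$ is the non-convex set in question):

```python
# The piecewise function from the answer, defined on the
# non-convex domain [-1, 1] ∪ [4, 6].

def f(x):
    if -1 <= x <= 1:
        return x**2
    if 4 <= x <= 6:
        return (x - 5)**2 - 1
    raise ValueError("x is outside the domain")

# Each piece is a convex parabola with an obvious minimizer:
print(f(0))  # 0: local minimum of the left piece
print(f(5))  # -1: local minimum of the right piece, the global minimum
```

Because the domain has a gap, a descent method started near $x=0$ settles at the local minimum with value $0$ and never sees the strictly better point $x=5$; on a convex domain with a convex function this cannot happen.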
It's also awkward to talk about convexity of a function on a set that is not convex, because the defining inequality evaluates the function at convex combinations of points, and those combinations must lie in the domain.