When using global optimization tools such as Python's SciPy (`scipy.optimize.fmin`) or Mathematica's `NMinimize`, I was told that the result returned by the tool may not be accurate, since the problem of finding a global minimum/maximum is in general intractable.
Now, I would like to know whether there is a general theory in statistics/probability that quantifies the confidence in the maxima/minima these kinds of tools produce. For example, if I use a Monte Carlo algorithm combined with a gradient-based local search such as Newton's method, with what probability can I claim that the minimum found by the algorithm is the global minimum? Isn't there a theory studying this?
[EDIT] If there is no general theory, maybe there is one that works for some category of functions? The difficulty for me is that my functions are computer programs, so linearity or convexity makes little sense in my case. Still, I would like to learn about theoretical results for some specific categories of functions.
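For concreteness, the kind of procedure I have in mind is a random multistart: draw random starting points, run a local descent from each, and keep the best result. Here is a minimal sketch in pure Python; the test function `f`, the numerical-derivative descent, and all parameters are just illustrative assumptions, not my actual program:

```python
import math
import random

def f(x):
    # illustrative multimodal objective with several local minima
    return math.sin(3 * x) + 0.1 * x * x

def local_descent(x, lr=0.01, steps=2000, h=1e-6):
    # plain gradient descent using a central-difference derivative
    for _ in range(steps):
        grad = (f(x + h) - f(x - h)) / (2 * h)
        x -= lr * grad
    return x

def multistart(n_starts=50, lo=-5.0, hi=5.0, seed=0):
    # run local descent from many random starts; keep the best result
    rng = random.Random(seed)
    best_x, best_val = None, float("inf")
    for _ in range(n_starts):
        x = local_descent(rng.uniform(lo, hi))
        if f(x) < best_val:
            best_x, best_val = x, f(x)
    return best_x, best_val

x_star, val = multistart()
```

The question is what, if anything, I can say about the probability that `val` is the true global minimum rather than merely the best of the local minima the restarts happened to find.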
Suppose you are trying to minimize an arbitrary function on a discrete set with $M$ elements by sampling $N$ distinct points uniformly at random (Monte Carlo) and declaring the smallest observed value to be the minimum. Assuming the minimizer is unique, the declared minimum is correct exactly when the true minimizer is among the sampled points, which happens with probability $\frac{N}{M}$.
Obviously this may be arbitrarily small.
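The $N/M$ rate is easy to check empirically. A small simulation (the set size $M = 100$, sample size $N = 20$, and run count here are arbitrary choices):

```python
import random

def trial(M, N, rng):
    # a random function on a discrete set of M points, with distinct
    # values so the minimizer is unique
    values = rng.sample(range(M * 10), M)
    true_min = min(values)
    sampled = rng.sample(values, N)  # N distinct uniform random trials
    return min(sampled) == true_min

def success_rate(M=100, N=20, runs=20000, seed=0):
    rng = random.Random(seed)
    return sum(trial(M, N, rng) for _ in range(runs)) / runs

rate = success_rate()  # should be close to N/M = 0.2
```

With $M = 100$ and $N = 20$ the observed success rate settles near $0.2$, as predicted.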