Are there more reliable formulas for finding global minima?


I was playing around with Desmos today and came up with this formula for finding global minima:

[image of the formula omitted]

It works by descending until it finds a negative value. The same method could work with multiple variables using nested products. Would that be a computationally feasible method to use in machine learning?


Best Answer

As @littleO pointed out, it is really not clear what your method/algorithm is. Trying to interpret your image, my guess is that you are doing the following:

  1. Start with $n=0$ and $g_0(x) = f(x)$
  2. Increase $n$ by 1 (ie $n \leftarrow n+1$)
  3. Let $g_n(x) = g_{n-1}(x)(f(x)-n)$
  4. Check if $g_n(x) < 0$ has solutions
  5. If so, the solution set is a union of neighbourhoods of global minimizers, and the global minimum $m$ satisfies $n-1 \leq m < n$.
  6. Else, go to step 2.
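The interpreted procedure above can be sketched on a sampled grid (the function `guessed_algorithm`, the test function, and the grid are all my own illustrative choices, not from the post):

```python
import numpy as np

def guessed_algorithm(f, xs, max_n=100):
    """Sketch of the guessed procedure: starting from g_0 = f, multiply in
    factors (f(x) - n) until the running product g_n goes negative somewhere
    on the sample grid xs. Returns the bracket [n-1, n) for the minimum,
    or None if max_n is reached first."""
    fx = f(xs)            # evaluate f on the sample grid
    g = fx.copy()         # step 1: g_0(x) = f(x)
    for n in range(1, max_n + 1):
        g = g * (fx - n)  # step 3: g_n(x) = g_{n-1}(x) * (f(x) - n)
        if np.any(g < 0): # step 4: does g_n(x) < 0 have solutions?
            return n - 1, n  # step 5: minimum m satisfies n-1 <= m < n
    return None

# Example: f(x) = (x-2)^2 + 3.5 has global minimum 3.5, so we expect (3, 4).
xs = np.linspace(-10, 10, 10001)
print(guessed_algorithm(lambda x: (x - 2)**2 + 3.5, xs))  # -> (3, 4)
```

Note that this already smuggles in a way of "checking if $g_n(x) < 0$ has solutions" (brute-force sampling), which is the real difficulty, as discussed below.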

If this is correct, then there are some flaws. First, assuming $g_{n-1}(x)=\prod_{i=1}^{n-1}(f(x)-i) < 0$ has no solutions, checking if $g_{n}(x)=\prod_{i=1}^{n}(f(x)-i) < 0$ has solutions is the same as checking if $f(x)-n < 0$ has solutions (which is in turn equivalent to checking if $f(x) < n$ has solutions). So if I have interpreted your algorithm correctly, it ultimately boils down to linearly checking "is 0 the minimum?", "is 1 the minimum?", "is 2 the minimum?", etc. But the minimum value of the function need not be a natural number, so you would have to modify it to handle real numbers (including negative reals).
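The reduction can be verified numerically: once $g_{n-1} \geq 0$ everywhere, the sign of $g_n$ agrees with the sign of the last factor $f(x)-n$. A quick check, using a test function of my own choosing:

```python
import numpy as np

f = lambda x: (x - 2)**2 + 3.5           # illustrative function, minimum 3.5
xs = np.linspace(-10, 10, 10001)
fx = f(xs)

n = 4                                     # first n for which f(x) < n is solvable
prev = np.prod([fx - i for i in range(1, n)], axis=0)  # g_{n-1} = prod_{i<n}(f - i)
assert not np.any(prev < 0)               # g_{n-1} < 0 had no solutions

# With g_{n-1} >= 0 everywhere, g_n < 0 holds exactly where f(x) - n < 0:
gn = prev * (fx - n)
print(np.array_equal(gn < 0, fx - n < 0))  # -> True
```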

The bigger issue is you need a method for actually finding solutions to $f(x) < n$. Your post gives no indication of how you would do that, essentially ignoring all of the difficulty of optimization.
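To see why this is hard, consider that the obvious approach, sampling $f$ on a grid, can miss a narrow global minimum entirely (the spike function here is my own contrived example):

```python
import numpy as np

# A narrow spike: global minimum near -1 at x ~ 0.0012345, width ~ 1e-6.
f = lambda x: 1 - 2 * np.exp(-((x - 0.0012345) / 1e-6)**2)

xs = np.linspace(-10, 10, 10001)   # grid spacing 2e-3, far wider than the spike
print(np.any(f(xs) < 0))           # -> False: sampling never sees f(x) < 0
```

So deciding whether $f(x) < n$ has solutions is itself a global problem, just as hard as the optimization you started with.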