I was playing around with Desmos today and came up with this formula for finding global minima:
It works by descending until it finds a negative value. The same method could work with multiple variables using nested products. Would that be a computationally feasible method to use in machine learning?

As @littleO pointed out, it is really not clear what your method/algorithm is. Trying to interpret your image, my guess is that you are doing the following: for $n = 1, 2, 3, \dots$, check whether the product $\prod_{i=1}^{n}(f(x)-i)$ takes a negative value anywhere, and stop at the first $n$ for which it does.
If this is correct, then there are some flaws. First, suppose $g_{n-1}(x)=\prod_{i=1}^{n-1}(f(x)-i) < 0$ has no solutions. Then checking whether $g_{n}(x)=\prod_{i=1}^{n}(f(x)-i) < 0$ has a solution is the same as checking whether $f(x)-n < 0$ has a solution, which is in turn equivalent to checking whether $f(x) < n$ has a solution. So, if I have interpreted your algorithm correctly, it ultimately boils down to linearly asking "is the minimum below 1?", "is the minimum below 2?", "is the minimum below 3?", and so on. But the minimum value of a function need not be a natural number, so you would have to modify the method to handle real values (including negative reals).
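To make the reduction concrete, here is a minimal sketch of the interpretation above, under the assumption that we only test the product on a finite sample grid (the function name, the grid, and the example $f$ are my own choices, not anything from your post):

```python
import numpy as np

def interpreted_min_search(f, xs, n_max=100):
    """Linear-scan reading of the proposed method: for n = 1, 2, ...,
    check whether g_n(x) = prod_{i=1}^n (f(x) - i) is negative anywhere
    on the sample grid xs. As argued above, this is just asking
    "is f(x) < n somewhere?" for successive natural numbers n."""
    fx = f(xs)
    for n in range(1, n_max + 1):
        g = np.prod([fx - i for i in range(1, n + 1)], axis=0)
        if np.any(g < 0):
            return n  # first n with f(x) < n somewhere on the grid
    return None

# Example: f(x) = (x - 3)^2 + 2.5 has minimum value 2.5 at x = 3,
# so the scan stops at n = 3, telling us only that min f < 3.
xs = np.linspace(-10, 10, 10001)
print(interpreted_min_search(lambda x: (x - 3) ** 2 + 2.5, xs))
```

Note that the scan only brackets the minimum between consecutive integers; it never produces the actual minimum value or its location.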
The bigger issue is that you still need a method for actually finding solutions to $f(x) < n$. Your post gives no indication of how you would do that, so it essentially ignores all of the difficulty of optimization.
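For contrast, here is a minimal sketch of what optimization methods used in machine learning actually do: follow the negative gradient downhill. Deciding "does $f(x) < n$ have a solution?" with any guarantee would itself require knowing $\min f$, which is the original problem. (The function names, step size, and example $f$ here are illustrative assumptions, not anything from your post.)

```python
def gradient_descent(f, df, x0, lr=0.1, steps=500):
    """Plain gradient descent: repeatedly step against the derivative.
    This directly produces an approximate minimizer and minimum value,
    rather than an oracle answer to "is f(x) < n solvable?"."""
    x = x0
    for _ in range(steps):
        x -= lr * df(x)  # move downhill along -f'(x)
    return x, f(x)

# Same example: f(x) = (x - 3)^2 + 2.5 with derivative f'(x) = 2(x - 3).
x_star, f_star = gradient_descent(lambda x: (x - 3) ** 2 + 2.5,
                                  lambda x: 2 * (x - 3),
                                  x0=0.0)
```

Here the iterate converges to $x = 3$ with $f(x) = 2.5$, recovering the non-integer minimum that the linear scan over natural numbers could only bracket.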