There are many interesting methods for searching for the global minimum of a complicated function of many variables that are based on physical or biological analogies, for example particle swarm optimization and evolutionary algorithms, both of which are stochastic and simulate the behaviour of large populations (in this case, populations of candidate solutions).
I've got an idea for another stochastic optimization method, based in physical reality. Let's say we need to minimize a function:
$$y=f(x_1,x_2,\dots,x_n)$$
Let's say the function is continuous and all of its partial derivatives are at least piecewise continuous.
Then let's consider the $(n+1)$-dimensional Cartesian space, with coordinates $$x_1,\dots,x_n,y$$
The idea is to simply turn on "gravity" along the $y$ coordinate and put a ball at a random position inside the region of $x_1,\dots,x_n$ we are searching on (making sure that it's high enough above the surface defined by $y=f(x_1,x_2,\dots,x_n)$, which we can do if the region is finite and the function is nice enough).
So we have three main parameters to set:
the "gravitation potential" $g$
the coefficient of restitution $c$
the initial height of the ball $y_0$ (or some interval of heights, if we want).
Then we just let the ball drop and bounce around on the surface according to the usual laws of classical mechanics, until it loses enough energy and settles down or the time limit is reached.
Then we write down the coordinates where it settled and launch another ball. Physical intuition suggests that, after enough balls, we can locate the global minimum of the function inside the region, e.g. by averaging the balls' final positions. (Note: as Brian Borchers pointed out in the comments, the better course of action is simply to keep the best result found so far.)
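To make the dynamics concrete, here is a minimal Python sketch of a single ball, written under simplifying assumptions rather than as a definitive implementation: a fixed-step integrator, a finite-difference gradient to get the surface normal at impact, and reflection of the full velocity about that normal scaled by $c$ (physically the restitution should only damp the normal component). The name `bouncing_ball_minimize` and all parameter defaults are made up for illustration.

```python
import numpy as np

def bouncing_ball_minimize(f, bounds, g=9.81, c=0.7, y0=10.0,
                           dt=1e-3, max_steps=200_000, tol=1e-3, seed=None):
    """Drop a ball above the surface y = f(x) and let it bounce until it settles.

    f      : callable R^n -> R
    bounds : array of shape (n, 2), search box for x
    g      : gravitational acceleration along -y
    c      : coefficient of restitution (0 < c < 1)
    y0     : initial height of the ball above the surface
    """
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, float)
    n = len(bounds)

    def grad(x, h=1e-6):
        # finite-difference gradient of f, used only to build the surface normal
        e = np.eye(n) * h
        return np.array([(f(x + e[i]) - f(x - e[i])) / (2 * h) for i in range(n)])

    x = rng.uniform(bounds[:, 0], bounds[:, 1])   # random start inside the box
    y = f(x) + y0                                 # start above the surface
    v = np.zeros(n + 1)                           # velocity in (x_1, ..., x_n, y)

    best_x, best_f = x.copy(), f(x)
    for _ in range(max_steps):
        v[-1] -= g * dt                           # gravity acts along -y
        x = x + v[:-1] * dt
        y = y + v[-1] * dt
        x = np.clip(x, bounds[:, 0], bounds[:, 1])  # keep the ball in the box

        fx = f(x)
        if y <= fx:                               # the ball hit the surface
            # unit normal of the surface y = f(x) is proportional to (-grad f, 1)
            nrm = np.append(-grad(x), 1.0)
            nrm /= np.linalg.norm(nrm)
            v = c * (v - 2.0 * np.dot(v, nrm) * nrm)  # reflect and damp
            y = fx                                    # place the ball on the surface
            if fx < best_f:
                best_x, best_f = x.copy(), fx
            if np.linalg.norm(v) < tol:               # the ball has settled
                break
    return best_x, best_f
```

Calling this repeatedly with different random seeds and keeping the best `(best_x, best_f)` pair would correspond to the multi-ball scheme described above.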
My questions are:
Is this method doable? Are there any problems with the steps I suggested?
Is this method already used in some optimization schemes (or a close enough method)? If so, I would like some references, since I haven't been able to find anything myself.
The method you describe reminds me of gradient descent (GD).
That kind of method is very good for convex problems. But consider cost functions such as De Jong 5 or Rastrigin, which have many local minima: a single agent can barely find the global minimum. This is why genetic algorithms (GA) are popular for non-convex applications.
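As a quick illustration of that point (my own toy example, not taken from the references below), plain gradient descent on the 2-D Rastrigin function, started away from the origin, settles into the nearest local minimum rather than the global minimum at the origin:

```python
import numpy as np

def rastrigin(x):
    # Rastrigin: f(x) = 10 n + sum(x_i^2 - 10 cos(2 pi x_i)), global minimum 0 at x = 0
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def rastrigin_grad(x):
    return 2 * x + 20 * np.pi * np.sin(2 * np.pi * x)

def gradient_descent(x0, lr=1e-3, steps=20_000):
    x = np.array(x0, float)
    for _ in range(steps):
        x -= lr * rastrigin_grad(x)
    return x

x = gradient_descent([3.3, -2.7])   # start away from the origin
print(x, rastrigin(x))              # ends up in a nearby local minimum,
                                    # not at the global minimum (f = 0 at the origin)
```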
If you are interested, you can have a look at hybrid methods that combine GA and GD [1], [2], [3], [4] (I personally have not read these papers).
Keep in mind that there is a trade-off between speed of convergence and the chance of getting stuck in a local minimum. Many methods have been proposed; none of them is better than GA in every respect for general-purpose use, and each has its own pros and cons.