Let's say I know the position $e \in \mathbb{R}^N$ of a local extremum of a non-convex function $f: \mathbb{R}^N \to \mathbb{R}$. Is there an efficient method for finding the $K$ closest "neighbouring" extrema, or all extrema within a given radius of $e$? One option would be exhaustive, brute-force multistart optimization using gradient descent or Gauss-Newton, but is there something more sophisticated?
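To be concrete, here is a minimal sketch of the brute-force baseline I have in mind (using minima for concreteness, `scipy.optimize.minimize` as the local descent, and illustrative tolerances and sample counts):

```python
import numpy as np
from scipy.optimize import minimize

def nearby_minima(f, e, radius, n_starts=200, tol=1e-3, seed=0):
    """Sample starting points in a ball of the given radius around the
    known extremum e, run a local descent from each, and deduplicate
    the minima found (two limits within tol count as the same point)."""
    rng = np.random.default_rng(seed)
    N = len(e)
    found = []
    for _ in range(n_starts):
        # random direction on the sphere, random distance within the radius
        d = rng.normal(size=N)
        d /= np.linalg.norm(d)
        x0 = e + rng.uniform(0.0, radius) * d
        res = minimize(f, x0)  # local descent (BFGS by default)
        if res.success and all(np.linalg.norm(res.x - m) > tol for m in found):
            found.append(res.x)
    return found
```

This already shows the wastefulness I would like to avoid: most of the `n_starts` descents rediscover minima that are already known.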
I feel like a weaker version of this requirement would be to find the shape of the boundary of the basin of convergence (for iterative gradient-descent optimization) around the extremum $e$, and then to step in different directions across (and outward from) the boundary, attempting to land in the basins of the neighbouring extrema.
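What I mean by "stepping across the boundary" is something like the following sketch: march outward from $e$ along a direction until a descent started there no longer converges back to $e$ (the step size and tolerance are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import minimize

def cross_basin(f, e, direction, step=0.1, max_steps=100, tol=1e-3):
    """March outward from e along a direction; once a local descent
    started there converges to a point other than e, return that point
    as a candidate neighbouring extremum."""
    d = direction / np.linalg.norm(direction)
    for k in range(1, max_steps + 1):
        x0 = e + k * step * d
        res = minimize(f, x0)  # local descent
        if np.linalg.norm(res.x - e) > tol:
            # caveat: a start exactly on the boundary may stall at a
            # saddle or maximum rather than a neighbouring minimum
            return res.x
    return None  # boundary not crossed within max_steps
```

The obvious weakness is that this probes one direction at a time, so one would still need many direction samples to cover the boundary in $\mathbb{R}^N$.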
If this succeeded, the result would be something like a Voronoi partition of the domain of the function $f(\cdot)$, where the sites $e_i$ of the tessellation are the local extrema of $f(\cdot)$, and the basin of convergence of each $e_i$ plays the role of its Voronoi cell.
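One crude way to materialize this tessellation (feasible only in low dimension) is to label each point of a sampling grid by the extremum that a descent from it converges to; the matching tolerance below is an illustrative assumption:

```python
import numpy as np
from scipy.optimize import minimize

def basin_labels(f, grid_points, tol=1e-3):
    """Run a local descent from every grid point and assign each point
    an integer label identifying the extremum it converges to."""
    extrema, labels = [], []
    for x0 in grid_points:
        res = minimize(f, x0)
        # match the limit against extrema seen so far
        for i, e in enumerate(extrema):
            if np.linalg.norm(res.x - e) < tol:
                labels.append(i)
                break
        else:
            labels.append(len(extrema))
            extrema.append(res.x)
    return np.array(labels), extrema
```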
Connecting the neighbouring Voronoi sites would give us a graph $\mathcal{G}$, which could be used for finding the nearest-neighbouring extrema of a given local extremum. Does there exist a method for computing such a graph? Or at least a method that would uncover a local subgraph of $\mathcal{G}$, say the 1-ring neighbourhood of a given extremum $e_i$?
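Given basin labels on a regular grid, the graph $\mathcal{G}$ could be read off from basin adjacency: two extrema are connected whenever some pair of adjacent grid cells carries their two labels. A sketch (again, only a low-dimensional illustration, not the efficient method I am asking for):

```python
import numpy as np

def basin_graph(label_grid):
    """label_grid: integer array, one basin label per grid cell.
    Returns the set of edges (i, j), i < j, between basins that
    touch along some grid-axis-aligned pair of neighbouring cells."""
    edges = set()
    L = np.asarray(label_grid)
    for axis in range(L.ndim):
        a = np.moveaxis(L, axis, 0)
        # compare each cell with its successor along this axis
        for u, v in zip(a[:-1].ravel(), a[1:].ravel()):
            if u != v:
                edges.add((min(u, v), max(u, v)))
    return edges
```

This is of course exponential in $N$ because of the grid, which is exactly why I am hoping for a method that uncovers the 1-ring locally instead.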
My motivation is optimization: many starting points converge to the same local extremum under gradient descent or Gauss-Newton, so why not devise a transformation that "converts" the function $f(\cdot)$ into a local graph of connected extrema? We could then explicitly check which of the extrema is best suited to our task.