unstable optimizer, stable objective


I am trying to minimize a convex objective numerically using gradient descent. I select the starting point randomly and repeat the experiment multiple times. The optimal objective value I get each time is nearly the same, but the minimizer is very different. Is this natural? How should it be handled in experiments?


There is 1 best solution below

On BEST ANSWER

Yes, this is entirely possible. Convexity only guarantees that every local minimum is a global minimum; it does not guarantee that the minimizer is unique (that would require strict convexity). Many different points can attain the same optimal value, and the set of all minimizers of a convex function is itself a convex set. Intuitively, picture a function with a "flat" bottom, somewhat like a bathtub: two points can be far apart yet both be minimal.
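A small sketch of the phenomenon (the specific problem is my own illustrative choice, not from the question): least squares with a rank-deficient matrix is convex but has a whole affine set of minimizers, so plain gradient descent from random starts reaches the same objective value at different points.

```python
import numpy as np

# Illustrative example: minimize f(x) = ||A x - b||^2 where A is rank-deficient.
# Any x with x[0] + x[1] = 1 is optimal, so the minimizer is not unique.
rng = np.random.default_rng(0)
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

def grad(x):
    return 2.0 * A.T @ (A @ x - b)

minimizers, objectives = [], []
for _ in range(5):
    x = rng.normal(size=2)        # random starting point
    for _ in range(2000):         # plain gradient descent, fixed step size
        x -= 0.1 * grad(x)
    minimizers.append(x.copy())
    objectives.append(float(np.sum((A @ x - b) ** 2)))

print(objectives)   # all runs: essentially the same optimal value (here 0)
print(minimizers)   # different points along the line x[0] + x[1] = 1
```

The gradient only moves the iterate within the row space of `A`, so the component of the random start along the null-space direction `(1, -1)` is never changed; that is exactly why each run lands on a different minimizer.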