Convex lower bound as opposed to upper bound?


Fundamental Question: Which option gives a useful surrogate for a non-convex function: the tightest convex lower bound, or the tightest convex upper bound? (By useful, I mean with theoretical guarantees.)

I know that minimizing a lower bound gives a lower bound on the minimum of the original objective. However, I am not sure what can be argued about minimizing an upper bound.
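To make the two guarantees concrete, here is a minimal numerical sketch with a hypothetical 1-D example (the functions `f`, `l`, `u` are my own toy choices, not from the question): a non-convex objective sandwiched pointwise between a convex lower bound and a convex upper bound. Minimizing the lower bound under-estimates the true minimum, while minimizing the upper bound produces a feasible point whose true objective value is certified to be at most the surrogate minimum.

```python
import math

# Toy non-convex objective: a quadratic plus an oscillation.
def f(x):
    return x**2 + math.sin(5 * x)

# Convex lower bound, valid since sin(.) >= -1 pointwise.
def l(x):
    return x**2 - 1

# Convex upper bound, valid since sin(.) <= 1 pointwise.
def u(x):
    return x**2 + 1

grid = [i * 0.01 - 3 for i in range(601)]  # x in [-3, 3]

# The bounds hold pointwise on the grid...
assert all(l(x) <= f(x) <= u(x) for x in grid)

# ...so the minima are ordered: min l <= min f <= min u.
lmin = min(l(x) for x in grid)
fmin = min(f(x) for x in grid)
umin = min(u(x) for x in grid)
assert lmin <= fmin <= umin

# Minimizing the upper bound gives a concrete point x_u whose true
# objective value is certified: f(x_u) <= u(x_u) = min u.
x_u = min(grid, key=u)
assert f(x_u) <= umin
```

So a lower bound certifies what cannot be beaten (a bound on the optimal value), while an upper bound certifies what is actually achieved at the surrogate's minimizer.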

The reason for the question is a set of observations that confuse me:

  • Observation 1: The zero-one loss is non-convex, so everyone minimizes a tight convex "upper bound" instead, such as the hinge loss, as a surrogate. Arguments suggest this is a good choice.
  • Observation 2: Rank is not convex, so everyone minimizes the tightest convex "lower bound" (the biconjugate over the unit spectral-norm ball, i.e. the trace norm, which is the sum of the singular values). Arguments suggest this is a good choice.
  • Observation 3: There are cases where the sum of a loss and a regularizer is optimized, where the former is a convex upper bound on the original loss and the latter is a convex lower bound on the original regularizer.
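The first two observations can be sanity-checked numerically. This is a hedged sketch with hypothetical numbers of my own choosing: the hinge loss upper-bounds the zero-one loss pointwise in the margin, and for a matrix whose spectral norm is at most 1 the trace (nuclear) norm lower-bounds the rank. For simplicity the matrix is taken diagonal, so its singular values are just the absolute diagonal entries.

```python
# Observation 1: hinge loss max(0, 1 - m) upper-bounds the
# zero-one loss 1[m <= 0] pointwise in the margin m.
def zero_one(m):
    return 1.0 if m <= 0 else 0.0

def hinge(m):
    return max(0.0, 1.0 - m)

margins = [i * 0.1 - 2 for i in range(41)]  # m in [-2, 2]
assert all(hinge(m) >= zero_one(m) for m in margins)

# Observation 2: for singular values all <= 1 (spectral norm <= 1),
# the trace norm (sum of singular values) lower-bounds the rank
# (number of nonzero singular values).  Example values are made up.
svals = [0.9, 0.4, 0.0]
rank = sum(1 for s in svals if s > 0)   # rank = 2
trace_norm = sum(svals)                 # trace norm = 1.3
assert trace_norm <= rank
```

Note that the direction of the bound differs: the hinge loss sits above the zero-one loss, while the trace norm sits below the rank (on the unit spectral-norm ball), which is exactly the tension the question is about.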