Optimality guarantees of SGD convergence in Geometric Programming


What optimality guarantees do we get when we minimize a Geometric Programming problem in its original (posynomial) formulation with Stochastic Gradient Descent, given that the problem is known to be a Geometric Programming instance, i.e. it becomes a Convex Optimization problem under the logarithmic change of variables $y_i = \log x_i$?
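For concreteness, here is a minimal sketch (with made-up coefficients `c` and exponent matrix `A`, chosen purely for illustration) of the transformation the question refers to: a posynomial $f(x)$ is generally non-convex in $x$, but $\log f(e^y)$ is convex in $y = \log x$, since it is a log-sum-exp of affine functions of $y$.

```python
import numpy as np

# Illustrative posynomial data (not from the question):
# f(x) = sum_k c_k * prod_i x_i^{A[k, i]},  with c_k > 0 and x > 0.
c = np.array([1.0, 2.0])             # positive monomial coefficients
A = np.array([[1.0, -2.0],
              [-0.5, 1.0]])          # exponent matrix, one row per monomial

def f(x):
    # Original posynomial objective in x (non-convex in general).
    return np.sum(c * np.prod(x ** A, axis=1))

def F(y):
    # Log-transformed objective log f(exp(y)):
    # a log-sum-exp of the affine functions log(c_k) + A[k] @ y,
    # hence convex in y.
    return np.log(np.sum(c * np.exp(A @ y)))

# Sanity check of the identity F(y) = log f(exp(y)):
y = np.array([0.3, -0.7])
print(F(y), np.log(f(np.exp(y))))
```

Because $x \mapsto \log x$ is a smooth bijection on the positive orthant, stationary points of $f$ over $x > 0$ correspond to stationary points of the convex function $F$, which is the usual starting point for optimality claims about descent methods applied to the original formulation.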