Optimization with changing objective function


Is there any theory about (convex) optimization where the objective function is allowed to change during the optimization process? I have a problem where the objective function depends on some environmental variables that change over time.

I came across Reinforcement Learning and Stochastic Function Approximation, but neither is quite what I am looking for.

Edit: I think I should give an example to clarify. Suppose I want to compute the smallest enclosing ball of a set of points using stochastic gradient descent. Now time passes and new points are added to the set. It might be practical to use some "incremental" optimization algorithm rather than starting from scratch.
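A minimal sketch of this warm-start idea for the smallest enclosing ball, using the classic subgradient step toward the farthest point (the function name, step size, and point sets below are illustrative assumptions, not from the original post):

```python
def seb_center(points, center=None, iters=200):
    """Approximate the center of the smallest enclosing ball.

    Minimizes f(c) = max_i ||p_i - c|| by repeatedly stepping toward
    the farthest point, which gives a subgradient direction of the max.
    Passing a previously computed center warm-starts the solve after
    new points arrive, instead of restarting from scratch.
    """
    if center is None:
        center = list(points[0])
    for t in range(1, iters + 1):
        # the farthest point from the current center determines the subgradient
        far = max(points,
                  key=lambda p: sum((a - b) ** 2 for a, b in zip(p, center)))
        step = 1.0 / (t + 1)  # diminishing step size
        center = [c + step * (f - c) for c, f in zip(center, far)]
    return center

# initial point set and solve
pts = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0), (2.0, 2.0)]
c = seb_center(pts)

# a new point arrives; warm-start from the previous center
pts.append((4.0, 2.0))
c = seb_center(pts, center=c)
```

Because the previous center is already close to the new optimum when only a few points are added, the warm-started run typically needs far fewer iterations than a cold start.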

1 Answer
You should treat the optimization algorithm as a black box that returns the optimal solution for a fixed objective.

If the solving time of the optimization algorithm is slower than the rate at which the objective function changes (for example, in a real-time application), then you should use time-bounded optimization algorithms. This situation typically arises with non-convex problems that are NP-hard in general, such as integer programs.

For example, your optimization algorithm may need an hour to find the optimum, but you need some (approximate) answer in less than a minute; in that case, you can set a time limit and take the best approximate solution found within it.
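Such a time-bounded ("anytime") scheme can be sketched as follows; the function name, learning rate, and 50 ms budget are illustrative assumptions, not a specific solver's API:

```python
import time

def minimize_with_deadline(f, grad, x0, lr=0.1, time_limit=0.05):
    """Plain gradient descent that stops at a wall-clock deadline
    and returns the best iterate seen so far (an 'anytime' answer)."""
    x = x0
    best, best_val = x0, f(x0)
    deadline = time.monotonic() + time_limit
    while time.monotonic() < deadline:
        x = x - lr * grad(x)
        val = f(x)
        if val < best_val:
            best, best_val = x, val
    return best, best_val

# minimize (x - 3)^2 under a 50 ms wall-clock budget
x_best, v_best = minimize_with_deadline(
    lambda x: (x - 3.0) ** 2,
    lambda x: 2.0 * (x - 3.0),
    x0=0.0,
)
```

The key property is that the loop can be interrupted at any moment and still return a usable (if suboptimal) answer, which is exactly what a real-time application needs.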

Some commercial solvers have this capability built in; Gurobi, for instance, exposes a TimeLimit parameter and returns the best solution found when the limit is reached.

I hope this helps!