Do Modern Optimization Algorithms "Ignore" Duality?


In classical optimization, the idea of "duality" is of central importance: in Linear Programming, for example, passing to the dual problem can simplify a complicated constrained optimization problem by recasting it in a different (and often easier) form.
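To make the idea concrete, here is a toy example of my own (not from Linear Programming, but the Lagrangian analogue): minimize $x^2$ subject to $x \ge 1$. The primal optimum is $x^* = 1$ with value $1$; the dual function is $g(\lambda) = \min_x \big(x^2 + \lambda(1-x)\big) = \lambda - \lambda^2/4$, maximized at $\lambda^* = 2$ with the same value $1$, so strong duality holds. A minimal sketch, assuming nothing beyond this toy problem:

```python
# Toy Lagrangian duality example (hypothetical, for illustration only):
#   primal:  minimize x^2  subject to  x >= 1   -> optimum x* = 1, value 1
#   dual:    maximize g(lam) over lam >= 0, where
#            g(lam) = min_x [x^2 + lam*(1 - x)] = lam - lam^2/4

def dual(lam):
    # the inner minimizer of x^2 + lam*(1 - x) is x = lam/2
    x = lam / 2
    return x * x + lam * (1 - x)

# crude grid search over the dual variable lam in [0, 4]
best_lam = max((l / 100 for l in range(0, 401)), key=dual)

primal_value = 1.0
print(best_lam, dual(best_lam), primal_value)  # duality gap is zero here
```

Solving the (unconstrained, concave) dual recovers the constrained primal optimum, which is exactly the kind of simplification the question refers to.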

My Question: As computers have become more powerful over the years, have we begun to "ignore" the concept of duality when solving optimization problems?

For example, in the context of Machine Learning: to prevent a model from "overfitting", we sometimes add an L1-norm or L2-norm penalty term to the objective function (corresponding to the machine learning model) that we are trying to optimize. This is referred to as "regularization":

$$\min_{w}\;\sum_{i=1}^{n} L\big(y_i, f(x_i; w)\big) \;+\; \lambda \lVert w \rVert_1 \quad \text{(L1)} \qquad \text{or} \qquad \min_{w}\;\sum_{i=1}^{n} L\big(y_i, f(x_i; w)\big) \;+\; \lambda \lVert w \rVert_2^2 \quad \text{(L2)}$$

In these cases, the regularization penalty is added to the objective function, and the objective is then optimized directly (without ever forming the dual) using some variant of the gradient descent algorithm:

$$w_{t+1} \;=\; w_t \;-\; \eta\,\nabla_{w}\left(\sum_{i=1}^{n} L\big(y_i, f(x_i; w_t)\big) + \lambda \lVert w_t \rVert_2^2\right)$$
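The "direct" approach described above can be sketched in a few lines. This is a minimal illustration with made-up data: gradient descent on a one-weight least-squares loss with an L2 penalty, never forming a dual problem, and checked against the closed-form ridge solution $w^* = \sum_i x_i y_i \,/\, (\sum_i x_i^2 + \lambda)$:

```python
# Minimal sketch (hypothetical data): gradient descent directly on an
# L2-regularized least-squares objective, with no dual problem formed.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]
lam = 0.5          # regularization strength lambda
w = 0.0            # single weight, initialized at zero
lr = 0.01          # learning rate (step size eta)

for _ in range(5000):
    # gradient of  sum_i (w*x_i - y_i)^2  +  lam * w^2
    grad = sum(2 * (w * xi - yi) * xi for xi, yi in zip(xs, ys)) + 2 * lam * w
    w -= lr * grad

# closed-form ridge solution for comparison
w_star = sum(xi * yi for xi, yi in zip(xs, ys)) / (sum(xi * xi for xi in xs) + lam)
print(w, w_star)  # the two values agree
```

Nothing about this loop "knows" that the penalized objective is the Lagrangian of a norm-constrained problem; the iteration just follows the gradient, which is the point of the question.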

Thus, it would appear that nowadays we are less interested in studying the duality structure of an optimization problem, and instead optimize the objective function directly using powerful computers and gradient descent.

Can someone please comment on this?

Thanks!