Constraints in optimization; redundant hardness?


This is not a precise mathematical question; it is more of a philosophical and ambitious one.

As far as I know, unconstrained problems are easier than constrained problems; right? This is mostly because adding constraints imposes a complicated structure on the feasible set.

Given the current state of the technology, don't you think it should be the other way around? When you add constraints, you usually remove a considerable portion of the feasible space (while also adding some structure to it).

This makes me seriously doubt the current trend in optimization models. The models are designed to be simple for unconstrained problems, while they should have been designed to be simple for structured problems.


There are 2 best solutions below


I don't know exactly what you mean by "structured problems" (the term "structured" has a specific meaning in many fields), but insofar as it relates to the constraints, they are not problematic these days, particularly if your constraints are convex.

If your objective function is not convex, then it is very likely that your relaxed (unconstrained) problem is NP-hard anyway.
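To make the point concrete, here is a small sketch (my own toy polynomial, not from the answer) of how nonconvexity already hurts with no constraints in sight: plain gradient descent on a one-dimensional nonconvex function lands in different local minima depending on the starting point, so even the fully unconstrained problem is hard to solve globally.

```python
def f(x):
    # Toy nonconvex objective (illustrative choice, not from the answer).
    return x**4 - 3*x**2 + x

def grad_descent(x, lr=0.01, iters=2000):
    """Plain gradient descent on f; no constraints anywhere."""
    for _ in range(iters):
        x -= lr * (4*x**3 - 6*x + 1)  # f'(x)
    return x

# Two starting points, two different local minima: the right-hand start
# gets stuck in a strictly worse basin than the left-hand one.
x_right = grad_descent(1.0)
x_left = grad_descent(-1.0)
print(x_right, x_left, f(x_right) > f(x_left))
```

The same trap appears in any dimension; convexity, not the absence of constraints, is what makes the problem tractable.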

I refer you to interior-point methods to see how easily you can get rid of the constraints.
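A minimal sketch of that idea, under my own toy setup: the log-barrier trick at the heart of interior-point methods folds an inequality constraint into the objective and drives the barrier weight to zero. The example problem (minimize x² subject to x ≥ 1, whose optimum is x* = 1) and all function names are illustrative assumptions, not anything from the answer.

```python
def barrier_newton(mu, x, iters=100):
    """Newton's method on the 1-D barrier function
    phi(x) = x**2 - mu*log(x - 1), which replaces the constraint x >= 1."""
    for _ in range(iters):
        g = 2*x - mu/(x - 1)      # phi'(x)
        h = 2 + mu/(x - 1)**2     # phi''(x) > 0, so the Newton step is downhill
        step = g/h
        while x - step <= 1.0:    # backtrack so the iterate stays strictly feasible
            step *= 0.5
        x -= step
    return x

# Outer loop: shrink the barrier weight mu; the warm-started iterates
# approach the constrained optimum x* = 1 of min x**2 s.t. x >= 1.
x = 2.0                           # strictly feasible starting point
for mu in [1.0, 0.1, 0.01, 1e-4]:
    x = barrier_newton(mu, x)
print(x)
```

Each unconstrained barrier subproblem is smooth and convex, which is exactly why convex constraints are "not problematic these days."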


I don't think you can say that "unconstrained problems are easier than constrained problems" is always true. There is a price to pay for constraining the search space: whenever the space is constrained, it is possible that the constrained space, as a subspace of the unconstrained one, is on average a much harder space to search.