Optimal Value of a Cost Function as a Function of the Constraining Variable


Consider the optimization problem :

$ \textrm{min } f(\mathbf{x}) $

$ \textrm{subject to } \sum_i b_ix_i \leq a $

Using duality and numerical methods (e.g. with the subgradient method), i.e.

$d = \max_{\lambda \ge 0} \left\{ \inf_x \left( f(\mathbf{x}) + \lambda \Big( \sum_i b_ix_i - a \Big) \right) \right\},$

we can obtain the optimal cost.
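For concreteness, here is a sketch of the projected subgradient (here, plain gradient) ascent on the dual for a toy instance I made up, with $f(\mathbf{x}) = \|\mathbf{x}\|^2$, $b = (1,1)$ and $a = -2$; none of these numbers come from the general problem above:

```python
import numpy as np

# Toy instance (purely illustrative): f(x) = ||x||^2,
# constraint x1 + x2 <= a with a = -2, so the constraint is active.
b = np.array([1.0, 1.0])
a = -2.0

def x_of_mu(mu):
    # argmin_x ||x||^2 + mu * (b @ x - a) has the closed form x = -mu * b / 2
    return -mu * b / 2.0

def dual_value(mu):
    x = x_of_mu(mu)
    return x @ x + mu * (b @ x - a)

# Projected subgradient ascent on the concave dual q(mu), mu >= 0:
# b @ x_of_mu(mu) - a is a supergradient of q at mu.
mu, step = 0.0, 0.1
for _ in range(500):
    mu = max(0.0, mu + step * (b @ x_of_mu(mu) - a))

print(mu, dual_value(mu))  # converges to mu* = 2.0, d = a**2 / 2 = 2.0
```

For this instance the optimal cost works out to $a^2/2$ (for $a \le 0$), so here $d(a)$ is indeed convex in $a$.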

Now I want to express $d$ as a function of $a$, in order to obtain the optimal cost as a function of the constraining variable. How can I tell whether $d = d(a)$ is convex or concave in $a$, given that $f$ is convex, $x$ lies in a convex set, the problem satisfies Slater's condition, and so on? Does anyone know where I can find some theory about expressing the dual function as a function of the constraining variable, and about the properties of the resulting function?

Kind regards


Best answer

It could perhaps be useful to know that you are essentially asking about properties of the value function in parametric programming, or multi-parametric programming to be completely general.

https://en.wikipedia.org/wiki/Parametric_programming

Regarding convexity, the answer is yes; see, e.g.,

A. V. Fiacco and J. Kyparisis, "Convexity and Concavity Properties of the Optimal Value Function in Parametric Nonlinear Programming".

https://apps.dtic.mil/dtic/tr/fulltext/u2/a138202.pdf
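A minimal numerical sketch of that convexity result, using a made-up parametric program ($f(\mathbf{x}) = \|\mathbf{x}\|^2$ subject to $x_1 + x_2 \le a$) whose value function is known in closed form:

```python
import numpy as np

# Hypothetical parametric program: p(a) = min ||x||^2  s.t.  x1 + x2 <= a.
# Projecting the origin onto the halfspace gives the closed form
#   p(a) = a^2 / 2  for a < 0,   p(a) = 0  for a >= 0.
def p(a):
    return 0.5 * a**2 if a < 0 else 0.0

# Midpoint convexity check over a grid of constraint levels.
grid = np.linspace(-3.0, 3.0, 61)
for a1 in grid:
    for a2 in grid:
        assert p(0.5 * (a1 + a2)) <= 0.5 * p(a1) + 0.5 * p(a2) + 1e-12
print("p passed the midpoint convexity check")
```

Note that $p$ is convex and nonincreasing in $a$, which is exactly the qualitative behavior the parametric-programming theory predicts for this kind of perturbation.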


I'm definitely not an expert, but I'll give this a shot (maybe this should be a comment rather than a full answer), so take this answer with a grain of salt.

Consider the following function: $p(a)=\inf_{x}F(x,a)$ where $F(x,a)=\begin{cases} f(x) & \text{when }\sum_{i}b_{i}x_{i}\leq a\\ \infty & \text{otherwise} \end{cases}$

The function $p$ is convex, so I think that answers one of your questions. For a proof, see Proposition 3.3.1, page 122, of Bertsekas, Convex Optimization Theory (http://www.mit.edu/~dimitrib/convexduality.html).

As far as expressing the optimal value of the original program in terms of $a$, consider the following. The negatives of the dual optimal solutions are exactly the subgradients of $p$ at $a$ (for a proof, see Bertsekas, Convex Optimization Theory, pages 187-190, Sections 5.4.1-5.4.2). Suppose for a moment that $p$ is differentiable. Then the (unique) dual optimal solution $\mu^{\star}$ equals $-\nabla p(a)$; the sign makes sense, since increasing $a$ relaxes the constraint, so the optimal cost can only decrease. Thus the dual optimal solution tells you how the optimal cost changes locally as a function of $a$. I think that is your second question.
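A quick numerical check of this sensitivity interpretation, on a hypothetical instance where both $p$ and $\mu^{\star}$ are known in closed form (my own example, not from Bertsekas):

```python
# Hypothetical instance: p(a) = min ||x||^2  s.t.  x1 + x2 <= a, with a < 0
# so the constraint binds.  Closed forms:
#   p(a) = a**2 / 2   (value function),   mu* = -a   (dual optimum, >= 0).
def p(a):
    return 0.5 * a**2

a = -2.0
mu_star = -a

# Central finite difference of the value function at a.
h = 1e-6
dp_da = (p(a + h) - p(a - h)) / (2 * h)

print(dp_da, -mu_star)  # both are -2.0: the sensitivity relation mu* = -p'(a)
```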

If you want to know how the optimal dual value (i.e. the value of the dual function at its maximizer) changes with $a$, that follows from strong duality. (It sounds like you're assuming strong duality, so I will too; you don't need much beyond convexity of $f$ to guarantee it, though I think you also need closedness and properness.) By strong duality the optimal dual value equals the optimal cost of the primal problem, so now you know how that changes with $a$ as well. If $p$ is not differentiable, I guess it's a bit more complicated, since all you know is that the negatives of the dual optimal solutions are subgradients of $p$ at $a$. I think that should still give some useful information, though I haven't thought it through.
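For what it's worth, the equality of primal and dual optimal values is easy to check numerically on a small made-up instance (a sketch using scipy's general-purpose solver for the primal; the dual optimum here is computed by hand, and the instance is my own, not from the question):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical instance: min ||x||^2  s.t.  x1 + x2 <= a, with a = -2.
a = -2.0
res = minimize(lambda x: x @ x,
               x0=np.zeros(2),
               constraints=[{"type": "ineq",
                             # scipy's "ineq" means fun(x) >= 0,
                             # i.e. x1 + x2 <= a
                             "fun": lambda x: a - (x[0] + x[1])}])
primal = float(res.fun)

# Dual optimum by hand: q(mu) = -mu**2/2 - mu*a, maximized at
# mu* = -a (>= 0 since a < 0), giving d = a**2 / 2.
dual = a**2 / 2

print(primal, dual)  # primal ~ dual = 2.0, as strong duality promises
```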