optimizing risk limit in stochastic optimization problem


Assume we want to solve a stochastic optimization problem with a chance constraint: $$ \underset{x}{\text{min}} \quad f(x) \\ \text{subject to}: P(x \leq b) \geq 1-\epsilon $$

where $b$ is a normally distributed random variable, $b \sim N(\overline{b},\sigma^2)$, with mean $\overline{b}$ and standard deviation $\sigma$. The parameter $\epsilon$ is called the risk limit and is a predefined number (e.g. 0.05). For a given $\epsilon$, an analytical reformulation or the scenario approach can be employed to replace the chance constraint with a deterministic constraint: $$ \underset{x}{\text{min}} \quad f(x) \\ \text{subject to}: x \leq \overline{b} - \sigma\Phi^{-1}(1-\epsilon) $$
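For a fixed $\epsilon$, the deterministic bound is straightforward to evaluate: $\Phi^{-1}$ is the standard normal quantile function, which SciPy exposes as `scipy.stats.norm.ppf`. A minimal sketch, with $\overline{b}$, $\sigma$, and $\epsilon$ set to assumed illustrative values:

```python
from scipy.stats import norm

# Assumed illustrative parameters for b ~ N(b_bar, sigma^2)
b_bar = 10.0   # mean of b
sigma = 2.0    # standard deviation of b
eps = 0.05     # risk limit

# Phi^{-1}(1 - eps) is the (1 - eps)-quantile of the standard normal,
# evaluated with norm.ppf (the inverse of the CDF norm.cdf)
bound = b_bar - sigma * norm.ppf(1 - eps)
print(bound)  # deterministic upper bound on x
```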

However, how to choose this risk limit is important: the smaller $\epsilon$ is, the lower the chance of violation, but the higher the cost of operation (the constraint becomes more conservative).

With this trade-off in mind, I want to co-optimize the risk and the cost as follows:

$$ \underset{x,\epsilon}{\text{min}} \quad f(x)+c\epsilon \\ \text{subject to}: x \leq \overline{b} - \sigma\Phi^{-1}(1-\epsilon) $$

I wonder how I can implement this, since I don't know how to implement $\Phi^{-1}(1-\epsilon)$; there seems to be no direct function for $\Phi^{-1}$. I also wonder whether this problem can be solved with convex optimization methods.
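To make the question concrete, here is a sketch of the joint problem solved numerically with a general-purpose nonlinear solver, treating $(x,\epsilon)$ as decision variables. The data ($\overline{b}$, $\sigma$, $c$) and the cost $f(x)=-x$ are assumed illustrative choices, not part of the original problem; the point is only that $\Phi^{-1}(1-\epsilon)$ can be passed to the solver via `norm.ppf`:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

# Assumed illustrative data
b_bar, sigma, c = 10.0, 2.0, 50.0

def objective(z):
    x, eps = z
    # Hypothetical cost f(x) = -x (cheaper as x grows, so the
    # chance constraint binds), plus the risk penalty c * eps
    return -x + c * eps

def constraint(z):
    # x <= b_bar - sigma * Phi^{-1}(1 - eps), written as g(z) >= 0
    x, eps = z
    return b_bar - sigma * norm.ppf(1 - eps) - x

res = minimize(objective, x0=[5.0, 0.05],
               constraints=[{"type": "ineq", "fun": constraint}],
               bounds=[(None, None), (1e-6, 0.5)])
print(res.x)  # jointly optimized (x, eps)
```

Note the bound $\epsilon \leq 0.5$ keeps $\Phi^{-1}(1-\epsilon) \geq 0$; whether the problem is convex in $(x,\epsilon)$ on that region is exactly what I'd like to know.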