How is an objective function with a "value at risk" term linearized?


In a value-at-risk setting, I encountered this scenario-based problem:

$\min \{c^Tx+\min\{t: \Pr(\sum_s b^s y^s < t)\ge 1-\alpha\}\}$

subject to

$Ax \le d $

$B^s x+D^s y^s \le h^s $

$x,y^s \ge0$

where $\Pr$ denotes probability and $s$ indexes scenarios.

In value at risk we usually have $\min\{t: \Pr(\sum_s b^s y^s < t)\ge 1-\alpha\}$.

I can't linearize this problem. How can it be solved and linearized?

My attempt is:

Suppose $\min\{t: \Pr(\sum_s b^s y^s < t)\ge 1-\alpha\} = T$.

then we have

$\min \{c^Tx+T\}$

subject to

$T \ge \Pr(\sum_s b^s y^s < t) \ge 1-\alpha$

$Ax \le d $

$B^s x+D^s y^s \le h^s $

$x,y^s \ge0$

Then we must linearize it, so

$\min \{c^Tx+T\}$

subject to

$\Pr(\sum_s b^s y^s < t) \ge 1-\alpha$

$\Pr(\sum_s b^s y^s < t) \le T$

$Ax \le d $

$B^s x+D^s y^s \le h^s $

$x,y^s \ge0$

Then, according to chance-constraint linearization,

$\min \{c^Tx+T\}$

subject to

$\sum_s b^s y^s < t + M(1-\delta_1^s)$

$\sum_s p^s \delta_1^s \ge 1-\alpha$

$\sum_s b^s y^s < t + M(1-\delta_2^s)$

$\sum_s p^s \delta_2^s \le T$

$Ax \le d $

$B^s x+D^s y^s \le h^s $

$x, y^s \ge 0,\quad \delta_1^s, \delta_2^s \in \{0,1\}$
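The big-M construction above can be sanity-checked numerically. Below is a minimal sketch with hypothetical scenario data (the values $v_s = b^s y^s$ and probabilities $p^s$ are made up, and fixed rather than optimized): brute-forcing the indicator vector $\delta$ shows that the smallest feasible $t$ equals the empirical $(1-\alpha)$-quantile. Non-strict $\le$ is used in place of the strict $<$, since the infimum over a strict inequality is attained at the quantile for a finite scenario set.

```python
from itertools import product

# Hypothetical scenario data: values v[s] = b^s y^s and probabilities p[s].
values = [2.0, 5.0, 1.0, 8.0, 3.0]
probs  = [0.2, 0.2, 0.2, 0.2, 0.2]
alpha  = 0.25  # confidence level 1 - alpha = 0.75

def min_t_big_m(values, probs, alpha):
    """Smallest t for which some indicator vector delta satisfies
    v[s] <= t + M*(1 - delta[s]) and sum_s p[s]*delta[s] >= 1 - alpha.
    For M large enough, deselected scenarios (delta[s] = 0) impose no
    constraint, so the tightest t for a fixed delta is the max over the
    selected scenarios; we brute-force delta instead of calling a solver."""
    best = None
    for delta in product([0, 1], repeat=len(values)):
        if sum(p * d for p, d in zip(probs, delta)) >= 1 - alpha:
            t = max(v for v, d in zip(values, delta) if d == 1)
            best = t if best is None else min(best, t)
    return best

def empirical_var(values, probs, alpha):
    """Direct computation of the empirical (1-alpha)-quantile (VaR)."""
    cum = 0.0
    for v, p in sorted(zip(values, probs)):
        cum += p
        if cum >= 1 - alpha:
            return v

print(min_t_big_m(values, probs, alpha))   # 5.0
print(empirical_var(values, probs, alpha)) # 5.0
```

Both routines agree: the big-M chance-constraint formulation selects the cheapest set of scenarios carrying at least $1-\alpha$ of the probability mass, which is exactly the quantile computation.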

1 Answer

I'm going to answer here based on my best interpretation of what you're asking.

The problem you wish to solve is

\begin{equation} \begin{array}{rl} \min\ & c^\text{T}x + \text{VaR}_{1-\alpha}(b(\xi)^\text{T}y) \\ \text{s.t.}\ & B(\xi)x+D(\xi)y\leqslant h(\xi) \\ & x,y\geqslant 0 \end{array} \end{equation}

where $\xi$ is a random variable with finite support (i.e. has a finite number $N$ of realizations, which we index $s=1,\dots,N$). An equivalent formulation on this problem is

\begin{equation} \begin{array}{rl} \min\ & c^\text{T}x + t \\ \text{s.t.}\ & t\geqslant\text{VaR}_{1-\alpha}(b(\xi)^\text{T}y) \\ & B(\xi)x+D(\xi)y\leqslant h(\xi) \\ & x,y\geqslant 0 \end{array} \end{equation}

Now, the definition of VaR is

$$ \text{VaR}_{1-\alpha}(b(\xi)^\text{T}y)=F_Y^{-1}(1-\alpha), $$

where $F_Y$ is the CDF of the random variable $Y=b(\xi)^\text{T}y$ and $F_Y^{-1}$ denotes its generalized inverse. Since CDFs are non-decreasing by definition, the following are equivalent:

$$ t\geqslant\text{VaR}_{1-\alpha}(b(\xi)^\text{T}y)\iff F_Y(t)\geqslant1-\alpha\iff\text{Pr}[b(\xi)^\text{T}y\leqslant{t}]\geqslant1-\alpha. $$

Basically, we have reduced our problem to the following program:

\begin{equation} \begin{array}{rl} \min\ & c^\text{T}x + t \\ \text{s.t.}\ & \text{Pr}[b(\xi)^\text{T}y\leqslant{t}]\geqslant1-\alpha \\ & B(\xi)x+D(\xi)y\leqslant h(\xi) \\ & x,y\geqslant 0 \end{array} \end{equation}

The problem is that this formulation doesn't make much sense. If we want to solve a problem with recourse then the constraint

$$ \text{Pr}[b(\xi)^\text{T}y\leqslant{t}]\geqslant1-\alpha $$

doesn't make much sense, since we are choosing $y$ after the random variable $\xi$ is realized. On the other hand, if we are doing a true chance constraint, then we must choose $y$ before $\xi$ is realized, in which case, it's not clear what the constraint

$$ B(\xi)x+D(\xi)y\leqslant h(\xi) $$

means. Do we want to enforce this for all realizations of $\xi$? Just some? Do we want to enforce it as a chance constraint?

(It's sort of uncommon for scenario-based recourse models and chance constraints to be used in the same model).
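To make the equivalence between the VaR bound and the chance constraint concrete, here is a small numeric sketch with hypothetical, equiprobable scenario outcomes for $Y$ (made-up data, not from the original problem): $\text{VaR}_{1-\alpha}$ is the empirical $(1-\alpha)$-quantile, and $t \geqslant \text{VaR}$ holds exactly when $\Pr[Y \leqslant t] \geqslant 1-\alpha$.

```python
# Hypothetical finite-support Y: five equiprobable scenario outcomes.
outcomes = [1.0, 2.0, 3.0, 5.0, 8.0]
alpha = 0.25

def cdf(t):
    """Empirical CDF: Pr[Y <= t]."""
    return sum(1 for v in outcomes if v <= t) / len(outcomes)

def var(level):
    """Generalized inverse CDF: smallest outcome t with Pr[Y <= t] >= level."""
    return min(v for v in outcomes if cdf(v) >= level)

v = var(1 - alpha)  # VaR_{1-alpha}
# The equivalence t >= VaR  <=>  Pr[Y <= t] >= 1 - alpha, checked pointwise:
for t in [0.5, 2.5, 5.0, 9.0]:
    assert (t >= v) == (cdf(t) >= 1 - alpha)
print(v)  # 5.0
```

Here $1-\alpha = 0.75$, and the smallest outcome whose cumulative probability reaches $0.75$ is $5.0$ (with $F_Y(5.0)=0.8$), so both sides of the equivalence flip at exactly $t = 5.0$.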