Directional derivative of risk measure


Let $S,Z \in L^\infty(\Omega,\mathcal{F},\mathbb{P})$ and let $\rho \colon L^\infty \rightarrow \mathbb{R}$ be a risk measure. When does the directional (Gâteaux) derivative $$\lim_{\varepsilon \rightarrow 0} \frac{\rho(Z + \varepsilon S) - \rho(Z)}{\varepsilon}$$ exist? How is it linked to the coherence of the risk measure? As an example, can we compute the directional derivative of the expected shortfall $$\mathrm{ES}_\alpha(X) = \frac{1}{1-\alpha} \int_\alpha^1 F^{-1}_X(p) \,\mathrm{d}p \quad ?$$


Accepted answer

$\newcommand{\<}{\langle}\newcommand{\>}{\rangle}\renewcommand{\Re}{\mathbb{R}}\newcommand{\ES}{\mathrm{ES}}$This is a very interesting question, and the answer is affirmative under certain conditions. In fact, we can show that continuous coherent risk measures are Hadamard directionally differentiable, which is stronger than Gâteaux directional differentiability.

About differentiability of risk measures. Let $\mathcal{Z}=\mathcal{L}_p(\Omega, \mathscr{F}, \mathrm{P})$ and let $F:\mathcal{Z}\to\Re\cup \{+\infty\}$ be a convex extended-real-valued mapping. Recall that the convex conjugate of $F$ is the function $F^*:\mathcal{Z}^*\to\Re\cup \{+\infty\}$ defined as $$F^*(Y) = \sup_{Z\in\mathcal{Z}} \{\< Y,Z \> - F(Z)\},$$ and the biconjugate $F^{**}:\mathcal{Z}^{**}\to\Re\cup\{+\infty\}$ is defined analogously as the conjugate of $F^*$.

Let us also recall that the subgradient of $F$ at a point $X_0\in\mathcal{Z}$ is the set

$$ \partial F(X_0)=\{Y\in\mathcal{Z}^*: F(X) - F(X_0) \geq \< Y, X-X_0\>, \forall X\in\mathcal{Z}\}. $$

Then, by the conjugate subgradient theorem applied to $F^{**}$, for every $X$ such that $F^{**}(X)$ is finite we have $$ \partial F^{**}(X) = \arg\max_{Y\in \mathcal{Z}^{*}} \{ \<Y,X\> - F^*(Y)\}.\tag{1}\label{eq:1} $$ If $\rho:\mathcal{Z}\to\Re\cup\{+\infty\}$ is a lower semicontinuous coherent risk measure (such as the expected shortfall you mentioned), then $\rho = \rho^{**}$ by the Fenchel–Moreau theorem, so by \eqref{eq:1} we have $$ \partial \rho(X) = \partial \rho^{**}(X) = \arg\max_{Y\in \mathcal{Z}^{*}} \{ \<Y,X\> - \rho^*(Y)\},\tag{2}\label{eq:2} $$ for $X\in\operatorname{dom}\rho = \{X\in\mathcal{Z}: \rho(X) < \infty\}$.

Using the fact that coherent risk measures admit the dual representation

$$ \rho(X) = \sup_{Y\in\mathfrak{W}}\< Y,X\>, $$

for some convex, weak*-closed set $\mathfrak{W}\subseteq \mathcal{Z}^*$, we see, following \eqref{eq:2}, that the subgradient of $\rho$ at $X$ can be written as

$$ \partial \rho(X) = \arg\max_{Y\in\mathfrak{W}}\<Y, X\>. $$ Let us now recall the Hadamard directional derivative of $F:\mathcal{Z}\to\Re\cup\{+\infty\}$ at $X$ along a direction $D_0$, defined as

$$ F'(X;D_0)=\lim_{t\downarrow 0, D\to D_0} \frac{F(X+tD)-F(X)}{t}. $$

If a function $F:\mathcal{Z}\to\Re\cup\{+\infty\}$ is Hadamard directionally differentiable at $X$ along a direction $D$, then it is also Gâteaux directionally differentiable and the two derivatives coincide.

Furthermore, we know that if a convex mapping $F:\mathcal{Z}\to\Re\cup\{+\infty\}$ is finite-valued and continuous at $X$, then $F$ is subdifferentiable at $X$, the subdifferential $\partial F(X)$ is nonempty, convex, bounded and weak*-compact, and $F$ is Hadamard directionally differentiable at $X$ with

$$ F'(X;D) = \sup_{Y\in\partial F(X)}\< Y, D\>. $$

Differentiability of the expected shortfall. Note that in what follows $\alpha$ denotes the tail probability, i.e., $\ES_\alpha$ averages the worst $\alpha$-fraction of outcomes; in the parametrization of the question this corresponds to replacing $\alpha$ by $1-\alpha$. The expected shortfall $\ES_\alpha:\mathcal{Z}\to\mathbb{R}\cup\{+\infty\}$ admits the dual representation

$$ \ES_{\alpha}[X] = \sup_{Y\in\mathfrak{A}_\alpha}\< Y, X\> $$

where

$$ \mathfrak{A}_\alpha = \{ Y\in\mathcal{Z}^*: \mathbb{E}[Y]=1,\ Y\in [0,\alpha^{-1}] \text{ a.s.} \}. $$ The expected shortfall is subdifferentiable with \begin{align*} \partial\ES_{\alpha}[X] &= \arg\max_{Y\in\mathfrak{A}_\alpha} \<Y,X\> \\ &= \arg\max_{Y\in\mathcal{Z}^*} \left\{ \<Y,X\>: Y\in [0,\alpha^{-1}] \text{ a.s.},\ \mathbb{E}[Y]=1 \right\}. \end{align*}
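As a quick sanity check (my illustration, not part of the original argument), the dual representation can be verified on an empirical measure: placing the maximal density $\alpha^{-1}$ on the worst $\alpha$-fraction of scenarios is feasible and attains the supremum, which then equals the tail average. A minimal sketch in Python, assuming equally weighted scenarios and $\alpha n$ integer:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 0.05
n = 1_000                 # alpha * n = 50, an integer, so the optimum is clean
x = rng.standard_normal(n)  # hypothetical scenario values of X

# Maximizing density Y: weight 1/alpha on the worst alpha-fraction of
# scenarios, zero elsewhere; then E[Y] = 1 and Y is in [0, 1/alpha].
k = int(alpha * n)
y = np.zeros(n)
y[np.argsort(x)[-k:]] = 1.0 / alpha

dual_value = np.mean(y * x)            # <Y, X> under the empirical measure
tail_mean = np.sort(x)[-k:].mean()     # average of the worst alpha-fraction
```

The two quantities coincide, which is exactly the dual representation restricted to the empirical distribution.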

To proceed further, let us assume that $X$ has a continuous cumulative distribution function $H_X$, define $$ t^* = \inf\{t: H_X(t)\geq 1-\alpha\}, $$ and further assume that $\{t: H_X(t) = 1-\alpha\}$ is a singleton for the given $\alpha$.

Interpretation: the expected shortfall is equivalently written as $$\ES_\alpha(X) = \inf_{t\in\Re}\ \phi(t), \qquad \phi(t):=t+\alpha^{-1}\mathbb{E}[X-t]_+,$$ and $\phi$ attains its minimum at $t^*$.
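To illustrate (again my own check, assuming SciPy is available), for a standard normal $X$ one can verify numerically that $\phi$ is minimized at $t^* = H_X^{-1}(1-\alpha)$ and that its minimum equals the known closed form $\ES_\alpha = \varphi(t^*)/\alpha$, where $\varphi$ is the standard normal density:

```python
import numpy as np
from scipy.stats import norm

alpha = 0.05                          # tail probability
t_star = norm.ppf(1 - alpha)          # (1 - alpha)-quantile of X ~ N(0, 1)

def phi(t):
    # phi(t) = t + E[(X - t)_+] / alpha; for a standard normal,
    # E[(X - t)_+] = pdf(t) - t * P(X > t) in closed form.
    return t + (norm.pdf(t) - t * norm.sf(t)) / alpha

# Locate the minimizer of phi on a fine grid around the quantile.
ts = np.linspace(-1.0, 4.0, 200_001)
t_min = ts[np.argmin(phi(ts))]

es_closed_form = norm.pdf(t_star) / alpha  # known ES of a standard normal
```

The grid minimizer lands on $t^*$ and $\phi(t^*)$ reproduces the closed-form expected shortfall.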

Relaxing the equality constraint $\mathbb{E}[Y]=1$, we obtain the Lagrangian \begin{align*} L(Y,\lambda; X) &= \<Y, X\> + \lambda(1-\mathbb{E} [Y])\\ &= \<Y, X\> + \lambda - \lambda\,\mathbb{E}[Y]\\ &= \<Y, X - \lambda \> + \lambda. \end{align*}

We can now introduce the dual function

$$ \begin{aligned} q(\lambda) &= \sup_{Y\in [0,\alpha^{-1}]} L(Y,\lambda; X)\\ &= \sup_{Y\in [0,\alpha^{-1}]} \<Y, X - \lambda \> + \lambda\\ &= \sup_{Y\in [0,\alpha^{-1}]} \int Y(X-\lambda)\,\mathrm{d}\mathrm{P} + \lambda. \end{aligned} $$ The supremum is attained at $Y=\alpha^{-1}1_{[X-\lambda\geq 0]}$, so $$ q(\lambda) = \alpha^{-1}\mathbb{E}[X-\lambda]_+ + \lambda. $$ The dual problem is $$ \inf_{\lambda\in\Re}\ \alpha^{-1}\mathbb{E}[X-\lambda]_+ + \lambda. $$ Its set of minimizers is bounded, so strong duality holds.
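Strong duality can also be observed numerically on an empirical distribution (my own sketch, with hypothetical sample data): the primal tail average and the dual minimization over $\lambda$ produce the same value.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.1
n = 2_000                         # chosen so that alpha * n is an integer
x = rng.standard_normal(n)        # hypothetical sample of X

# Primal value: average of the worst alpha-fraction of outcomes.
k = int(alpha * n)
es_primal = np.sort(x)[-k:].mean()

# Dual value: minimize lam + E[X - lam]_+ / alpha over lam.  The objective
# is piecewise linear and convex in lam, so for a finite sample the minimum
# is attained at a sample point; searching over the sample values suffices.
lams = np.sort(x)
q_vals = lams + np.maximum(x[None, :] - lams[:, None], 0.0).mean(axis=1) / alpha
es_dual = q_vals.min()
```

The two values agree exactly when $\alpha n$ is an integer and the sample has no ties.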

Under the above assumptions ($H_X$ being continuous, $\{t: H_X(t)=1-\alpha\}$ being a singleton and for $t^*$ as above), $Y\in\partial \ES_\alpha(X)$ if and only if

$$ \begin{cases} \mathbb{E}[Y]=1, &\\ Y = \alpha^{-1} & \text{a.s. on } \{X>t^*\},\\ Y=0 & \text{a.s. on } \{X<t^*\},\\ Y\in [0,\alpha^{-1}] & \text{a.s. on } \{X=t^*\}. \end{cases} $$

Finally, the directional derivative of $\ES_\alpha$ at $X$ along $D$ is

$$ \ES'_\alpha(X;D) = \sup_{Y\in\partial\ES_\alpha(X)}\<Y,D\>. $$
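This formula can be sanity-checked by finite differences on an empirical distribution (a sketch of mine; with an atomless sample the subdifferential is essentially a singleton, so the supremum reduces to a single expectation $\<Y,D\>$):

```python
import numpy as np

def es(x, alpha):
    """Empirical expected shortfall: average of the worst alpha-fraction."""
    k = int(alpha * len(x))
    return np.sort(x)[-k:].mean()

rng = np.random.default_rng(1)
alpha, n = 0.1, 10_000
x = rng.standard_normal(n)        # hypothetical position X
d = rng.standard_normal(n)        # hypothetical direction D

# Subgradient element: Y = 1/alpha on {X >= t*}, 0 below, where t* is the
# empirical (1 - alpha)-quantile; then ES'(X; D) = <Y, D> = E[Y D].
t_star = np.sort(x)[-int(alpha * n)]
y = (x >= t_star) / alpha
deriv_formula = np.mean(y * d)

# Finite-difference approximation of the directional derivative.
eps = 1e-6
deriv_fd = (es(x + eps * d, alpha) - es(x, alpha)) / eps
```

As a consistency check, taking $D = X$ recovers $\ES'_\alpha(X;X)=\ES_\alpha(X)$, as expected from positive homogeneity.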