I'm currently taking an intermediate course in finance where we calculate Value-at-Risk (VaR) for portfolios and bonds. To use the VaR formula I need to know the number of standard deviations (the z-value) for different confidence intervals. My teacher has put up the following values:
C.I. 90% = ±1.64 s.d.
C.I. 95% = ±1.96 s.d.
C.I. 98% = ±2.33 s.d.
C.I. 99.9% = ±3.09 s.d.
When I looked at an old exam on calculating VaR, the C.I. was 99% and the student wrote that the z-value was 2.33. How is this possible? (P.S. the student got an A on this exam.)
Any help would be welcomed.
This is correct. Have a look at a standard table of the normal distribution. You find that, for $Z \sim N(0,1)$, $$ P(-\infty<Z\le 2.33)\approx0.9901. $$ This comes from a numerical evaluation of $$ \Phi(2.33)\; = \;\frac{1}{\sqrt{2\pi}} \int_{-\infty}^{2.33} e^{-t^2/2} \, dt\approx \color{blue}{0.99009692444083574978997\cdots} $$ The key point is that VaR uses a one-sided quantile: the 99% VaR z-value is the $z$ solving $\Phi(z)=0.99$, which gives $z\approx 2.33$. Your teacher's table lists two-sided intervals, and the two-sided 98% interval happens to use the same 2.33, because $1 - 0.02/2 = 0.99$.
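If you want to check these numbers yourself rather than read them off a table, here is a minimal sketch using Python's standard-library `statistics.NormalDist` (no external packages); the rounding to four decimals is just for display:

```python
from statistics import NormalDist

z = NormalDist()  # standard normal N(0, 1)

# One-sided 99% quantile: the z with P(Z <= z) = 0.99
print(round(z.inv_cdf(0.99), 4))          # ~2.3263, rounded to 2.33 in tables

# Check the CDF value the answer quotes: P(Z <= 2.33)
print(round(z.cdf(2.33), 4))              # ~0.9901

# Two-sided 98% interval uses the same z, since 1 - 0.02/2 = 0.99
print(round(z.inv_cdf(1 - 0.02 / 2), 4))  # same ~2.3263
```

The same calls reproduce the rest of the teacher's table, e.g. `z.inv_cdf(1 - 0.05 / 2)` gives roughly 1.96 for the two-sided 95% interval.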