Time-varying variance confidence interval using variance stabilizing transformation


I'm having some difficulties understanding a question for a Financial Engineering assignment and I was hoping I could get some intuition on this subject.

Basically, we start with a time-series of daily log-returns on the market (computed from the Fama-French market returns since 1926). This time-series looks like this. We are then asked to compute a rolling variance (window of 60 days) for every day in the sample (except naturally for the first 60 days). So far so good. But then, we are asked to compute confidence intervals using variance-stabilizing transformations.
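The rolling variance step can be sketched as follows. This is a minimal illustration, assuming a pandas Series of daily log-returns (random data stands in for the actual Fama-French series):

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in for the Fama-French daily market log-returns;
# in the assignment these would be loaded from the real dataset.
rng = np.random.default_rng(0)
returns = pd.Series(rng.normal(0.0, 0.01, size=1000))

# 60-day rolling sample variance; the first 59 entries are NaN
# because a full 60-day window is not yet available.
rolling_var = returns.rolling(window=60).var()
```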

That's where I get lost. My first idea was to compute confidence bounds for the rolling variance using the critical values of the chi-squared distribution with 59 degrees of freedom. When I do that, the bounds follow the time-varying variance nicely. But this approach doesn't use a variance-stabilizing transformation at all.
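For reference, the chi-squared bounds I mean come from the pivot (n-1)s²/σ² ~ χ²_{n-1} for a window of n normal observations. A minimal sketch (the sample variance value is made up for illustration):

```python
import numpy as np
from scipy.stats import chi2

def variance_ci(sample_var, n=60, alpha=0.05):
    """95% CI for the true variance given the sample variance of n
    normal observations, via the pivot (n-1)*s^2/sigma^2 ~ chi2(n-1)."""
    df = n - 1
    lower = df * sample_var / chi2.ppf(1 - alpha / 2, df)
    upper = df * sample_var / chi2.ppf(alpha / 2, df)
    return lower, upper

# Example with a hypothetical 60-day sample variance of 1e-4.
lo, hi = variance_ci(1e-4)
```

Applied to each day's rolling variance, this gives bounds that track the variance estimate, which is the behaviour described above.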

If I try to follow the example our professor showed us on VST, I don't understand how it could apply to this problem. What he does:

  • Compute first the daily mean and variance of the log-returns

  • Use the computed daily mean and variance to simulate, 10000 times, a series of 24057 daily returns (the number of days in our original dataset), as: simulated return = standard normal draw N(0,1) × daily standard deviation + daily mean

  • Compute, for each of the 10000 simulations, the sample variance of its 24057 simulated returns (in our case, for 5000 simulations, this would lead to this)

  • Compute a t-value for each simulation as t = (simulated variance − original variance of the daily log-returns) / sqrt(2 × simulated variance² / 24057)

  • Apply a VST to the t-values as follows: t = (log(simulated variance) − log(original variance of the daily log-returns)) / sqrt(2 / 24057)
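The steps above can be sketched in code. This is a rough reconstruction under stated assumptions: the daily mean and variance are placeholder values, and the simulation count is reduced from 10000 to 1000 so the sketch runs quickly:

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed stand-ins: daily mean and variance estimated from the
# log-returns, n = number of days in the dataset, n_sims reduced
# from the assignment's 10000 for speed.
mu, sigma2, n, n_sims = 0.0003, 1e-4, 24057, 1000
sigma = np.sqrt(sigma2)

# Simulate n daily returns per run and take each run's sample variance.
sim_vars = np.array([
    np.var(rng.standard_normal(n) * sigma + mu, ddof=1)
    for _ in range(n_sims)
])

# t-values without the VST: (s^2 - sigma^2) / sqrt(2 * s^4 / n)
t_raw = (sim_vars - sigma2) / np.sqrt(2 * sim_vars**2 / n)

# t-values with the log (variance-stabilizing) transform:
# (log s^2 - log sigma^2) / sqrt(2 / n)
t_vst = (np.log(sim_vars) - np.log(sigma2)) / np.sqrt(2 / n)

# Simulated 2.5% / 97.5% critical values of the VST t-statistic.
lo, hi = np.percentile(t_vst, [2.5, 97.5])
```

With n this large, both t_raw and t_vst come out close to standard normal, which matches the observation below that the two sets of t-values look nearly identical.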

First of all, these new t-values seem to be nearly identical in distribution to the t-values without the VST, so why bother with the VST at all? Second, I don't see how computing the 2.5% and 97.5% percentiles of this distribution of simulated t-values can stabilize the time-varying variance we had in the original time-series of log-returns.

In a follow-up question, we have to compute a 5% Value at Risk for every day of the sample, starting at day 60, so I presume we will need to use the fact that the variance is time-varying.
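For the follow-up, a rolling Gaussian VaR is one plausible reading of the question. A minimal sketch, assuming a normal model with the 60-day rolling mean and standard deviation (random data stands in for the actual returns):

```python
import numpy as np
import pandas as pd

# Hypothetical daily log-returns standing in for the Fama-French series.
rng = np.random.default_rng(1)
returns = pd.Series(rng.normal(0.0, 0.01, size=500))

# 5% quantile of the standard normal (about -1.645), hardcoded
# to keep the sketch dependency-free.
z = -1.6448536269514722

# Rolling 60-day mean and std, then a Gaussian 5% VaR for each day.
mu_t = returns.rolling(60).mean()
sigma_t = returns.rolling(60).std()
var_5pct = -(mu_t + z * sigma_t)  # positive number = potential loss
```

Because sigma_t is recomputed each day, the VaR widens and narrows with the time-varying variance, which is presumably the point of the exercise.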

Hope that was a bit clear. Could someone shed some light on this? Thank you in advance!