I'm reading a financial mathematics paper and was wondering why the volatility of a price series might be given in terms of the natural logarithm of time:
$$\sigma_T = \sigma \sqrt{\ln T}.$$
I would have thought that the standard deviation would be given by:
$$ \sigma_T = \sigma \sqrt{T}. $$
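For concreteness, here is a quick simulation (a rough sketch of my own, assuming i.i.d. normal daily log-returns with an annualized volatility $\sigma$ of 20%) that reproduces the $\sqrt{T}$ scaling I had in mind:

```python
import numpy as np

rng = np.random.default_rng(0)

sigma = 0.20          # assumed annualized volatility
dt = 1 / 252          # one trading day, in years
n_paths = 10_000

for T in (0.5, 1.0, 2.0, 4.0):            # horizons in years
    n_steps = round(T / dt)
    # i.i.d. daily log-returns, each with standard deviation sigma*sqrt(dt);
    # variances add, so the sd of their sum over [0, T] is sigma*sqrt(T).
    daily = rng.normal(0.0, sigma * np.sqrt(dt), size=(n_paths, n_steps))
    cum_log_return = daily.sum(axis=1)
    print(f"T = {T}: empirical sd = {cum_log_return.std():.4f}, "
          f"sigma*sqrt(T) = {sigma * np.sqrt(T):.4f}")
```

The empirical standard deviation tracks $\sigma\sqrt{T}$ at every horizon, not $\sigma\sqrt{\ln T}$.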
Can anyone explain the logic behind the first equation? I don't understand why $\ln$ needs to be used. Thanks!
I have never seen $\sigma\sqrt{\ln(T)}$ used as a stock's volatility; it is very awkward, because if the horizon is shorter than 1 year, the volatility does not even exist: for $T = 0.5$, say, $\ln T \approx -0.69 < 0$, so the square root is not real!
Usually, when it comes to modelling a stock $S_t$, a basic approach is to assume that $\ln(S_t)$ is a normal process.
Indeed, this guarantees the positivity of the stock price, and it makes the volatility of the stock proportional to its level, a feature that can be observed in the market: a stock with a high price will tend to move more, in absolute terms, than a penny stock. Obviously, it can easily be shown that a lognormal process has a lot of drawbacks: a constant $\sigma$ has never been observed (https://en.wikipedia.org/wiki/Volatility_smile), and historically, returns are not normal! However, it is the simplest well-defined distribution that we know.
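To illustrate, here is a minimal sketch of that lognormal model (a geometric Brownian motion with made-up parameters: $\sigma = 20\%$, zero drift, one year of daily steps): the simulated prices stay strictly positive, and the dollar size of the daily moves scales with the price level.

```python
import numpy as np

rng = np.random.default_rng(1)

sigma, dt, n = 0.20, 1 / 252, 252   # made-up volatility, daily step, one year

def gbm_path(S0: float) -> np.ndarray:
    """One-year path in which ln(S_t) is normal, so every price is positive."""
    z = rng.standard_normal(n)
    log_ret = -0.5 * sigma**2 * dt + sigma * np.sqrt(dt) * z  # zero drift
    return S0 * np.exp(np.cumsum(log_ret))

big, penny = gbm_path(100.0), gbm_path(1.0)
print(f"min prices: {big.min():.2f} and {penny.min():.4f} (both > 0)")
# Same sigma, but the absolute (dollar) moves scale with the price level:
print(f"mean |daily move|: {np.abs(np.diff(big)).mean():.3f} "
      f"vs {np.abs(np.diff(penny)).mean():.5f}")
```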