I have a question, maybe a simple one, maybe one that doesn't make sense, but I haven't seen it addressed anywhere: what happens when negative interest rates make the discounting formula ill-defined?
The discount factor for the period $[0,t)$ is given as $$ DF(0,t) = \frac{1}{1 + r_t \cdot t}, $$
where $r_t$ is the (simple, non-compounded) interest rate for the period $[0,t)$.
What happens mathematically when $r_t$ is negative, say $r_t = -0.03$? With a time period of $t = 30$, the denominator becomes $1 + (-0.03) \cdot 30 = 0.1$, so the formula gives $\frac{1}{0.1} = 10$, an absurd discount factor. I know this is an unrealistic example, but nothing mathematically prevents the denominator from getting arbitrarily close to zero, producing absurdly large discount factors.
Furthermore, a sufficiently negative $r_t$ (anything below $-1/t$) pushes the denominator onto the negative real axis, resulting in a negative discount factor, which doesn't make sense either.
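To make both failure modes concrete, here is a minimal Python sketch; the rates and the 30-year horizon are just illustrative values:

```python
def df_simple(r, t):
    """Simple-interest discount factor: 1 / (1 + r * t)."""
    return 1.0 / (1.0 + r * t)

# Denominator close to zero: 1 + (-0.03) * 30 = 0.1
print(df_simple(-0.03, 30))  # 10.0 (absurdly large)

# Denominator negative: 1 + (-0.04) * 30 = -0.2
print(df_simple(-0.04, 30))  # -5.0 (a negative discount factor)

# The formula breaks down exactly when r < -1/t;
# for t = 30 that threshold is about -3.33%.
```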
I am running tests in which different future interest rate curves are generated, and some instances produce exactly this situation: the curve stays negative even after $30$ years or more. The resulting discount factors are wildly wrong, reaching into the hundreds or beyond, and they 'ruin' some of my averaging procedures.
Am I using the formula incorrectly? Is there literature that addresses this, and if so, could someone point me to it? Opinions and experience are welcome too.
I think your discount factor might be better as $\frac{1}{(1 + r_t)^t}$, i.e. compounding. It agrees with $\frac{1}{1 + r_t \cdot t}$ to first order when $r_t \cdot t$ is close to $0$.
Then you could happily have any $-100\% \lt r_t \lt 0\%$ without the discount factor itself going negative, since $1 + r_t$ stays positive.
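A quick sketch of the difference, reusing the illustrative inputs from the question (the rates and the 30-year horizon are assumptions, not anything canonical):

```python
def df_simple(r, t):
    """Simple-interest discount factor: 1 / (1 + r * t)."""
    return 1.0 / (1.0 + r * t)

def df_compound(r, t):
    """Compound discount factor: 1 / (1 + r)**t, well defined for r > -100%."""
    return 1.0 / (1.0 + r) ** t

for r in (-0.03, -0.04):
    print(f"r={r:+.2%}: simple={df_simple(r, 30):8.2f}, compound={df_compound(r, 30):.2f}")

# r=-3.00%: simple=   10.00, compound=2.49
# r=-4.00%: simple=   -5.00, compound=3.40
```

Note that the compound factor still exceeds $1$ under negative rates (a future cash flow is worth more than its face value, as it should be), but it stays positive and finite for any $r_t \gt -100\%$, which is what keeps averaging procedures from blowing up.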
Recently there have been cases around the world of negative nominal interest rates, and there have been many more cases over time of negative real interest rates, so this scenario is not implausible.