Numerical simulation of SDE with Lévy noise in Python - Overflow issue


My goal is to simulate an SDE with $\alpha$-stable noise in Python. This is my code:

import numpy as np
import scipy.stats as stats

dt = 0.01 #time step size
N = 500 #number of time steps
T = dt*N #total time
ts = np.arange(0, T, dt, dtype=np.float64)

xs = np.zeros(N, dtype=np.float64)
xs[0] = 0 #initial condition

sigma = 0.001 #noise level
a = 0.5 #alpha (stability parameter)

for i in range(N-1):
    xs[i+1] = xs[i] + dt*(xs[i]*xs[i] - xs[i]) +  sigma*np.sqrt(dt)*stats.levy_stable.rvs(alpha=a,beta=0)

For $\alpha < 1$ I sometimes run into the following warnings:

RuntimeWarning: overflow encountered in double_scalars
  xs[i+1] = xs[i] + dt*(xs[i]*xs[i] - xs[i]) +  sigma*np.sqrt(dt)*stats.levy_stable.rvs(alpha=a,beta=0)
RuntimeWarning: invalid value encountered in double_scalars
  xs[i+1] = xs[i] + dt*(xs[i]*xs[i] - xs[i]) +  sigma*np.sqrt(dt)*stats.levy_stable.rvs(alpha=a,beta=0)

I think I understand why these warnings occur: for small $\alpha$ the distribution becomes increasingly heavy-tailed (the variance is already infinite for every $\alpha < 2$, and for $\alpha \le 1$ even the mean does not exist), so sooner or later a single enormous increment pushes $x$ beyond the range of a double. The overflow produces `inf`, and subsequent arithmetic on `inf` (e.g. `inf - inf`) then produces the `invalid value` (NaN) warning.
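To convince myself of the heavy tails, I compared how often a very heavy-tailed stable law and a Gaussian exceed a threshold (the threshold 10 and the sample size are just values I picked for illustration):

```python
import numpy as np
import scipy.stats as stats

rng = np.random.default_rng(42)

# same number of draws from an alpha = 0.5 stable law and a standard normal
stable = stats.levy_stable.rvs(alpha=0.5, beta=0, size=10_000, random_state=rng)
normal = rng.standard_normal(10_000)

# fraction of samples with magnitude above 10
frac_stable = np.mean(np.abs(stable) > 10)  # substantial for alpha = 0.5
frac_normal = np.mean(np.abs(normal) > 10)  # essentially zero for a Gaussian

print(frac_stable, frac_normal)
```

The stable samples exceed the threshold routinely, while the Gaussian essentially never does, which is why a single step of the Euler scheme can occasionally be astronomically large.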

My question is about how best to deal with this in a practical setting. Would it be legitimate to constrain the simulation to stay within the range of values NumPy can represent? Is there a more elegant approach? And did I diagnose the issue correctly?
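For concreteness, here is a minimal sketch of the "constrain the simulation" idea I have in mind (the helper name `simulate_clipped` and the cap value `1e6` are placeholders I made up, not an established recipe): clip the state to a finite box after each Euler step so a single huge stable increment cannot overflow.

```python
import numpy as np
import scipy.stats as stats

def simulate_clipped(N=500, dt=0.01, sigma=0.001, a=0.5, cap=1e6, seed=0):
    """Euler scheme from the question, but with the state clipped to
    [-cap, cap] after each step so no single increment can overflow."""
    rng = np.random.default_rng(seed)
    # draw all stable increments up front
    noise = stats.levy_stable.rvs(alpha=a, beta=0, size=N - 1, random_state=rng)
    xs = np.zeros(N, dtype=np.float64)
    for i in range(N - 1):
        step = xs[i] + dt * (xs[i] ** 2 - xs[i]) + sigma * np.sqrt(dt) * noise[i]
        xs[i + 1] = np.clip(step, -cap, cap)  # keep the state in a finite box
    return xs

xs = simulate_clipped()
```

This keeps the trajectory finite by construction, though of course the clipped path is no longer a faithful sample of the original SDE once the cap is hit.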

Any thoughts appreciated!