Computing infinitesimal generator for a jump process with continuous component


I am trying to show that the infinitesimal generator of the following process

$$ dS_t = (\alpha S_{t-}+\beta)\,dt + (\gamma S_{t-}+\delta)\,dX_t,$$

where $X_t$ is a $(\lambda,G)$-compound Poisson process (that is, $X_t=\sum_{k=1}^{N_t}Z_k$, where $N_t$ is a Poisson process with intensity $\lambda$ and the $Z_k\sim G$ are i.i.d. with $\operatorname{supp}(G)=\mathbb{R}$), is given by

$$\mathcal{L}(f)(x)=(\alpha x+\beta)f'(x)+\lambda \int_{-\infty}^{\infty}[f(x+\gamma xz+\delta z)-f(x)]dG(z)$$

This is Proposition 8.6.7.1 from Jeanblanc, Chesney and Yor (2009), *Mathematical Methods for Financial Markets*.
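As a sanity check (not part of the proof), the claimed formula for $\mathcal{L}f$ can be tested numerically by comparing $(\mathbb{E}^x[f(S_t)]-f(x))/t$ for small $t$ against $\mathcal{L}f(x)$. A minimal Monte Carlo sketch, with hypothetical parameter values, $f(x)=x^2$, and $G=N(0,1)$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameter choices, purely for illustration; G = N(0,1).
alpha, beta, gamma, delta, lam = 0.5, 0.1, 0.2, 0.3, 2.0
x0 = 1.0
f = lambda s: s ** 2          # test function
fprime = lambda s: 2.0 * s

def simulate_S(t, n_paths):
    """Draw S_t: evolve the linear ODE dS = (alpha*S + beta) ds exactly
    between jumps, and apply Delta S = (gamma*S_- + delta) * Z at jumps."""
    out = np.empty(n_paths)
    for i in range(n_paths):
        s, clock = x0, 0.0
        while True:
            tau = rng.exponential(1.0 / lam)   # next inter-jump time
            step = min(tau, t - clock)
            e = np.exp(alpha * step)           # exact ODE flow (alpha != 0)
            s = s * e + (beta / alpha) * (e - 1.0)
            clock += step
            if clock >= t:
                break
            s += (gamma * s + delta) * rng.standard_normal()
        out[i] = s
    return out

# Generator at x0: drift term plus Monte Carlo estimate of the jump integral.
z = rng.standard_normal(200_000)
Lf = (alpha * x0 + beta) * fprime(x0) + lam * np.mean(
    f(x0 + (gamma * x0 + delta) * z) - f(x0))

# Finite-difference approximation of (E^x[f(S_t)] - f(x)) / t for small t.
t = 1e-3
approx = (f(simulate_S(t, 200_000)).mean() - f(x0)) / t
```

With these parameters the exact value is $\mathcal{L}f(x_0)=1.2+\lambda\cdot(\gamma x_0+\delta)^2=1.7$, and `approx` agrees up to Monte Carlo noise.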

I was hoping someone could clarify one of the steps in the proof of this result. The details are below.


The approach in the book is to apply path-by-path Stieltjes integration to write,

$$ f(S_t)-f(x) = \int_0^t f'(S_{s-})(\alpha S_{s-}+\beta)\,ds+\sum_{0< s\leq t} \Delta f(S_s),$$ where $\Delta f(S_s)=f(S_s)-f(S_{s-})$, and then take expectations of this expression to obtain

$$ \mathbb{E}^x[f(S_t)-f(x)]= \mathbb{E}^x\left[\int_0^t f'(S_{s-})(\alpha S_{s-}+\beta)\,ds+\sum_{0< s\leq t} \Delta f(S_s)\right]\\ =\mathbb{E}^x\left[\int_0^t f'(S_{s-})(\alpha S_{s-}+\beta)\,ds\right]+\mathbb{E}^x\left[\sum_{0< s\leq t} f(S_{s-}+ \Delta S_s)-f(S_{s-})\right]$$

Up to this point, everything is clear. It is easy to show that, after dividing by $t$ and letting $t\downarrow 0$, the first term converges to $(\alpha x+\beta)f'(x)$.

The step I do not understand is the following. The authors write:

$$ \mathbb{E}^x\left[\sum_{0< s\leq t} f(S_{s-}+\Delta S_s)- f(S_{s-})\right] \\ =\mathbb{E}^x\left[ \int_0^t\int_{\mathbb{R}} \big(f(S_{s-}+(\gamma S_{s-}+\delta)z) -f(S_{s-})\big)\,dG(z)\,\lambda\, ds\right] \qquad (*)$$

Why does this equality follow? How can it be justified? Or is there an alternative way to proceed?
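For intuition, equality $(*)$ looks like an instance of the compensation formula for marked point processes. Its simplest special case, $\mathbb{E}\big[\sum_{k=1}^{N_t} g(Z_k)\big]=\lambda t\,\mathbb{E}[g(Z_1)]$ for a function $g$ of the marks alone, is at least easy to check numerically. A small sketch with $G=N(0,1)$ and an arbitrary bounded test function:

```python
import numpy as np

rng = np.random.default_rng(1)
lam, t, n_paths = 2.0, 1.0, 100_000
g = lambda z: np.exp(-z ** 2)   # arbitrary bounded test function

# LHS: average over paths of the sum of g over the jump marks Z_k, k <= N_t.
totals = np.empty(n_paths)
for i in range(n_paths):
    n = rng.poisson(lam * t)                    # number of jumps on [0, t]
    totals[i] = g(rng.standard_normal(n)).sum()
lhs = totals.mean()

# RHS: lam * t * E[g(Z)], estimated by plain Monte Carlo over G = N(0,1).
rhs = lam * t * g(rng.standard_normal(500_000)).mean()
```

Here both sides estimate $\lambda t\,\mathbb{E}[e^{-Z^2}]=2/\sqrt{3}$. The difficulty in $(*)$, of course, is that the integrand also depends on the path through $S_{s-}$.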


I tried writing this sum as $$ \mathbb{E}^x\left[\sum_{0< s\leq t} f(S_{s-}+\Delta S_s)-f(S_{s-})\right]=\mathbb{E}^x\left[ \int_0^t f(S_{s-}+(\gamma S_{s-}+\delta)Z_{N_s}) -f(S_{s-})\,dN_s\right],$$ where $Z_{N_s}$ denotes the mark of the jump occurring at time $s$. I can then rewrite this expectation as $$\mathbb{E}^x\left[ \int_0^t f(S_{s-}+(\gamma S_{s-}+\delta)Z_{N_s}) -f(S_{s-})\,dN_s\right] = \mathbb{E}^x\left[ \int_0^t f(S_{s-}+(\gamma S_{s-}+\delta)Z_{N_s}) -f(S_{s-})\,d(N_s-\lambda s)\right] \\ + \mathbb{E}^x\left[ \int_0^t f(S_{s-}+(\gamma S_{s-}+\delta)Z_{N_s}) -f(S_{s-})\,\lambda\, ds\right] = \mathbb{E}^x\left[ \int_0^t f(S_{s-}+(\gamma S_{s-}+\delta)Z_{N_s}) -f(S_{s-})\,d(N_s-\lambda s)\right] + \\ \lambda\, \mathbb{E}^x\left[\int_0^t\int_{\mathbb{R}} f(S_{s-}+(\gamma S_{s-}+\delta)z) -f(S_{s-})\, dG(z)\,ds\right]$$ So if I could argue that the expectation of the first term is $0$, I would be done. But the integrand is only right-continuous rather than predictable (it involves the jump mark itself), and so I do not think the compensated integral is a martingale.

I went very carefully through the book today, and it is hinted that they use a random measure of the form $\mu=\sum_n \delta_{(T_n,Z_n)}$, where the $T_n$ are the jump times of $N$, and claim that $$ \int_0^t\int_{\mathbb{R}} \big(f(S_{s-}+(\gamma S_{s-}+\delta)z) -f(S_{s-})\big)\,\big(\mu(ds,dz)-\lambda\, dG(z)\,ds\big)$$ is a local martingale. But I am not sure how to use that fact.
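Edit: I suppose the intended use is the following (assuming, say, that $f$ is bounded, so that the local martingale is a true martingale). The jump sum is exactly the integral against $\mu$,

$$\sum_{0< s\leq t}\big(f(S_{s-}+\Delta S_s)-f(S_{s-})\big)=\int_0^t\int_{\mathbb{R}}\big(f(S_{s-}+(\gamma S_{s-}+\delta)z)-f(S_{s-})\big)\,\mu(ds,dz),$$

so adding and subtracting the compensator $\lambda\, dG(z)\,ds$ and taking $\mathbb{E}^x$ makes the martingale part vanish, leaving exactly the right-hand side of $(*)$. Is this the right way to see it?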