How do you interpret the infinite activity property of variance gamma process?


I am a little bit confused about the infinite activity property of the variance gamma (VG) process $X(t),$ defined by $$X(t)=\theta G(t) + \sigma W(G(t)),$$ where $G(t)$ is a gamma process and $W$ is a Brownian motion: on any finite interval, the VG process has infinitely many jumps.
My current understanding is:
For any $h>0$, we can find an $\varepsilon$ such that $|X(t+h)-X(t)|>\varepsilon$, and as $h \rightarrow 0$, $\varepsilon \rightarrow 0$; that is, for any time step $h$ there exists a small jump, and the magnitude of the jumps becomes infinitesimally small as the rate of jumps tends to infinity. Am I right?
On the other hand, I also have a question about the gamma process. On Wikipedia the gamma process is often written as $\Gamma (t;\gamma ,\lambda )$. What is the expression for $\Gamma (t;\gamma ,\lambda )$? And what does the gamma process mean; does it give the jump size during a time step $t$?

1 Answer
This will be a partial answer.

First, there are things to know about jumps of monotone functions before you get into anything involving probability.

  • A nondecreasing function cannot have any discontinuities other than jumps.
  • The set of jumps that it has in each interval in its domain is either finite or countably infinite; it cannot be uncountably infinite.
  • In every interval $I$ in the domain and for every positive number $a,$ the number of jumps of size at least $a$ that such a function has within the interval $I$ is finite.

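The third bullet connects directly to infinite activity. For a gamma process with parameters $(\gamma,\lambda)$, the jumps form a Poisson random measure whose intensity (Lévy) density is $\gamma\, e^{-\lambda x}/x$ for $x>0$; hence the expected number of jumps of size greater than $\varepsilon$ in $[0,t]$ is finite for every $\varepsilon>0$ but diverges as $\varepsilon\to 0$. A minimal numerical sketch (parameter values are arbitrary illustrations):

```python
import numpy as np
from scipy.integrate import quad

# Lévy density of the gamma process Gamma(t; gamma_, lam):
# nu(x) = gamma_ * exp(-lam * x) / x  for x > 0.
gamma_, lam, t = 1.0, 2.0, 1.0

def expected_jumps_above(eps):
    # Expected number of jumps of size > eps in [0, t]:
    # t * integral from eps to infinity of nu(x) dx.
    # Finite for eps > 0, divergent as eps -> 0 (infinite activity).
    val, _ = quad(lambda x: gamma_ * np.exp(-lam * x) / x, eps, np.inf)
    return t * val

for eps in [1.0, 0.1, 0.01, 0.001]:
    print(eps, expected_jumps_above(eps))
```

The counts grow without bound as the cutoff shrinks, which is the precise sense in which "infinitely many jumps" occur on any finite interval: almost all of them are tiny, and only finitely many exceed any fixed size.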
And there are things to know about the gamma distribution before getting into the gamma process:

  • The gamma distribution is this probability distribution: $$ \frac 1 {\Gamma(\alpha)} (\lambda x)^{\alpha-1} e^{-\lambda x} (\lambda\, dx) \quad\text{for } x>0. $$ The two parameters $\alpha,\lambda$ are positive.
  • The reason why $\alpha-1$ rather than $\alpha$ is seen in the exponent is that if two independent random variables have gamma distributions with shape parameters $\alpha_1,\alpha_2$ (and the same $\lambda$), then their sum has a gamma distribution with shape parameter $\alpha_1+\alpha_2,$ and similarly for more than two.
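The additivity in the second bullet is easy to check numerically; here is a minimal sketch (the parameter values are arbitrary illustrations, and the check compares sample moments against the theoretical moments of a $\Gamma(\alpha_1+\alpha_2,\lambda)$ variable, namely mean $(\alpha_1+\alpha_2)/\lambda$ and variance $(\alpha_1+\alpha_2)/\lambda^2$):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
lam = 2.0            # common rate parameter
a1, a2 = 0.7, 1.8    # shape parameters of the two independent summands

# Sum of independent Gamma(a1, lam) and Gamma(a2, lam) variables.
# (NumPy's gamma() takes shape and *scale* = 1/rate.)
s = rng.gamma(a1, 1 / lam, n) + rng.gamma(a2, 1 / lam, n)

print(s.mean())  # theoretical value: (a1 + a2) / lam    = 1.25
print(s.var())   # theoretical value: (a1 + a2) / lam**2 = 0.625
```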

Now break a time interval of length $\alpha$ into tiny time intervals of lengths $\alpha_1,\ldots,\alpha_n.$ Let the value of a random process at time $0$ be $0$; let its value at time $\alpha_1$ be a gamma-distributed random variable with shape parameter $\alpha_1$; let its value at time $\alpha_1+\alpha_2$ be the sum of that first gamma-distributed random variable and another, independent of it, with shape parameter $\alpha_2$; and so on, and similarly if one breaks that short interval of length $\alpha_1$ into yet smaller intervals. This is an infinitely divisible process. Such is the gamma process.
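The construction above can be sketched as follows, assuming for simplicity that the shape parameter equals elapsed time, so the increment over a step of length $dt$ is $\Gamma(dt,\lambda)$-distributed. By the additivity property, summing the increments gives a value at time $T$ that is $\Gamma(T,\lambda)$-distributed, which the sample moments confirm:

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 3.0                  # rate parameter
T, n_steps = 2.0, 1000     # horizon and number of increments
dt = T / n_steps

# Each increment over a step of length dt is Gamma(dt, lam)-distributed;
# the increments are independent, so their cumulative sum is a discretized
# gamma process path, and its value at time T is Gamma(T, lam)-distributed.
n_paths = 100_000
increments = rng.gamma(dt, 1 / lam, size=(n_paths, n_steps))
G_T = increments.sum(axis=1)

print(G_T.mean())  # theoretical value: T / lam    = 0.666...
print(G_T.var())   # theoretical value: T / lam**2 = 0.222...
```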

The value of the expression $\Gamma(t;\gamma,\lambda)$ is the value of such a process at time $t$ when the $\alpha$ parameter is $\gamma t.$ Thus for every $t>0,$ $\Gamma(t;\gamma,\lambda)$ is a random variable whose distribution is the gamma distribution above with $\alpha=\gamma t.$

That value is the sum of all of the jump sizes between time $0$ and time $t.$
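Putting the pieces together, the VG process in the question can be simulated by conditioning on the gamma subordinator: given $G(t)=g,$ the value $W(G(t))$ is normal with mean $0$ and variance $g.$ The sketch below uses one common parameterization (as an assumption, not the only convention) in which the subordinator has mean $t$ and variance $\nu t$; under it, $\mathbb{E}[X(t)]=\theta t$ and $\operatorname{Var}[X(t)]=\sigma^2 t+\theta^2\nu t.$

```python
import numpy as np

rng = np.random.default_rng(2)
theta, sigma = 0.1, 0.25   # drift and volatility of the subordinated Brownian motion
nu = 0.5                   # assumed variance rate of the gamma subordinator
t = 1.0
n = 200_000

# Gamma subordinator G(t) with mean t and variance nu * t:
# shape = t / nu, scale = nu.
G = rng.gamma(t / nu, nu, n)

# X(t) = theta * G(t) + sigma * W(G(t)); conditionally on G, W(G) ~ N(0, G).
X = theta * G + sigma * np.sqrt(G) * rng.standard_normal(n)

print(X.mean())  # theoretical value: theta * t                       = 0.1
print(X.var())   # theoretical value: sigma**2 * t + theta**2 * nu * t = 0.0675
```

Note that the simulation only samples the process at a fixed time; the infinitely many small jumps live in the subordinator's path between sampling times and are not individually resolved here.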