Continuous-time Markov chain: expected number of transitions in an interval


For a continuous-time Markov chain we have the following assumptions and definitions:

Assumption 1: We assume that for any states i and j and any times t and t+s where $s\ge 0$, the conditional probability $P(Y(t+s)=j|Y(t)=i)$ is well-defined in the sense that its value does not depend on any information about the process before time t.

Assumption 2: We assume that for any positive time interval h:

P(Two or more transitions within a time period of length h)=o(h).

That a function $f$ is $o(h)$ means that $\lim\limits_{h\rightarrow 0}\frac{f(h)}{h}=0$.
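To illustrate the definition (these example functions are my own, not from the book): $f(h)=h^2$ is $o(h)$, while $f(h)=h$ is not:

```latex
\lim_{h\to 0}\frac{h^2}{h}=\lim_{h\to 0} h = 0,
\qquad
\lim_{h\to 0}\frac{h}{h}=1\neq 0 .
```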

Definition 1:

$_tp_x^{ij}=P(Y(x+t)=j\mid Y(x)=i)$

Definition 2:

$\mu_x^{ij}=\lim\limits_{h \rightarrow 0^+}\frac{_hp_x^{ij}}{h}$ for $i\ne j$.

Assumption 3: For all states i and j and all $x \ge 0$ , we assume that $_tp_x^{ij}$ is a differentiable function in t.

Lemma 1: $_hp_x^{ij}=\mu_x^{ij}h+o(h)$ for $i\ne j$. Proof: omitted.
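Although the proof is omitted, the lemma seems to follow directly from Definition 2: writing $\varepsilon(h) := {}_hp_x^{ij}/h - \mu_x^{ij}$, Definition 2 says $\varepsilon(h)\to 0$ as $h\to 0^+$, so

```latex
{}_hp_x^{ij} = \mu_x^{ij}\,h + h\,\varepsilon(h) = \mu_x^{ij}\,h + o(h),
\qquad\text{since}\quad
\lim_{h\to 0^+}\frac{h\,\varepsilon(h)}{h}
  = \lim_{h\to 0^+}\varepsilon(h) = 0 .
```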

Assume that at time x the process starts in state i. A book that I am reading says that the expected number of jumps from state j to state k in the interval $[x+t,x+t+\Delta t]$ is:

$_tp_x^{ij}\cdot \mu_{x+t}^{jk}\Delta t+o(\Delta t)$.

Can someone please explain why this is the case? Why do we have $o(\Delta t)$?
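One can at least check the formula numerically. Below is a minimal sketch (my own, not from the book) using a time-homogeneous two-state chain with made-up rates `lam` (for $0\to 1$) and `mu` (for $1\to 0$), where the transition probability has the closed form $_tp^{01}=\frac{\lambda}{\lambda+\mu}\bigl(1-e^{-(\lambda+\mu)t}\bigr)$. It simulates many paths Gillespie-style, counts the $1\to 0$ jumps landing in $[t, t+\Delta t]$, and compares the average with the first-order term $_tp^{01}\cdot\mu\cdot\Delta t$:

```python
import math
import random

def simulate_jump_count(lam, mu, t, dt, n_paths, seed=0):
    """Average number of 1->0 jumps per path in [t, t+dt] for a
    two-state CTMC (rate lam: 0->1, rate mu: 1->0), each path
    starting in state 0 at time 0.  Gillespie-style simulation."""
    rng = random.Random(seed)
    horizon = t + dt
    total = 0
    for _ in range(n_paths):
        state, clock = 0, 0.0
        while True:
            rate = lam if state == 0 else mu
            clock += rng.expovariate(rate)  # exponential holding time
            if clock > horizon:
                break
            if state == 1 and clock >= t:   # a 1->0 jump inside [t, t+dt]
                total += 1
            state = 1 - state
    return total / n_paths

lam, mu, t, dt = 1.0, 2.0, 0.5, 0.05
# Closed-form transition probability  _t p^{01}  for the two-state chain.
p01 = lam / (lam + mu) * (1 - math.exp(-(lam + mu) * t))
approx = p01 * mu * dt  # first-order term  _t p_x^{ij} * mu_{x+t}^{jk} * dt
sim = simulate_jump_count(lam, mu, t, dt, n_paths=100_000)
print(f"first-order term: {approx:.5f}, simulated mean count: {sim:.5f}")
```

For small $\Delta t$ the two numbers agree to within the $o(\Delta t)$ remainder plus Monte Carlo noise, which is consistent with the book's claim.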

The reason I am wondering is that the remainder, i.e. the contribution from two or more jumps, must be:

$\sum\limits_{l=2}^{\infty}l\cdot P(l \text{ transitions from } j \text{ to } k \text{ in the interval})$.

I don't see why this quantity is $o(\Delta t)$. The individual probabilities are $o(\Delta t)$, sure, but each is multiplied by $l$, and there is even the possibility of infinitely many transitions in the interval, in which case the sum could become infinite. There is also the problem that we are summing countably many, not finitely many, $o(\Delta t)$ functions.

Can you please help me?