Central Limit Theorem for time-continuous Markov Process


Suppose we have a time-continuous Markov process $(X_t)_{t\in\mathbb{R}}$. If $X_0=j$, then the process needs a random amount of time, say $T_1$, before it jumps to a new state $i$; $T_1$ is a continuous random variable. All the waiting times $T_1, T_2, T_3,\dots$ between jumps are exponentially distributed, i.e. $T_k\sim\text{Exp}(\lambda_k)$ with $\lambda_k>0$ for $k\in\mathbb{N}$. The rate of jumping from a state $j$ to a state $i$ is given by a certain density $\sigma_t(j\to i)$.
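To make the setup concrete, here is a minimal simulation sketch of such a chain. The three states, the exit rates, and the jump probabilities are all made up for illustration; they are not part of the question.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-state chain (made-up parameters): exit rates lambda_j
# and the jump probabilities of the embedded chain (rows sum to 1).
rates = np.array([1.0, 2.0, 0.5])
P = np.array([[0.0, 0.7, 0.3],
              [0.5, 0.0, 0.5],
              [0.4, 0.6, 0.0]])

def simulate(n_jumps, state=0):
    """Simulate n_jumps transitions; return visited states and holding times."""
    states, times = [state], []
    for _ in range(n_jumps):
        # Holding time in the current state: T ~ Exp(lambda_state).
        times.append(rng.exponential(1.0 / rates[state]))
        # Jump to the next state according to the embedded chain.
        state = rng.choice(3, p=P[state])
        states.append(state)
    return np.array(states), np.array(times)

states, times = simulate(10_000)
```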

Now I asked myself whether it is possible to apply the central limit theorem in this situation, and if so, how. Thank you very much for your help!

On BEST ANSWER

In the case where the CTMC is finite and time-homogeneous, i.e., $\sigma_t$ is independent of $t$ and the exit rate depends only on the current state (which I'll denote by $\lambda_j$ for state $j$), you can apply the CLT to the average waiting time in each state.

Formally, let $(i_n)_{n\in\mathbb{N}}$ be the sequence of indices such that the $i_n$-th transition leaves state $j$. Then $S_{j,n}=\frac{1}{n}\sum_{k=1}^{n} T_{i_k}$ is the average waiting time in state $j$ over its first $n$ visits, and by the CLT this is approximately normal with mean $1/\lambda_j$ and variance $1/(\lambda_j^2 n)$, i.e., $S_{j,n}\approx \mathcal{N}\!\left(\frac{1}{\lambda_j},\frac{1}{\lambda_j^2 n}\right)$ for large $n$.
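You can check this numerically: since the holding times in state $j$ are i.i.d. $\text{Exp}(\lambda_j)$, their sample average should land close to $1/\lambda_j$, with spread $1/(\lambda_j\sqrt{n})$. A quick sketch, with a made-up rate $\lambda_j = 2$:

```python
import numpy as np

rng = np.random.default_rng(1)

lam_j = 2.0   # exit rate lambda_j of some state j (made-up value)
n = 5_000     # number of visits to state j

# The n waiting times observed in state j are i.i.d. Exp(lam_j).
T = rng.exponential(1.0 / lam_j, size=n)
S_jn = T.mean()                      # the average S_{j,n}

# CLT prediction: S_{j,n} ~ N(1/lam_j, 1/(lam_j^2 * n)) for large n.
mean_pred = 1.0 / lam_j
sd_pred = 1.0 / (lam_j * np.sqrt(n))
print(S_jn, mean_pred, sd_pred)
```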

Moreover, you can compute the expected number of visits to each state by viewing your process as a discrete-time Markov chain. Let $E_n(j)$ denote the expected number of visits to state $j$ within the first $n$ transitions (regardless of how much time they take), and let $P(j,k)$ denote the probability of being in state $j$ after $k$ transitions. By linearity of expectation, $E_n(j)=\sum_{k=0}^nP(j,k)$. As $n\to\infty$, $P(j,n)$ approaches the steady-state probability of state $j$ in the embedded discrete-time Markov chain. So if we denote that steady-state probability by $E(j)$, we have $E_n(j)\approx nE(j)$ for large $n$.
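The steady-state probabilities $E(j)$ are the left eigenvector of the embedded chain's transition matrix for eigenvalue $1$, normalised to sum to $1$. A sketch, using a made-up $3\times 3$ transition matrix:

```python
import numpy as np

# Embedded (jump-chain) transition matrix of a hypothetical 3-state CTMC.
P = np.array([[0.0, 0.7, 0.3],
              [0.5, 0.0, 0.5],
              [0.4, 0.6, 0.0]])

# Steady-state distribution E of the discrete-time chain: the left
# eigenvector of P for eigenvalue 1 (i.e. E P = E), normalised to sum to 1.
w, v = np.linalg.eig(P.T)
E = np.real(v[:, np.argmin(np.abs(w - 1.0))])
E = E / E.sum()

n = 10_000
expected_visits = n * E   # E_n(j) ~ n E(j) for large n
print(E, expected_visits)
```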

Then the total time of a run of $n$ transitions is approximately the expected number of visits to state $j$, times the average waiting time in state $j$, summed over all states:

$$\sum_{j\in S}nE(j)\mathcal{N}(\frac{1}{\lambda_j},\frac{1}{\lambda_j^2n})$$

for large $n$. Since a weighted sum of independent normal random variables is again normal, the total time is itself approximately normally distributed, with mean $\sum_{j\in S}nE(j)/\lambda_j$.
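Putting the pieces together, one can compare a simulated run of $n$ transitions against the predicted mean $\sum_{j\in S} nE(j)/\lambda_j$. Again, the rates and transition matrix below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 3-state CTMC: exit rates and embedded transition matrix.
rates = np.array([1.0, 2.0, 0.5])
P = np.array([[0.0, 0.7, 0.3],
              [0.5, 0.0, 0.5],
              [0.4, 0.6, 0.0]])

# Steady state E of the embedded chain (left eigenvector for eigenvalue 1).
w, v = np.linalg.eig(P.T)
E = np.real(v[:, np.argmin(np.abs(w - 1.0))])
E = E / E.sum()

n = 20_000
predicted_mean = np.sum(n * E / rates)   # sum_j n E(j) / lambda_j

# One simulated run of n transitions.
state, total = 0, 0.0
for _ in range(n):
    total += rng.exponential(1.0 / rates[state])
    state = rng.choice(3, p=P[state])

print(total, predicted_mean)
```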