I would like to do the continuous-time analog of the following calculation for a discrete-time Markov chain:
Suppose I have a discrete-time Markov chain. To keep things simple, we can assume it's time homogeneous and the state space is finite. Then suppose I have a sequence of states, such as $x_1x_2x_3x_4$. I can calculate the probability of this sequence (conditioned on $x_1$ being the initial state) by simply multiplying the transition probabilities together: $$ p(x_1x_2x_3x_4) = p_{x_2x_1}p_{x_3x_2}p_{x_4x_3}, $$ where $p_{ij} = p(x(t+1)=i \mid x(t)=j)$ is the probability of transitioning from state $j$ to state $i$.
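For concreteness, this product can be sketched in a few lines of Python, assuming a made-up 3-state transition matrix (the values are hypothetical) with the column convention $p_{ij} = p(x(t+1)=i \mid x(t)=j)$ used above:

```python
import numpy as np

# Hypothetical 3-state transition matrix; column j holds the outgoing
# probabilities of state j, so P[i, j] = p_{ij} = P(next = i | current = j).
P = np.array([[0.0, 0.5, 0.3],
              [0.7, 0.0, 0.7],
              [0.3, 0.5, 0.0]])

def path_probability(P, states):
    """Probability of the state sequence, conditioned on its first state."""
    prob = 1.0
    for prev, cur in zip(states, states[1:]):
        prob *= P[cur, prev]  # transition prev -> cur
    return prob

# Sequence x1 x2 x3 x4 = states 0 -> 1 -> 2 -> 0 (0-indexed)
prob = path_probability(P, [0, 1, 2, 0])  # 0.7 * 0.5 * 0.3
```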
I would like to do a similar calculation for a continuous-time Markov chain, that is, to start with a sequence of states and obtain something analogous to the probability of that sequence, preferably in a way that only depends on the transition rates between the states in the sequence. (It's okay if it also depends on the self-transition rates, i.e. the diagonal elements of the transition rate matrix.)
Of course, this is complicated by the fact that "the probability of the sequence" isn't well defined unless I also specify how much time elapses. Because of this, I have two questions:
1) How can I calculate the probability of the given sequence as a function of elapsed time? I assume that for most sequences this will increase from zero to some finite value and then decrease to zero again as the elapsed time increases, since longer sequences will become more likely and start to outweigh the specified one. Because of this, I am guessing that this probability must depend on all the transition rates, and not just on the rates of the transitions that actually appear in the sequence.
2) A softer question: assuming the above is true, is there a more natural continuous-time analog of the discrete-time calculation above? I'm looking for the most natural way to go from sequences of states to "information about the dynamics," in an analogous way to the discrete-time calculation.
For simplicity, I’ll number the states in the sequence sequentially from $1$ to $n$, so $q_{ii}$ is the (negative) self-transition rate of the $i$-th state in the sequence and $q_{i,i+1}$ is the transition rate from the $i$-th to the $(i+1)$-th state in the sequence.
The probability that the chain, when it leaves state $i$, jumps to state $j$ is
$$ p_{ij}=\frac{q_{ij}}{-q_{ii}}\;. $$
So the probability for the sequence to occur at all is
$$ p=\prod_{i=1}^{n-1}p_{i,i+1}=\prod_{i=1}^{n-1}\frac{q_{i,i+1}}{-q_{ii}}\;. $$
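This product can be read off a rate matrix directly; a minimal sketch with hypothetical rates, using this answer's convention that $q_{ij}$ is the rate from state $i$ to state $j$ (so each row of $Q$ sums to zero):

```python
import numpy as np

# Hypothetical 3-state rate matrix: Q[i, j] = q_{ij} is the rate from
# state i to state j, Q[i, i] = q_{ii} < 0, and each row sums to zero.
Q = np.array([[-2.0,  1.5,  0.5],
              [ 1.0, -3.0,  2.0],
              [ 0.5,  1.0, -1.5]])

def jump_chain_probability(Q, states):
    """Probability that the embedded jump chain follows the given sequence."""
    prob = 1.0
    for i, j in zip(states, states[1:]):
        prob *= Q[i, j] / (-Q[i, i])
    return prob

p_seq = jump_chain_probability(Q, [0, 1, 2])  # (1.5/2) * (2/3)
```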
The time $\tau_i$ it takes for the chain to leave state $i$ is exponentially distributed with parameter $\lambda_i=-q_{ii}$. The probability that after time $t$ the chain has completed exactly the sequence of states from $1$ to $n$ and is still in state $n$ is $p$ times the probability of
$$ \sum_{i=1}^{n-1}\tau_i\lt t\lt\sum_{i=1}^n\tau_i\;. $$
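The probability of this event is easy to estimate by Monte Carlo before doing any integrals; a sketch with hypothetical rates $\lambda = (2, 3, 1.5)$ for an $n = 3$ sequence at $t = 1$:

```python
import numpy as np

# Monte Carlo estimate of P(tau_1 + ... + tau_{n-1} < t < tau_1 + ... + tau_n)
# for hypothetical holding rates lambda_i = -q_{ii}.
rng = np.random.default_rng(0)
lam = np.array([2.0, 3.0, 1.5])
t = 1.0
samples = 200_000

# Holding times tau_i ~ Exp(lambda_i); NumPy parametrizes by scale = 1 / rate.
tau = rng.exponential(scale=1.0 / lam, size=(samples, len(lam)))
before = tau[:, :-1].sum(axis=1)   # time to complete the first n-1 states
after = tau.sum(axis=1)            # time at which the chain leaves state n
estimate = np.mean((before < t) & (t < after))
```

Multiplying this estimate by $p$ gives an estimate of the probability of observing exactly the sequence at time $t$.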
The sum of exponentially distributed variables with different rate parameters has a hypoexponential distribution. If the rate parameters are all distinct, the probability density function of the left-hand sum (as given in the Wikipedia article on the hypoexponential distribution) is
$$ f(t)=\sum_{i=1}^{n-1}\lambda_i\mathrm e^{-\lambda_it}\prod_{j=1\atop j\ne i}^{n-1}\frac{\lambda_j}{\lambda_j-\lambda_i}\;, $$
and the probability for the chain to remain in state $n$ for at least time $t$ is $\mathrm e^{-\lambda_nt}$, so the probability of observing the sequence at time $t$ is
$$ p\int_0^t\sum_{i=1}^{n-1}\lambda_i\mathrm e^{-\lambda_i\tau}\mathrm e^{-\lambda_n(t-\tau)}\prod_{j=1\atop j\ne i}^{n-1}\frac{\lambda_j}{\lambda_j-\lambda_i}\,\mathrm d\tau=p\sum_{i=1}^{n-1}\lambda_i\frac{\mathrm e^{-\lambda_it}-\mathrm e^{-\lambda_nt}}{\lambda_n-\lambda_i}\prod_{j=1\atop j\ne i}^{n-1}\frac{\lambda_j}{\lambda_j-\lambda_i}\;. $$
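As a sanity check on the algebra, the closed form can be compared against a direct numerical evaluation of the convolution integral (the jump-chain factor $p$ is set aside, i.e. $p = 1$); the rates below are hypothetical and pairwise distinct:

```python
import numpy as np

lam = np.array([2.0, 3.0, 1.5])  # hypothetical lambda_i = -q_{ii}, all distinct
t = 1.0

def closed_form(lam, t):
    """The right-hand side of the identity, with p = 1."""
    n = len(lam)
    total = 0.0
    for i in range(n - 1):
        prod = 1.0
        for j in range(n - 1):
            if j != i:
                prod *= lam[j] / (lam[j] - lam[i])
        total += (lam[i] * (np.exp(-lam[i] * t) - np.exp(-lam[n - 1] * t))
                  / (lam[n - 1] - lam[i]) * prod)
    return total

def by_convolution(lam, t, steps=100_001):
    """Trapezoidal evaluation of the integral on the left-hand side."""
    n = len(lam)
    tau = np.linspace(0.0, t, steps)
    f = np.zeros_like(tau)  # hypoexponential density of tau_1 + ... + tau_{n-1}
    for i in range(n - 1):
        prod = 1.0
        for j in range(n - 1):
            if j != i:
                prod *= lam[j] / (lam[j] - lam[i])
        f += lam[i] * np.exp(-lam[i] * tau) * prod
    integrand = f * np.exp(-lam[n - 1] * (t - tau))
    h = tau[1] - tau[0]
    return h * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))

closed = closed_form(lam, t)
numeric = by_convolution(lam, t)  # should closely agree with closed
```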
As you expected, this increases with $t$ as the probability for the first inequality to hold increases, and then decreases again as the probability for the second inequality to hold decreases.
The Wikipedia article also states the probability density for the general case where the rate parameters are not all pairwise distinct.