I have a question about converting a discrete-state, continuous-time Markov chain with three states into one with two states.
My 3-state model has states: Well (state 1), Ill (state 2) and Dead (state 3).
$$\begin{bmatrix}-(a_{12} + a_{13}) & a_{12} & a_{13}\\0 & -a_{23} & a_{23}\\ 0 & 0 & 0\end{bmatrix}$$ This 3-state matrix is full.mat in the R code.
I would like to convert it to an Alive/Dead model. I am not sure whether I can do it the following way: $$\begin{bmatrix}-(a_{13} + a_{23}) & a_{13}+a_{23}\\0 & 0\end{bmatrix}$$ where I simply add the intensity of Well->Dead and the intensity of Ill->Dead to obtain the intensity of Alive->Dead for the 2-state model. This matrix is small.mat in the R code.
I would expect the sum of transition probabilities $\mathrm P(1\to 3) + \mathrm P(2\to 3)$ from the 3-state model to equal $\mathrm P(\mathtt{alive}\to\mathtt{dead})$ in the 2-state model.
Essentially, I am trying to determine $$\mathrm{Pr}(X(t+h) = 3 \mid X(t) = 1 \text{ or } X(t) = 2).$$ But the final two lines of the R code show that these values are not equivalent; they are slightly off. Am I doing something incorrectly, or is this just a rounding approximation by expm()?
library(expm)

full.mat <- rbind(c(-0.003260632,  0.000514263, 0.002746369),
                  c( 0.000000000, -0.007948859, 0.007948859),
                  c( 0.000000000,  0.000000000, 0.000000000))

small.mat <- matrix(0, 2, 2)
small.mat[1, 2] <- full.mat[1, 3] + full.mat[2, 3]
small.mat[1, 1] <- -small.mat[1, 2]

exp.full  <- expm(full.mat)
exp.small <- expm(small.mat)

# COMPUTE PROBABILITY OF DEATH
exp.small[1, 2]                   # probability of death in the 2-state model
exp.full[1, 3] + exp.full[2, 3]   # probability of death in the 3-state model
The first thing to realize is that in this model one needs to know whether one is well or ill to know the "chances" one has of becoming dead.
The exception is when $a_{13}=a_{23}=\alpha$: then the alive/dead process is indeed a Markov process on the state space $\{\mathtt{alive},\mathtt{dead}\}$ with transition rate matrix $\begin{pmatrix}-\alpha &\alpha\\ 0 & 0\end{pmatrix}$. In every other case, the usual Bayes decomposition yields $$ \mathbb P(X(t+\mathrm dt)=3\mid X(t)=1\ \text{or}\ 2)=\alpha(t)\mathrm dt, $$ where $$ \alpha(t)=\frac{a_{13}p_1(t)+a_{23}p_2(t)}{p_1(t)+p_2(t)},\qquad p_i(t)=\mathbb P(X(t)=i). $$ Note that each $p_i(t)$ depends on $t$ and on the initial distribution $(p_1(0),p_2(0))$. Recall that $$ p_1(t)=p_1(0)\mathrm e^{-(a_{12}+a_{13})t},\quad p_2(t)=p_1(0)c(t)+p_2(0)\mathrm e^{-a_{23}t}, $$ for some explicit function $c(t)$ you might want to write down.
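To see this time dependence numerically, here is a sketch in R (using the rates from the question's full.mat and assuming the chain starts in the Well state, $X(0)=1$) that evaluates the mortality hazard $\alpha(t)$ at a few times. It starts at $a_{13}$ and then increases as probability mass shifts toward the Ill state, so no single constant rate can reproduce it:

```r
library(expm)

# Rates taken from the question's full.mat (assumption: X(0) = Well).
Q <- rbind(c(-0.003260632,  0.000514263, 0.002746369),
           c( 0.000000000, -0.007948859, 0.007948859),
           c( 0.000000000,  0.000000000, 0.000000000))
a13 <- Q[1, 3]; a23 <- Q[2, 3]

# alpha(t) = (a13*p1(t) + a23*p2(t)) / (p1(t) + p2(t)),
# with the p_i(t) read off from the matrix exponential expm(Q*t).
alpha <- function(t, p0 = c(1, 0, 0)) {
  p <- p0 %*% expm(Q * t)    # distribution of X(t)
  (a13 * p[1] + a23 * p[2]) / (p[1] + p[2])
}

sapply(c(0, 1, 10, 100), alpha)   # hazard increases with t: not constant
```

Since $a_{23}>a_{13}$ here, $\alpha(t)$ is strictly increasing in $t$; with $a_{23}<a_{13}$ it would decrease instead.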
To sum up, call $Y(t)=\mathtt{dead}$ if $X(t)=3$ and $Y(t)=\mathtt{alive}$ otherwise. Then $(Y(t))_{t\geqslant0}$ is not (in general) a Markov process on the state space $\{\mathtt{alive},\mathtt{dead}\}$ because the distribution of the state $Y(t+\mathrm dt)$ depends on the distribution of the state $Y(t)$ (good), on $t$ itself (medium good), and also on the initial distribution of $X(0)$ (not good).
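As a quick numerical illustration of the last point (a sketch reusing the question's rates; the window length 50 is an arbitrary choice), the probability of being dead a fixed time later depends on how the initial "alive" mass is split between Well and Ill, so no single $2\times 2$ rate matrix can reproduce both cases:

```r
library(expm)

# Rates from the question's full.mat; the horizon t = 50 is arbitrary.
Q <- rbind(c(-0.003260632,  0.000514263, 0.002746369),
           c( 0.000000000, -0.007948859, 0.007948859),
           c( 0.000000000,  0.000000000, 0.000000000))
P <- expm(Q * 50)            # transition probabilities over 50 time units

# P(dead at time 50) as a function of the initial distribution p0
death.prob <- function(p0) (p0 %*% P)[3]

death.prob(c(1, 0, 0))       # start Well
death.prob(c(0, 1, 0))       # start Ill: a different (larger) value
```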