I'm learning about continuous-time Markov chains and am trying to hone my intuition.
Let $T_1, T_2, \ldots$ be i.i.d. nonnegative random variables.
Let $S$ be some state space; we can take it to be finite. Let $P$ be some Markov transition matrix defined on $S$. Let $X_1, X_2, \ldots$ be a corresponding sequence of random variables distributed according to this discrete-time Markov chain.
Consider the following process: Start in state $X_1$, wait for time $T_1$, transition to state $X_2$, wait for time $T_2$, transition to state $X_3$, etc...
My understanding is that this defines a continuous-time Markov chain if and only if the $T_i$ are exponentially distributed. Correct?
So if the $T_i$ are, say, uniformly distributed in $[0,1]$, then we don't get a Markov process? How exactly does having uniform hold times give the process memory?
Suppose the $T_i$ are i.i.d. uniform on $(0,1)$. If you have already waited 0.9 units of time, then you know with certainty that there will be an event within the next 0.1 units of time. On the other hand, if the last event happened 0.5 units of time ago, the remaining wait is uniform on $(0, 0.5)$, so the next event happens within 0.1 units with only $0.1/0.5 = 20\%$ probability. The future therefore depends on how long ago the last jump occurred, not just on the current state: the future is not independent of the past conditioned on the present, so the process is not Markov.
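You can check these two conditional probabilities directly by Monte Carlo. A minimal sketch (the function name `cond_prob` and the sample size are my own choices, not from the question):

```python
import random

random.seed(0)
N = 100_000  # number of simulated hold times

def cond_prob(sample, t, s):
    """Estimate P(T < t + s | T > t) by simulation."""
    draws = [sample() for _ in range(N)]
    survived = [x for x in draws if x > t]          # condition on T > t
    return sum(1 for x in survived if x < t + s) / len(survived)

# Uniform(0,1) hold times: the answer depends on how long we've waited.
p_long = cond_prob(random.random, t=0.9, s=0.1)   # certain: remaining wait < 0.1
p_short = cond_prob(random.random, t=0.5, s=0.1)  # only about 20%
print(p_long, p_short)
```

The first estimate is exactly 1 (every draw conditioned on exceeding 0.9 lands below 1.0), while the second is near 0.2, confirming that the conditional jump probability depends on the elapsed time.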
The exponential distribution is special because of its memoryless property. That is, if the $T_i$ are i.i.d. exponential, then it doesn't matter how long you've been waiting: the probability that the next event happens within 0.1 units of time remains the same. That is, for all $s, t \ge 0$,
$$\mathbb{P}(T_i < s + t \mid T_i > t) = \mathbb{P}(T_i < s).$$
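The same simulation with exponential hold times shows the conditional probability is flat in $t$. A sketch under the same setup as before (rate 1 is an arbitrary choice for illustration):

```python
import math
import random

random.seed(1)
N = 200_000  # number of simulated hold times
s = 0.1      # look-ahead window

def cond_prob(t):
    """Estimate P(T < t + s | T > t) for T ~ Exp(1) by simulation."""
    draws = [random.expovariate(1.0) for _ in range(N)]
    survived = [x for x in draws if x > t]          # condition on T > t
    return sum(1 for x in survived if x < t + s) / len(survived)

unconditional = 1 - math.exp(-s)  # P(T < s), about 0.095

# No matter how long we've waited, the conditional probability matches P(T < s):
p1, p2 = cond_prob(0.5), cond_prob(2.0)
print(unconditional, p1, p2)
```

Both estimates agree with $\mathbb{P}(T_i < s) = 1 - e^{-s}$, which is exactly the memoryless identity above.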