Let $c(x)$ be a bounded and strictly positive function on the finite state space $S$. Let $X(t)$ be a Markov chain taking values in $S$, such that its staying times are all $\exp(1)$. Now let $f(t) = \int_0^t \frac{1}{c(X(s))}ds$ which is a bijection from the nonnegative reals to the nonnegative reals. Consider the process $Y(t) = X(f^{-1}(t))$.
Now one thing I managed to show without much difficulty is that the staying time of $Y$ at $x$ is distributed as $\exp(c(x))$. However, I wish to show that $Y$ is in fact a Markov chain and that it has the same embedded chain as $X$. The embedded Markov chain is defined using the transition probabilities $p(x, y) = q(x,y) / \sum_{z \neq x} q(x, z)$ for $y \neq x$ and $p(x, x) = 0$. But I don't know how the $Q$-matrix entries of the two chains are related, nor why $Y$ is a Markov chain.
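The staying-time claim can be sanity-checked numerically. Here is a sketch (the three-state chain, the embedded matrix `P`, the speed function `c`, and the helper `simulate_Y` are all made-up for illustration); the key observation is that while $X$ stays at $x$ for an $\exp(1)$ duration $h$, the clock $f$ advances by $h/c(x)$, so $Y$ stays at $x$ for $h/c(x) \sim \exp(c(x))$:

```python
import random

random.seed(0)

# Hypothetical example: embedded transition matrix P as adjacency lists
# of (next_state, probability), and a bounded positive speed function c.
P = {0: [(1, 0.5), (2, 0.5)], 1: [(0, 1.0)], 2: [(0, 0.3), (1, 0.7)]}
c = {0: 2.0, 1: 0.5, 2: 1.0}

def step(x):
    """Sample the next state from the embedded chain at x."""
    u, acc = random.random(), 0.0
    for y, p in P[x]:
        acc += p
        if u < acc:
            return y
    return P[x][-1][0]

def simulate_Y(x0, n_jumps):
    """Simulate Y via X: each Exp(1) staying time h of X at x becomes a
    staying time h / c(x) for Y, because f accumulates at rate 1/c(x)."""
    x, path = x0, []
    for _ in range(n_jumps):
        h = random.expovariate(1.0)   # X stays at x for Exp(1) time
        path.append((x, h / c[x]))    # Y stays for h / c(x) ~ Exp(c(x))
        x = step(x)
    return path

# Empirical mean staying time of Y at state 0 should be close to 1/c[0] = 0.5
holds0 = [d for (s, d) in simulate_Y(0, 20000) if s == 0]
print(sum(holds0) / len(holds0))
```

Note that the jump targets of $Y$ are sampled from the same embedded matrix `P` as those of $X$, which is exactly the second thing to be proved.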
This could be made more formal, but here is an answer:
Note that since $f$ is a bijection, any $t^*_n$ can be written as $f(t_n)$ for some $t_n$, and thus $P(Y(f(t_n)) = x_n \mid Y(f(t_1)) = x_1, \ldots, Y(f(t_{n-1})) = x_{n-1}) = P(X(t_n) = x_n \mid X(t_1) = x_1, \ldots, X(t_{n-1}) = x_{n-1})$.
Moreover, since $f$ is strictly increasing (which follows from the positivity of $c$), $f(t_1) < \ldots < f(t_n)$ holds exactly when $t_1 < \ldots < t_n$, and thus the Markov property of $Y$ follows from that of $X$.
As for the embedded chain being the same: if $\tau$ is the stopping time at which $X$ first changes state, then $f(\tau)$ is the first time $Y$ changes state (again by monotonicity). Conditioned on $X$ making a transition, the distribution of the next state is given by the embedded transition probabilities, and the same holds for $Y$ at time $f(\tau)$, since $Y(f(\tau)) = X(\tau)$: the two chains jump to the same state.
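To address the $Q$-matrix question directly (writing $q_X$, $q_Y$ for the two generators): since $X$ has unit-rate staying times, $q_X(x, y) = p(x, y)$ for $y \neq x$, and since $Y$ leaves $x$ at rate $c(x)$ with the same jump distribution,

$$q_Y(x, y) = c(x)\, q_X(x, y) \quad (y \neq x), \qquad p_Y(x, y) = \frac{c(x)\, q_X(x, y)}{\sum_{z \neq x} c(x)\, q_X(x, z)} = \frac{q_X(x, y)}{\sum_{z \neq x} q_X(x, z)} = p(x, y),$$

so the factor $c(x)$ cancels and the two embedded chains coincide.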