I have an environment described by a continuous-time Markov process $\{J_t\}$ with finite state space $E$. Given a function $p: E \rightarrow \mathbb{R}^{+}$, I'm looking at $X_t=\int_{0}^{t}p_{J_s}\,ds$.
I am told that, without loss of generality, I may suppose $p(\cdot) \equiv c$ for a constant $c$, the reason being that I may consider the time change $T(t)=\int_{0}^{t}\frac{ds}{p_{J_s}}$ and the process $X_{T(t)}$.
I don't understand why. I tried considering $\int_{0}^{t}I_{\{J_s=i\}}\,ds$, which is the time spent in state $i$ up to time $t$, and rewriting $\int_{0}^{t}p_{J_s}\,ds= \int_{0}^{t} \sum_{i \in E}I_{\{J_s=i \}}p_{i}\,ds$, but it's not helping. Any ideas? Thank you.
Let me first recall how a continuous-time finite-state Markov process behaves. It can be described in terms of an embedded Markov chain $\{Y_n,n\ge 1\}$ and a set of intensities $\{\lambda_i, i\in E\}$. When $J$ gets to a state $i$, it stays there for a random time, which has exponential distribution with parameter $\lambda_i$, and then switches its position according to $Y$. This random time can alternatively be described as $\xi/ \lambda_i$, where $\xi\sim\mathrm{Exp}(1)$. That said, the $n$th jump of $J$ occurs at time $$T_n = \sum_{k=1}^n \frac{\xi_k}{\lambda_{Y_k}},\tag1$$ where the $\xi_k$ are iid $\mathrm{Exp}(1)$ random variables, and $J_t = Y_n$ for $t\in [T_{n-1},T_n)$.
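To make this construction concrete, here is a minimal simulation sketch of $(1)$. Everything specific in it is a hypothetical choice for illustration: a two-state chain with made-up intensities, and an embedded chain that deterministically alternates between the states.

```python
import random

# Hypothetical setup: states {0, 1} with intensities lambda_i,
# embedded chain Y deterministically alternating 0 <-> 1.
lambda_ = {0: 1.0, 1: 2.0}

def simulate_jump_times(n, y0=0, seed=0):
    """Return the embedded states Y_1..Y_n and jump times T_1..T_n from (1):
    T_n = sum_{k=1}^n xi_k / lambda_{Y_k}, with xi_k iid Exp(1)."""
    rng = random.Random(seed)
    states, times = [], []
    y, t = y0, 0.0
    for _ in range(n):
        xi = rng.expovariate(1.0)   # xi_k ~ Exp(1)
        t += xi / lambda_[y]        # holding time xi_k / lambda_{Y_k}
        states.append(y)            # J_t = Y_k on [T_{k-1}, T_k)
        times.append(t)
        y = 1 - y                   # embedded chain: alternate states
    return states, times
```

On each trajectory, `states[k-1]` is the state occupied on $[T_{k-1}, T_k)$ and `times[k-1]` is $T_k$.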
Now write $$ \int_0^{T_n} p(J_s) ds = \sum_{k=1}^{n} \int_{T_{k-1}}^{T_{k}} p(J_s) ds = \sum_{k=1}^{n} (T_k - T_{k-1}) p(Y_k)\\ = \sum_{k=1}^{n} \frac{ p(Y_k)}{\lambda_{Y_k}} \cdot \xi_k = \sum_{k=1}^{n} \frac{ \xi_k}{\lambda'_{Y_k}} =: T_n' = \int_0^{T_n'} ds, $$ where $\lambda'_i = \lambda_i/p(i)$, $i\in E$. Similarly to $(1)$, $T_n'$ is the time of the $n$th jump of a continuous-time Markov process $J'$ with the same embedded Markov chain $Y$ and intensities $\lambda'$.
In terms of the time transformation from your question, it is not hard to see that $J'_t = J_{T(t)}$ and $\int_0^{T(t)} p(J_s) ds = \int_0^{t} ds = t$.
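Note that the identity $\int_0^{T_n} p(J_s)\,ds = T_n'$ holds pathwise, not just in distribution, so it can be verified exactly on a single simulated trajectory. A sketch of such a check (the states, intensities and $p$ below are made-up examples) builds both sides from the same $\xi_k$ and $Y_k$:

```python
import random

# Hypothetical setup: states {0, 1}, intensities lambda_, reward function p,
# embedded chain deterministically alternating 0 <-> 1.
lambda_ = {0: 1.0, 1: 2.0}
p = {0: 3.0, 1: 0.5}

def check_identity(n=10, seed=1):
    """Compute int_0^{T_n} p(J_s) ds for J, and T_n' for the process J'
    with rates lambda'_i = lambda_i / p(i), using the SAME xi_k and Y_k."""
    rng = random.Random(seed)
    y = 0
    integral = 0.0   # left-hand side: sum of (T_k - T_{k-1}) p(Y_k)
    t_prime = 0.0    # right-hand side: sum of xi_k / lambda'_{Y_k}
    for _ in range(n):
        xi = rng.expovariate(1.0)
        hold = xi / lambda_[y]                # holding time of J in Y_k
        integral += hold * p[y]               # (T_k - T_{k-1}) p(Y_k)
        t_prime += xi / (lambda_[y] / p[y])   # xi_k / lambda'_{Y_k}
        y = 1 - y
    return integral, t_prime
```

Up to floating-point rounding, the two returned values coincide term by term, which is exactly the computation in the displayed chain of equalities above.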