Markov property of a random process (a solution of piece-wise deterministic equations)


Consider a piecewise-deterministic (Markov!) process \begin{eqnarray} \dot{x}(t) & = & A_{\theta(t,x(t))}\,x(t)\\ x(0) & = & x_0 \in \mathbb{R}^n \notag \end{eqnarray} where $\theta(t,x(t))\in S=\{1,2,\dots,N\}$ is a continuous-time Markov chain(!) whose intensity is $\lambda_{ij}$ when $x(t)\in C_1$ and $\mu_{ij}$ when $x(t)\in C_2$. Here $C_1, C_2 \subseteq \mathbb{R}^n$, $C_{1}\cup C_{2}=\mathbb{R}^{n}$ and $C_{1}\cap C_{2}=\emptyset$. This is a kind of piecewise-deterministic process: $x(t)$ is random only through the randomness of $\theta(t,x(t))$, and the transition rate of $\theta(t,x(t))$ is $\lambda_{ij}$ or $\mu_{ij}$ depending on which set $x(t)$ belongs to.

Let $x(0)\in C_1$ and define the first exit times (which can be shown to be stopping times) $\tau_1,\tau_2,\dots$ by $$ \tau_{1}=\inf\{t\ge 0 :\,x(t)\notin C_{1}\}, $$ $$ \tau_{2}=\inf\{t \ge \tau_{1}:\,x(t)\notin C_{2}\}, $$ and so on.

If $x(t)$ is an $\mathcal{F}_t$-adapted process, is $\theta(t,x(t))$ also $\mathcal{F}_t$-adapted? Within each stochastic interval $[0,\tau_1)$, $[\tau_1,\tau_2)$, and so on, the process $\theta(t,x(t))$ intuitively looks like a Markov process. Is there a way to prove this?

On BEST ANSWER

OK, if I understood your equations correctly: you have a switching linear system $\dot x = A_\theta x$, where $\theta\in\{1,\dots,N\}$ is a continuous-time Markov chain whose intensity matrix depends on whether $x\in C_1$ or $x\in C_2$. Under some regularity conditions on $C_1$ and $C_2$, this process is indeed Markovian and falls into the class of PDPs: piecewise-deterministic Markov processes.
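To make the dynamics concrete, here is a minimal simulation sketch of such a switching system. All the specific data are assumptions for illustration: two modes ($N=2$), state dimension $n=2$, hypothetical matrices `A`, intensity matrices `lam`/`mu`, and the half-space partition `in_C1` standing in for $C_1$; the chain is simulated by first-order thinning on an Euler grid.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: two modes (N = 2), state dimension n = 2.
A = [np.array([[0.0, 1.0], [-1.0, -0.1]]),   # flow matrix A_1
     np.array([[0.0, 1.0], [-4.0, -0.1]])]   # flow matrix A_2

# Intensity matrices of theta: lam applies when x is in C_1, mu when in C_2.
lam = np.array([[-1.0, 1.0], [2.0, -2.0]])
mu  = np.array([[-3.0, 3.0], [0.5, -0.5]])

def in_C1(x):
    # Hypothetical partition: C_1 = {x : x[0] >= 0}, C_2 its complement.
    return x[0] >= 0.0

def simulate(x0, theta0, T, dt=1e-3):
    """Euler scheme for x; theta jumps at state-dependent rates."""
    x, theta, t = np.array(x0, dtype=float), theta0, 0.0
    path = [(t, x.copy(), theta)]
    while t < T:
        Q = lam if in_C1(x) else mu              # current intensity matrix
        rate = -Q[theta, theta]                  # total jump rate out of theta
        if rng.random() < rate * dt:             # jump w.p. ~ rate * dt
            # New mode drawn proportionally to the off-diagonal rates.
            p = Q[theta].clip(min=0.0)
            p[theta] = 0.0
            theta = int(rng.choice(len(p), p=p / p.sum()))
        x = x + dt * (A[theta] @ x)              # deterministic flow between jumps
        t += dt
        path.append((t, x.copy(), theta))
    return path

path = simulate(x0=[1.0, 0.0], theta0=0, T=5.0)
```

Note that the state-dependence enters only through which intensity matrix `Q` is picked at each step: this is exactly why the pair $(x,\theta)$, and not $\theta$ alone, is the natural Markov object.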

The easiest way to show that $(x,\theta)$ is Markov is to embed it in the general framework of PDPs. According to the linked article, M. H. A. Davis was the first to use this term; at any rate, he has a very nice book on the topic. In any case, all the necessary definitions and conditions for Markovianity can be found in what is apparently his first paper on PDPs.

Now, regarding $\theta$: $\theta_t := \theta(t,x(t))$ is not a Markov process by itself, since its transition law clearly depends on the value of $x(t)$.