Markov processes are often linked to automata with transition probabilities, and I am looking for a way to make this intuition explicit in the definition itself, using a definition in the style of automata. To that end, I want to relate the usual definition of a Markov process to the definition of a finite state machine.
I came up with a definition based on a transition map $g : \mathbb R \times S \to S$, where the first argument encodes the input, supplied by a "control process" that accounts for the stochastic inputs (i.e. the outcome of a random experiment). But is this definition equivalent to the usual one for discrete, finite state Markov processes? That is, are the following two definitions equivalent:
A discrete, finite state Markov process is a stochastic process $X_n$, $n = 1,2,3,\ldots$, where the $X_n : \Omega \to S$ are random variables taking values in a finite set $S$, $|S| < \infty$, and such that $$ P(X_{n+1} = t_{n+1} | X_n = t_n, \ldots, X_1 = t_1 ) = P(X_{n+1} = t_{n+1} | X_n = t_n ) $$ for all $n$ and all $t_1, \ldots, t_{n+1} \in S$.
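To make the first definition concrete, here is a minimal simulation sketch (Python); the two-state transition matrix `P` is a hypothetical example, and the next state is drawn directly from the conditional distribution $P(X_{n+1} = \cdot \mid X_n)$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-state chain on S = {0, 1}: P[i, j] = P(X_{n+1} = j | X_n = i).
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

def simulate(P, x0, n_steps):
    """Draw X_1, ..., X_n by sampling X_{n+1} from the row P[X_n, :]."""
    xs = [x0]
    for _ in range(n_steps):
        xs.append(rng.choice(len(P), p=P[xs[-1]]))
    return xs

print(simulate(P, x0=0, n_steps=10))
```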
And the other definition:
A discrete time, finite state Markov process is a stochastic process $X_n$ taking values in a finite set $S$ such that there exists a control process $Z_n : \Omega \to \mathbb R$, $n = 1,2,\ldots$, and a function $g : \mathbb R \times S \to S$ such that $X_{n+1} = g(Z_n, X_n)$.
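For comparison, here is a sketch of the control-process view, under the assumption that the $Z_n$ are i.i.d. Uniform(0, 1) and independent of $X_1$ (the standard choice in such random-mapping representations, not something the definition above forces). The map $g$ is built from the same hypothetical matrix `P` by partitioning $[0,1)$ into intervals of length $P[s, j]$, and the update is literally $X_{n+1} = g(Z_n, X_n)$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Same hypothetical transition matrix as above.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

def g(z, s):
    """Transition map g : R x S -> S built from P via the inverse-CDF trick:
    z in [0,1) lands in one of the intervals of length P[s, j]."""
    return int(np.searchsorted(np.cumsum(P[s]), z, side="right"))

# Control process: i.i.d. Uniform(0, 1) inputs.
x = 0
path = [x]
for _ in range(10):
    z = rng.uniform()    # Z_n
    x = g(z, x)          # X_{n+1} = g(Z_n, X_n)
    path.append(x)
print(path)
```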
I could also propose a transition map $g : \mathcal A \times S \to S$, where $\mathcal A$ is a $\sigma$-algebra from some probability space, which highlights more directly that the events of a random experiment serve as inputs (and it is easy to see that for finite $\mathcal A$ we can arrange, by collapsing events, that $g(A, s) \ne g(B, s)$ for $A \ne B$, so that no two "parallel" transitions from a state lead to the same state; see the sketch below).
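For a fixed current state $s$, the collapsing step could look like the following sketch; the event labels `A1`–`A4` and the map `g_s` are hypothetical, and events with the same successor under $g(\cdot, s)$ are merged, leaving at most one transition from $s$ into each target state:

```python
from collections import defaultdict

# Hypothetical finite event alphabet and its transition map for one fixed state s.
g_s = {"A1": 0, "A2": 1, "A3": 1, "A4": 0}   # A |-> g(A, s)

# Collapse events that lead to the same successor state.
collapsed = defaultdict(list)
for A, t in g_s.items():
    collapsed[t].append(A)

# One merged event per reachable successor: {0: ['A1', 'A4'], 1: ['A2', 'A3']}
print(dict(collapsed))
```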