A die is thrown an infinite number of times. Which of the following processes are Markov chains? Justify your answer, and for those processes that are Markov chains, give the transition probabilities. For $n\in\mathbb{N}$ consider
(1) $U_n$ to be the maximal number shown up to time $n$.
(2) $V_n$ to be the number of times a 6 appears within the first $n$ throws.
(3) $X_n$ to be the time which has passed since the last appearance of a $1$.
(4) $Y_n$ to be the time which will pass until the next $4$ appears.
A complete write-up of my ideas:
In my opinion all four processes are Markov chains.
(1) is a Markov chain with state space $E=\left\{1,2,3,4,5,6\right\}$, because the Markov property is fulfilled:
To predict $U_{n+1}$ I only need to know the maximal number shown up to time $n$, i.e. $U_n$, because then I know that $U_{n+1}$ is either one of the numbers $> U_{n}$ or equal to $U_n$. I do not need to know $U_1,\dots,U_{n-1}$.
The transition matrix is $$ P=\begin{pmatrix}1/6 & 1/6 & 1/6 & 1/6 & 1/6 & 1/6\\0 & 1/3 & 1/6 & 1/6 & 1/6 & 1/6\\0 & 0 & 1/2 & 1/6 & 1/6 & 1/6\\ 0 & 0 & 0 & 2/3 & 1/6 & 1/6\\ 0 & 0 & 0 & 0 & 5/6 & 1/6\\ 0 & 0 & 0 & 0 & 0 & 1\end{pmatrix} $$
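A quick Monte Carlo sketch (in Python, my choice; the function name and trial count are illustrative) can sanity-check a row of this matrix: from state $i$ the next throw $D$ gives $U_{n+1}=\max(i,D)$, so row $i$ should put mass $i/6$ on $i$ and $1/6$ on each $j>i$.

```python
import random

random.seed(0)

# Estimate P(U_{n+1} = j | U_n = i) by simulating one throw many times.
def estimate_row(i, trials=100_000):
    counts = [0] * 7  # counts[j] = observed transitions i -> j
    for _ in range(trials):
        d = random.randint(1, 6)  # one fair die throw
        counts[max(i, d)] += 1
    return [c / trials for c in counts[1:]]  # probabilities for j = 1..6

row3 = estimate_row(3)
# row3 ≈ [0, 0, 1/2, 1/6, 1/6, 1/6], the third row of P
```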
(2) is a Markov chain with state space $E=\mathbb{N}\cup\left\{0\right\}$; the Markov property is fulfilled:
To predict $V_{n+1}$ I only need to know the number of times a 6 appears within the first $n$ throws, i.e. $V_n$, because then I know that $V_{n+1}$ is either $V_{n}+1$ or remains $V_n$.
Here the transition probabilities are $$ p_{ij}=\begin{cases}1/6, & j=i+1\\5/6, & j=i\\0, & \text{otherwise}\end{cases}. $$
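As a small illustrative check (a sketch, with an arbitrary throw count): the step $V_{n+1}-V_n$ equals $1$ exactly when throw $n+1$ is a 6, independently of everything before, so the empirical up-step frequency should be close to $1/6$ regardless of the current value of $V_n$.

```python
import random

random.seed(1)

# Frequency of up-steps V_{n+1} - V_n = 1 along one long trajectory.
def up_step_frequency(n_throws=200_000):
    ups = sum(1 for _ in range(n_throws) if random.randint(1, 6) == 6)
    return ups / n_throws

p_up = up_step_frequency()
# p_up ≈ 1/6
```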
(3) is a Markov chain with state space $E=\mathbb{N}\cup\left\{0\right\}$. Again the Markov property is fulfilled:
If I want to predict $X_{n+1}$ (which I understand as the time at throw number $n+1$ that has passed since the last 1 appeared), I only need to know the time that has passed at throw number $n$ since the last 1 appeared. Then I can predict that $X_{n+1}=0$ (if I throw a 1 at throw number $n+1$) or $X_{n+1}=X_n+1$.
So in my opinion the transition probabilities are given by $$ p_{ij}=\begin{cases}1/6, & j=0\\5/6, & j=i+1\\0, & \text{otherwise}\end{cases}. $$
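These transition probabilities can also be checked by simulation; the sketch below assumes the convention $X_0=0$ (as if a 1 was thrown at time 0) and tallies transitions out of one arbitrary state.

```python
import random
from collections import defaultdict

random.seed(2)

# Simulate X_n (throws since the last 1) and tally its transitions: from any
# state i the chain should reset to 0 with probability ~ 1/6 and move to
# i + 1 with probability ~ 5/6.
def reset_frequency(state=2, n_throws=300_000):
    x = 0  # convention: X_0 = 0
    counts = defaultdict(lambda: [0, 0])  # state -> [resets to 0, steps to state + 1]
    for _ in range(n_throws):
        nxt = 0 if random.randint(1, 6) == 1 else x + 1
        counts[x][0 if nxt == 0 else 1] += 1
        x = nxt
    resets, steps = counts[state]
    return resets / (resets + steps)

p_reset = reset_frequency(state=2)
# p_reset ≈ 1/6
```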
(4) Here I am not that sure, but I think we have a Markov chain, too. If $Y_n=i>1$, then the next $4$ is exactly $i$ throws away, so $Y_{n+1}=i-1$ with certainty; if $Y_n=1$, the $4$ appears at throw $n+1$, and the new waiting time $Y_{n+1}$ is determined by the throws after $n+1$, which are independent of the past. So it makes no difference whether we condition on the whole past or only on $Y_n$ in order to predict $Y_{n+1}$, and the Markov property is fulfilled.
In my opinion the transition probabilities (on the state space $E=\mathbb{N}$) are $$ p_{ij}=\begin{cases}1, & i>1,\ j=i-1\\(5/6)^{j-1}\cdot 1/6, & i=1,\ j\geq 1\\0, & \text{otherwise}\end{cases}. $$
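One way to see what actually happens from each state of $Y_n$ is a quick simulation sketch (the helper name and trajectory length are my own): compute the waiting times along one long throw sequence and tabulate the observed transitions.

```python
import random

random.seed(3)

# y[n] = number of throws from time n until the next 4 (None after the last 4).
def next_four_times(throws):
    y = [None] * len(throws)
    nxt = None  # index of the nearest 4 strictly after the current position
    for n in range(len(throws) - 1, -1, -1):
        y[n] = None if nxt is None else nxt - n
        if throws[n] == 4:
            nxt = n
    return y

throws = [random.randint(1, 6) for _ in range(200_000)]
y = next_four_times(throws)

# From i > 1 the move is deterministic: Y_{n+1} = Y_n - 1.
down_ok = all(y[n + 1] == y[n] - 1 for n in range(len(y) - 1)
              if y[n] is not None and y[n] > 1)
# From 1 the waiting time restarts; P(Y_{n+1} = 1 | Y_n = 1) should be ~ 1/6.
restarts = [y[n + 1] for n in range(len(y) - 1) if y[n] == 1 and y[n + 1] is not None]
p_one = sum(1 for v in restarts if v == 1) / len(restarts)
# down_ok is True; p_one ≈ 1/6
```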
It would be really nice to hear from you whether I am right or not.
Greetings
You say "because the future depends on the past," but this is not the correct characterization of a Markov chain: for a Markov chain, conditioning on the whole past is the same as conditioning on the last value alone. So for $(2)$, $V_n$ conditioned on $V_{n-1},\dots,V_1$ clearly depends only on $V_{n-1}$. After all, the number of 6's that occur in $n$ throws satisfies $V_n=V_{n-1}+T_n$, where $T_n=1$ if the $n$th throw is a 6 and $0$ otherwise. Since dice throws are independent, $T_n$ is independent of $V_{n-1},\dots,V_1$, making $P(V_n\mid V_{n-1},\dots,V_1)=P(V_n\mid V_{n-1})$.
The same goes for (3). Since your throws are independent, this is what's called a renewal process: every time you throw a 1, the process resets and is independent of everything prior. If I tell you that 20 throws have passed since the last 1, then you don't need to know what happened 19 throws ago, because the next throw is independent of the entire past; so if you want the new time passed since the last 1, you need only the last time update. In math language, if $\tau_n$ denotes the time of the $n$th 1, then the increments have the following property: $\tau_n-\tau_{n-1}$ is independent of all $\tau_{i}-\tau_{i-1}$ for $i=1,2,\dots,n-1$.