Suppose I have a sequence of discrete RVs $X_1, X_2, X_3, \dots, X_n$, each with an arbitrary distribution, but such that each $X_i$'s distribution is conditional on the value of $X_{i-1}$ in a Markovian manner.
A simple example is $X_i \sim \mathrm{Binomial}(n = X_{i-1},\, p = \rho_i)$ with a fixed $X_0 = \alpha$, where the number of trials $n$ is given by the preceding value $X_{i-1}$.
A simple analogy would be a series of tests taken by a cohort of $X_0$ people, where failing a test removes an individual from the rest of the process: the number entering each test depends on the number who passed the previous one, and the number passing each test is randomly distributed.
I understand a crude way to calculate the distribution of $X_n$ (if the distributions are arbitrary or without closed form) would be to calculate iteratively for $i = 1, \dots, n$, given a fixed $X_0 = \alpha$:
$$P(X_i = x \mid X_0 = \alpha) = \sum_{j=0}^{\alpha} P(X_i = x \mid X_{i-1} = j) \cdot P(X_{i-1} = j \mid X_0 = \alpha)$$
where $P(X_i = x \mid X_{i-1} = j)$ is determined by the arbitrary distribution of $X_i$, and $P(X_{i-1} = j \mid X_0 = \alpha)$ was computed in the previous iteration (or is the base case).
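For concreteness, here is a minimal Python sketch of this recursion for the binomial example above (the names `distribution_of_X_n`, `alpha`, and `rhos` are just illustrative):

```python
# Minimal sketch of the recursion above for the binomial example:
# X_i ~ Binomial(n = X_{i-1}, p = rho_i), with X_0 = alpha fixed.
import numpy as np
from scipy.stats import binom

def distribution_of_X_n(alpha, rhos):
    """Return the vector P(X_n = x | X_0 = alpha) for x = 0, ..., alpha."""
    dist = np.zeros(alpha + 1)
    dist[alpha] = 1.0                      # base case: X_0 = alpha with probability 1
    for rho in rhos:                       # one pass per step i = 1, ..., n
        new_dist = np.zeros(alpha + 1)
        for j in range(alpha + 1):         # condition on X_{i-1} = j
            if dist[j] > 0:
                x = np.arange(j + 1)
                # P(X_i = x | X_{i-1} = j) * P(X_{i-1} = j | X_0 = alpha)
                new_dist[x] += binom.pmf(x, j, rho) * dist[j]
        dist = new_dist
    return dist

# e.g. a cohort of 10 people and three tests with pass probabilities 0.9, 0.8, 0.7
pmf = distribution_of_X_n(10, [0.9, 0.8, 0.7])
print(pmf.sum())   # sanity check: should be 1.0
```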
1) What is the name of such a sequence of RVs?
2) Is the above method of calculating the distribution of $X_n$ a standard technique, and does it have a name?
3) Is there a more computationally efficient way to calculate the distribution of $X_n$?
Not every random process has a name. However, in this case, I would just call your process a "Markov chain." I would call your method of computing distributions "recursion" or "an iterative method."
Assuming you have a time-homogeneous Markov chain, one alternative iterative method that can sometimes be much faster is this: suppose the variables $X_i$ all take values in a discrete (preferably finite) state space $S$ for all $i \in \{1, 2, 3, \dots\}$.
You start with $P[X_1=a]$ and $P[X_2=b|X_1=a]$ for all $a, b \in S$.
You get $P[X_3=b|X_1=a]$ for all $a,b \in S$ as you did before $$ \underbrace{P[X_3=b|X_1=a]}_{\mbox{2 step}} = \sum_{j \in S} P[X_3=b|X_2=j]P[X_2=j|X_1=a] $$
You get $P[X_5=b|X_1=a]$ for all $a,b \in S$ by: $$ \underbrace{P[X_5 =b|X_1=a]}_{\mbox{4 step}} = \sum_{j \in S} \underbrace{P[X_5=b|X_3=j]}_{\mbox{2 step}}\underbrace{P[X_3=j|X_1=a]}_{\mbox{2 step}}$$
You get $P[X_9=b|X_1=a]$ for all $a,b \in S$ by: $$ \underbrace{P[X_9 =b|X_1=a]}_{\mbox{8 step}} = \sum_{j \in S} \underbrace{P[X_9=b|X_5=j]}_{\mbox{4 step}}\underbrace{P[X_5=j|X_1=a]}_{\mbox{4 step}}$$
and so on, so you can see an exponentially fast progression rather than a linear one. Of course, notice that each step of this procedure requires a computation for all $(a,b)$ pairs in $S \times S$, rather than only for a single starting value $a \in S$. So this method is not a "clear win": it depends on the size of the (preferably finite) set $S$ and on how far into the future you want to compute probabilities.
Another way of describing the above procedure is to let $P^{(i)}$ be the $i$-step transition probability matrix, and then: \begin{align} P^{(2)} &= P^{(1)}P^{(1)}\\ P^{(4)} &= P^{(2)}P^{(2)}\\ P^{(8)} &= P^{(4)}P^{(4)} \\ P^{(16)} &= P^{(8)}P^{(8)} \end{align} and so on.
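Here is a minimal NumPy sketch of this repeated-squaring idea. It assumes a time-homogeneous chain, so the same one-step matrix $P$ applies at every step (in the binomial example, a constant $\rho$); the function name `n_step_matrix` is just illustrative.

```python
import numpy as np

def n_step_matrix(P, n):
    """Compute the n-step transition matrix P^(n) by repeated squaring."""
    result = np.eye(P.shape[0])   # P^(0) is the identity
    power = P.copy()
    while n > 0:
        if n & 1:                 # this power of two appears in n's binary expansion
            result = result @ power
        power = power @ power     # P^(2) = P P, P^(4) = P^(2) P^(2), ...
        n >>= 1
    return result

# Two-state example; agrees with np.linalg.matrix_power(P, 16)
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
print(n_step_matrix(P, 16))
```

This uses only $O(\log n)$ matrix multiplications, each costing $O(|S|^3)$, versus $n$ matrix-vector products, each costing $O(|S|^2)$, for the forward recursion, which is exactly the trade-off described above.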