Let $X_i$ be the node visited by a random walk at step $i$, and suppose the transition probabilities satisfy
$Pr(X_n = x_n | X_{n-1} = x_{n-1}, \cdots, X_1 = x_1) = Pr(X_n = x_n | X_{n-1} = x_{n-1}, \cdots, X_{n-m} = x_{n-m})$,
where $x_i \in V$ (the set of all nodes) and $1 \leq m < n$ is the order of the chain.
Then assume that we have three nodes ($A$, $B$, $C$) and obtain the following sequence from a random walk:
$A \rightarrow C \rightarrow B \rightarrow C \rightarrow B \rightarrow A \rightarrow A \rightarrow B \rightarrow C \rightarrow \cdots$
That is, only after all three nodes have been visited can a node in $V$ be revisited. (In other words, the random walk keeps a memory until all three nodes have been visited.) Judging from this sequence, the walk intuitively seems to be positive recurrent.
Finally, I would like to conclude that the random walk has a limiting distribution because it is a positive recurrent Markov chain. The problem, however, is that the order of the dependence changes over time, so we cannot call the walk a Markov chain. All the books and documents I have studied only deal with Markov chains of fixed order (first or higher) when establishing a limiting stationary distribution.
Is there any way to assert that the random walk above has a limiting distribution?
You can turn this into a Markov chain problem by enlarging the state space: take the states $(A), (B), (C), (A,B), (A,C), (B,C), (A,B,C)$, each recording the nodes visited since the last reset, and then make sure that you only allow the following non-zero-probability transitions:
$(A)\to (A,B)$
$(A)\to (A,C)$
$(B)\to (A,B)$
$(B)\to (B,C)$
$(C) \to (A,C)$
$(C) \to (B,C)$
$(A,B)\to (A,B,C)$
$(A,C) \to (A,B,C)$
$(B,C) \to (A,B,C)$
$(A,B,C)\to (A)$
$(A,B,C)\to (B)$
$(A,B,C)\to (C)$
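To make the construction concrete, here is a minimal sketch that simulates the memory-based walk and checks that the enlarged state (the set of nodes visited since the last reset) only ever moves along the twelve transitions listed above. Uniform random choices are an assumption here; the question does not specify the transition probabilities.

```python
import random
from itertools import combinations

NODES = "ABC"

def walk_states(steps, seed=0):
    """Run the memory-based random walk and yield the enlarged state
    (the set of nodes visited since the last reset) after each step.
    Uniform random moves are an assumption, not given in the question."""
    rng = random.Random(seed)
    visited = [rng.choice(NODES)]
    yield frozenset(visited)
    for _ in range(steps):
        if len(visited) == len(NODES):      # all nodes seen: memory resets
            visited = [rng.choice(NODES)]
        else:                               # only unvisited nodes are allowed
            visited.append(rng.choice([n for n in NODES if n not in visited]))
        yield frozenset(visited)

# The twelve non-zero-probability transitions listed above.
allowed = set()
for a in NODES:
    allowed.add((frozenset(NODES), frozenset(a)))           # (A,B,C) -> (A)
    for b in NODES:
        if b != a:
            allowed.add((frozenset(a), frozenset({a, b}))) # (A) -> (A,B)
for pair in combinations(NODES, 2):
    allowed.add((frozenset(pair), frozenset(NODES)))        # (A,B) -> (A,B,C)

seq = list(walk_states(2000))
assert all((s, t) in allowed for s, t in zip(seq, seq[1:]))
print("every observed transition is one of the 12 listed")
```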
For the limiting distribution, you may want to make the additional states order-sensitive, so that, for example, $(A,B)$ and $(B,A)$ count as different states. In the end, the limiting distribution on the original states $A, B, C$ can be calculated from the limiting distribution on the enlarged state space. For example, the probability of $A$ will be the sum of the probabilities of $(A), (B,A), (C,A), (B,C,A), (C,B,A)$.
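Here is a minimal numerical sketch of that projection, again assuming uniform transition probabilities (the question does not fix them). Note one caveat: this enlarged chain is periodic with period 3, since the memory length cycles $1 \to 2 \to 3 \to 1$, so plain power iteration oscillates; averaging the iterates (a Cesàro average) still recovers the stationary distribution.

```python
from itertools import permutations

NODES = ("A", "B", "C")

# Order-sensitive enlarged state space: every nonempty prefix of a
# permutation of the nodes (3 + 6 + 6 = 15 states for three nodes).
STATES = sorted({p[:k] for p in permutations(NODES) for k in range(1, 4)})

def successors(s):
    """States reachable from s in one step (uniform moves assumed)."""
    if len(s) < len(NODES):                  # memory phase: no revisits
        return [s + (n,) for n in NODES if n not in s]
    return [(n,) for n in NODES]             # full memory: reset to any node

# The chain has period 3, so we take the Cesaro average of the power
# iterates, which converges to the stationary distribution.
dist = {s: 1.0 / len(STATES) for s in STATES}
avg = {s: 0.0 for s in STATES}
N = 3000
for _ in range(N):
    nxt = {s: 0.0 for s in STATES}
    for s, p in dist.items():
        succ = successors(s)
        for t in succ:
            nxt[t] += p / len(succ)
    dist = nxt
    for s in STATES:
        avg[s] += dist[s] / N

# Project back onto the original nodes: the walk "is at" the last node of
# its state, so P(A) = pi(A) + pi(B,A) + pi(C,A) + pi(B,C,A) + pi(C,B,A).
marginal = {n: sum(p for s, p in avg.items() if s[-1] == n) for n in NODES}
print(marginal)   # by symmetry, each node gets probability 1/3
```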