Velocity from Probabilities in a Random Walk with One Step Memory


We consider a discrete-space, continuous-time random walk on the integer line, where the walker hops from a lattice site to a nearest-neighbor site. Suppose the walker has one-step memory: it remembers whether it arrived at its current location by jumping to the right or to the left. Here we can define four probabilities which describe our process: $P(rr)$ := probability that the walker jumps to the right given that its previous jump was to the right; $P(rl)$ := probability that the walker jumps to the right given that its previous jump was to the left. $P(lr)$ and $P(ll)$ are defined analogously.

With this information, can we comment on the average velocity of the walker?

I expect the answer to be proportional to the difference between the probabilities of hopping to the right and to the left, but I'm unable to incorporate the memory into this argument.
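For what it's worth, the jump sequence is easy to simulate to test guesses against. Here is a minimal sketch (ignoring the waiting times between jumps, which don't affect which way each jump goes); the values of $P(rr)$ and $P(ll)$ are illustrative, not part of the question:

```python
import random

# Minimal sketch of the jump sequence: which way each hop goes
# depends only on the direction of the previous hop.
# p_rr = P(rr), p_ll = P(ll); the values used below are illustrative.
def simulate(p_rr, p_ll, n_jumps, seed=0):
    rng = random.Random(seed)
    pos = 0
    direction = rng.choice([+1, -1])  # +1 = right, -1 = left
    for _ in range(n_jumps):
        if direction == +1:
            direction = +1 if rng.random() < p_rr else -1
        else:
            direction = -1 if rng.random() < p_ll else +1
        pos += direction
    return pos / n_jumps  # average displacement per jump

print(simulate(0.7, 0.4, 100_000))
```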

Let me know if I was unclear in my question. Any help in the right direction is appreciated :)

2 Answers

BEST ANSWER

The four transition probabilities constitute a transition matrix between the two velocity states (going left and going right). Let $u_t$ be the probability of going left at time $t$, and $v_t$ the probability of going right; then
$$ \left(\begin{matrix}u_{t+1} \\ v_{t+1}\end{matrix}\right)=\left(\begin{matrix}P_{ll} & P_{lr} \\ P_{rl} & P_{rr}\end{matrix}\right) \left(\begin{matrix}u_{t} \\ v_{t}\end{matrix}\right)=\left(\begin{matrix}P_{ll} & 1-P_{rr} \\ 1-P_{ll} & P_{rr}\end{matrix}\right) \left(\begin{matrix}u_{t} \\ v_{t}\end{matrix}\right), $$
using the fact that $P_{ll}+P_{rl}=P_{rr}+P_{lr}=1$.

The eigenvalues of the matrix are found to be $1$, with corresponding (normalized) eigenvector
$$u_*=\frac{1-P_{rr}}{(1-P_{rr})+(1-P_{ll})},\qquad v_*=\frac{1-P_{ll}}{(1-P_{rr})+(1-P_{ll})},$$
and $P_{ll}+P_{rr}-1$, with corresponding eigenvector $u=-1/2$, $v=1/2$.

Except in the special cases where either $P_{ll}=P_{rr}=0$ (the walker always changes direction, so the average velocity is zero) or $P_{ll}=P_{rr}=1$ (the walker never changes direction, so the velocity is fixed at its initial value), you have $(u_t,v_t)\rightarrow(u_*, v_*)$ as $t\rightarrow\infty$. The average velocity is
$$ v_*-u_*=\frac{P_{rr}-P_{ll}}{(1-P_{rr})+(1-P_{ll})}. $$
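This limit can be checked numerically by iterating the $2\times 2$ matrix and comparing the drift $v_t - u_t$ against the closed form; the values of $P_{rr}$ and $P_{ll}$ below are arbitrary illustrations:

```python
# Iterate the 2x2 direction chain and compare the limiting drift
# v - u with the closed-form average velocity.  p_rr and p_ll are
# illustrative values, not part of the original question.
def limiting_drift(p_rr, p_ll, steps=1000):
    u, v = 0.5, 0.5  # arbitrary initial condition; forgotten in the limit
    for _ in range(steps):
        u, v = p_ll * u + (1 - p_rr) * v, (1 - p_ll) * u + p_rr * v
    return v - u

p_rr, p_ll = 0.7, 0.4
closed_form = (p_rr - p_ll) / ((1 - p_rr) + (1 - p_ll))
print(limiting_drift(p_rr, p_ll), closed_form)  # both ≈ 1/3
```

Convergence is geometric at rate $|P_{ll}+P_{rr}-1|$, so a thousand iterations is far more than enough here.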

ANSWER

This is not an answer, just how I would approach the problem. It could be totally wrong, as I generally get Markov chains wrong before I get them right... Hopefully it gives you an idea at least.

I would start by clearly defining the set of states that your Markov chain can be in. There are perhaps a few different ways you can define this. The state set is generally denoted by S.

It is tempting to use the four states you have provided, S = {RR, RL, LR, LL}, but I think a few more are required. First, we need to start somewhere: our initial state. Let's call this state O (O for origin). We also need two states that we can transition to from the origin; let's call these LO and RO (a jump to the left from the origin and a jump to the right from the origin).

So we have S = {O, RO, LO, RR, RL, LR, LL}.

Now we need to visualise how these states interact with one another. This is usually done with a graph (nodes and directed edges). For each state in our state set we draw a node; then we draw directed edges between nodes, which correspond to state transitions.

I'm no good with TeX, so I've uploaded an image (a rough Paint drawing) of what I think this graph should look like.

Now that we have decided on an appropriate set of states, and have visually represented the possible transitions between them, we can define a transition (rate) matrix. I'm not going to write out the matrix here; instead I'll just continue with how I would approach the problem.

I do not see any reason why we cannot assume that the transitions between states happen in some finite amount of time and are exponentially distributed with some rate; this relies on the "memorylessness" of the Markov chain (see http://u.math.biu.ac.il/~amirgi/CTMCnotes.pdf, example 6.1.1). As far as I am concerned, the rate of transitions between states in your Markov chain should not depend on the history of the chain, because you've defined your states carefully enough to take care of the "memory" dependency. I COULD BE WRONG THOUGH!
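Under that assumption, the continuous-time walk is just the jump chain run on an exponential clock. A sketch with all holding times exponential at a common rate (taken to be 1 here; both the rate and the values of P(rr), P(ll) below are illustrative assumptions):

```python
import random

# Sketch of the continuous-time view: exponential rate-1 holding
# times between jumps, with jump directions following the
# one-step-memory rule.  p_rr, p_ll and the unit rate are
# illustrative assumptions.
def ctmc_velocity(p_rr, p_ll, n_jumps, seed=1):
    rng = random.Random(seed)
    pos, t = 0.0, 0.0
    direction = rng.choice([+1, -1])  # +1 = right, -1 = left
    for _ in range(n_jumps):
        t += rng.expovariate(1.0)  # memoryless waiting time
        if direction == +1:
            direction = +1 if rng.random() < p_rr else -1
        else:
            direction = -1 if rng.random() < p_ll else +1
        pos += direction
    return pos / t  # displacement per unit time

print(ctmc_velocity(0.7, 0.4, 100_000))
```

With unit jump rate the velocity per unit time agrees (up to sampling noise) with the drift per jump of the discrete chain; a different common rate would simply rescale it.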

I'm a little unsure how to define velocity in this question, but I think you could generate the mean occupation time matrix (depending on the prescribed transition rates), and then say your velocities in each state are inversely proportional to these values for each state. So you will end up with some kind of "velocity matrix".

Then, since velocity has a direction, you would say that in RR you are moving to the right, in LL to the left, in RL to the right, and so on. So if you let right be positive and left be negative, you should be able to take the values in your "velocity matrix" for RR, add them to RL, then subtract LR and LL, to get some overall indication of whether you are moving left or right and how "fast".
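A sketch of this program on the seven states, in discrete time for simplicity: build the transition matrix, power-iterate to its stationary distribution, and read the drift off as (probability the last jump was right) − (probability it was left). The 50/50 first jump out of O and the values of P(rr), P(ll) are illustrative assumptions; note P(lr) = 1 − P(rr) and P(rl) = 1 − P(ll):

```python
# Transition matrix over S = {O, RO, LO, RR, RL, LR, LL}; the FIRST
# letter of each two-letter state is the most recent jump direction.
# p_rr = P(rr), p_ll = P(ll); these values and the 50/50 first jump
# out of O are illustrative assumptions, not given in the question.
p_rr, p_ll = 0.7, 0.4
states = ["O", "RO", "LO", "RR", "RL", "LR", "LL"]

T = {s: {} for s in states}
T["O"] = {"RO": 0.5, "LO": 0.5}
for s in ("RO", "RR", "RL"):              # last jump was right
    T[s] = {"RR": p_rr, "LR": 1 - p_rr}   # P(rr), P(lr)
for s in ("LO", "LL", "LR"):              # last jump was left
    T[s] = {"LL": p_ll, "RL": 1 - p_ll}   # P(ll), P(rl)

# Power-iterate to the stationary distribution (O, RO, LO are transient).
pi = {s: 1 / len(states) for s in states}
for _ in range(2000):
    new = {s: 0.0 for s in states}
    for s, row in T.items():
        for s2, p in row.items():
            new[s2] += pi[s] * p
    pi = new

# Drift: +1 whenever the last jump was right, -1 when it was left.
drift = sum(pi[s] for s in ("RO", "RR", "RL")) \
      - sum(pi[s] for s in ("LO", "LL", "LR"))
print(drift)  # matches (p_rr - p_ll) / ((1 - p_rr) + (1 - p_ll))
```

Reassuringly, this reproduces the accepted answer's closed form: the three extra bookkeeping states are transient, so in the long run only the four memory states carry probability.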

TL;DR: decide on your states, draw a state graph, assume that the time taken for a transition is exponentially distributed, create a transition matrix, determine the mean occupation time matrix, and use the inverse values of this matrix to define something that is some kind of velocity.