Finding the joint distribution of a random process with memory


I'm modeling a digital system as a random process and attempting to solve for the autocorrelation in order to arrive at the power spectral density of the process. The system is as follows:

  • At any step k, one of four networks n0 - n3 is active
  • When network ni is active, the output is xi (some integer value)
  • Initially, the probability that any network is active is equal (1/4)
  • The rule is that a network cannot be active for the next two cycles after it is selected
  • Besides this rule, all networks are equally likely to be selected
  • Hence n0 - n1 - n2 - n0 is a valid sequence but n0 - n1 - n0 - n2 is not

Now, I'm using MATLAB, so I have been able to model this process (call it X[k]), verify that it's correct, and plot the autocorrelation (let's call it R[n]) and the PSD.
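For reference, here is a minimal Python sketch of the same simulation (function names are mine; the MATLAB model is assumed equivalent):

```python
import random

def simulate(x, steps, seed=0):
    """Simulate X[k]: one of 4 networks is active each step; a network
    is banned for the two cycles after it is selected."""
    rng = random.Random(seed)
    net, out = [], []
    for _ in range(steps):
        banned = set(net[-2:])                          # last two selections
        net.append(rng.choice([i for i in range(4) if i not in banned]))
        out.append(x[net[-1]])
    return out

def autocorr(xs, n):
    """Sample estimate of R[n] = E[X[k] X[k+n]]."""
    return sum(a * b for a, b in zip(xs, xs[n:])) / (len(xs) - n)

xs = simulate([200, -100, 400, -500], 200_000)
```

With a long run, `autocorr(xs, 0)` and `autocorr(xs, 1)` land near the theoretical 115000 and -38333, up to sampling noise.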

(IMAGE WOULD GO HERE IF I COULD POST IT)

This is using: x0=200, x1=-100, x2=400, x3=-500 (zero mean)

That is all well and good, but it's not very satisfying. I'd like to be able to arrive at R[n] theoretically as well.

Using the definition, $R[n] = E[X[k]X[k+n]] = \sum_{i=0}^{3}\sum_{j=0}^{3}X_iX_jp_{X_iX_j}$

R[0] is simply the second moment of the RV X or 115000 with the numbers above, which matches MATLAB. Simple enough.

R[1] requires $p_{X_0X_1}$ to compute. Intuitively, the probability that X[1] = X[0] is 0, and the remaining pairs are equally likely, so each off-diagonal joint probability is 1/4 · 1/3 = 1/12. This gives R[1] = -38333 using the above numbers, which also matches MATLAB. The same idea gives R[2] = R[1] = -38333.

Now when we get to R[3], computing $p_{X_0X_3}$ becomes trickier. My initial thought was that X[0] and X[3] would be independent at this point, since the rule only has a "depth" of 2 samples. But after thinking about it: if X[0] = x0, then X[1] and X[2] are two distinct networks drawn from {x1, x2, x3}, so X[3] is either x0 or the one remaining network, each with probability 50%. Then in $p_{X_0X_3}$ each diagonal entry $X[0] = x_i, X[3] = x_i$ is 0.5 · 1/4 = 1/8, and the off-diagonal entries are 1/24 each, since they are equally likely. This gives R[3] = +38333, which also agrees with MATLAB. Huzzah!
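As a quick sanity check of that joint pmf (a sketch; the 1/8 and 1/24 values are the ones derived above):

```python
x = [200, -100, 400, -500]
# P(X[0]=x_i, X[3]=x_j): 1/8 on the diagonal, 1/24 off it
R3 = sum(x[i] * x[j] * ((1/8) if i == j else (1/24))
         for i in range(4) for j in range(4))
# R3 = 460000/8 - 460000/24 ≈ 38333.3
```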

For R[4] and $p_{X_0X_4}$, I again looked at the "diagonal" $X[0] = x_i, X[4] = x_i$: from the previous step, there was a 50% chance that $X[3] \ne x_i$, and from there a 50% chance that the new value at X[4] would equal $x_i$, giving 0.25 · 1/4 = 1/16 on the diagonal. In fact all the entries of $p_{X_0X_4}$ are then equal to 1/16, hence R[4] = 0.

Now at this point I'm a little stuck on how to compute the remaining values. One idea that I think works is to draw a trellis diagram of the four states vs. the index k, with the available transitions and their probabilities. The joint pmfs can then be calculated by looking at all the paths that connect, for example, X[0] = x0 to X[5] = x0. For each path, multiply out the transition probabilities to get the probability of the sequence, e.g. x0-x1-x2-x3-x1-x0 would be 1/4 · 1/3 · 1/2 · 1/2 · 1/2 · 1/2. Then sum all of the path probabilities that start with x0 and end with x0, and repeat for all values of X[0] and X[5].
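The path sum over the trellis is exactly a forward dynamic-programming recursion whose state is the last two selections, $(X[k-1], X[k])$; a sketch (function names are mine):

```python
from fractions import Fraction

def cond_dist(i, n, num=4):
    """P(X[n] = j | X[0] = i): the sum over all trellis paths, done as a
    forward DP on the pair state (previous selection, current selection)."""
    if n == 0:
        return {j: Fraction(int(j == i)) for j in range(num)}
    # after one step, X[1] is uniform over the num-1 networks != i
    dp = {(i, b): Fraction(1, num - 1) for b in range(num) if b != i}
    for _ in range(n - 1):
        nxt = {}
        for (a, b), p in dp.items():
            for c in range(num):
                if c not in (a, b):              # banned for two cycles
                    nxt[b, c] = nxt.get((b, c), 0) + p / (num - 2)
        dp = nxt
    out = {j: Fraction(0) for j in range(num)}
    for (_, b), p in dp.items():
        out[b] += p
    return out

# cond_dist(0, 3)[0] == 1/2 and cond_dist(0, 4)[0] == 1/4,
# matching the hand calculations for R[3] and R[4]
```

The joint pmf is then $p_{X_0X_n}(i,j) = \tfrac{1}{4}\,$`cond_dist(i, n)[j]`; the DP visits each pair state once per step instead of enumerating every path.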

I believe this works, but it does not seem very efficient. I've been trying to come up with an easier way to compute the joint pmfs for larger values of k, but I haven't been able to get anywhere. Any ideas or help would be much appreciated.


BEST ANSWER

Let $a_1 = 200$, $a_2=-100$, $a_3=400$, $a_4=-500$. We have a (second-order) Markov process $X_t=a_i$, where at each step $i$ is chosen equiprobably between the two values that satisfy $X_t \ne X_{t-1}$ and $X_t \ne X_{t-2}$.

The problem is much simplified by the fact that $\sum a_i=0$, and by the fact (intuitively obvious, and provable via Markov chain theory) that in equilibrium the states are equiprobable.

Now, I claim that $r_k=E(X_t \, X_{t-k}) = \frac{4 c_k -1}{3} \, \sigma_X^2$, where $c_k = P(X_t = X_{t-k})$ and $\sigma_X^2 = \frac{1}{4}\sum a_i^2 = 115000$.

Proof: $$r_k=E(X_t \, X_{t-k}) = E(X_0 \, X_k)=\sum_i \sum_j a_i a_j P(X_0 = a_i ; X_k = a_j)=\\=\sum_i a_i P(X_0 = a_i) \sum_j a_j P(X_k=a_j|X_0=a_i) \tag{1}$$

We separate in the inner sum the term with same index as the outer sum

$$\sum_j a_j P(X_k=a_j|X_0=a_i)= a_i P(X_k=a_i|X_0=a_i) + \sum_{j\ne i} a_j P(X_k=a_j|X_0=a_i)=\\=a_i c_k + (-a_i) \frac{1- c_k}{n-1} =a_i \frac{( n \, c_k -1)}{n-1} \tag{2}$$

where $n=4$ in our case. Above we've used two facts: $P(X_k=a_i|X_0=a_i)=c_k$ (the probability that the same network is selected $k$ steps apart) does not depend on $i$; and $P(X_k=a_j|X_0=a_i)$, once $i$ and $k$ are fixed, is constant over all the $n-1$ values $j\ne i$, hence its value is $(1-c_k)/(n-1)$.

Then

$$r_k=\sum_i a_i P(X_0 = a_i) a_i \frac{ n\, c_k -1}{n-1} = \frac{ n\, c_k -1}{n-1} \frac{\sum a_i^2}{n} = \frac{ 4 c_k -1}{3} \sigma_X^2 \tag{3}$$
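A quick numerical restatement of $(3)$, as a sketch (values from the question):

```python
a = [200, -100, 400, -500]
n = len(a)
var = sum(v * v for v in a) / n            # sigma_X^2 = 115000

def r(c):
    """r_k = (n*c_k - 1)/(n - 1) * sigma_X^2, per equation (3)."""
    return (n * c - 1) / (n - 1) * var

# r(1) = 115000, r(0) ≈ -38333, r(0.5) ≈ 38333, r(0.25) = 0
```

This reproduces the R[0] through R[4] values the asker computed by hand.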

Now we are left with the calculation of $c_k$. By the symmetry of the problem, this is equivalent to computing $P(X_k=a_1| X_0=a_1)$. It's obvious that $c_k=1$ for $k=0$ and $c_k=0$ for $k=1,2$. For larger values of $k$, the probability can be computed by considering a Markov chain with three states, with the following transition matrix:

$$ M=\pmatrix{ 0 & 1 & 0 \\ 0& 0 & 1 \\1/2& 0& 1/2}$$

The state vector, for each time step $k$, corresponds to the three possibilities $\{X_k=a_1;\; X_k\ne a_1 \cap X_{k-1} =a_1;\; \textrm{elsewhere}\}$

That is:

  • state $0\equiv$ $a_1$ has just happened; it cannot happen again in the next two steps.
  • state $1\equiv$ $a_1$ happened in the previous step; it cannot happen again in the next step.
  • state $2\equiv$ $a_1$ has not happened in this step or the previous one; it happens with probability $1/2$ in the next step, otherwise we remain in this state.

The desired probability $c_k$ is the probability that, starting from state 0, we revisit it at step $k$. Then,

$$c_k = M^{[k]}_{0,0} \hspace{1cm} M^{[k]}=\pmatrix{ 0 & 1 & 0 \\ 0& 0 & 1 \\1/2& 0& 1/2}^k$$

k    c_k        r_k
0    1         115000
1    0         -38333
2    0         -38333
3    0.5        38333
4    0.25           0
5    0.125     -19166
6    0.3125      9583
7    0.28125     4791
8    0.20313    -7187
9    0.25781     1197
10   0.26953     2994
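The table can be reproduced directly from the matrix powers; a NumPy sketch:

```python
import numpy as np

M = np.array([[0.0, 1.0, 0.0],    # state 0 -> state 1 (a1 just happened)
              [0.0, 0.0, 1.0],    # state 1 -> state 2
              [0.5, 0.0, 0.5]])   # state 2: a1 w.p. 1/2, else stay

var = np.mean(np.array([200, -100, 400, -500]) ** 2)   # sigma_X^2 = 115000

P = np.eye(3)
for k in range(11):
    c = P[0, 0]                              # c_k = (M^k)[0,0]
    print(f"{k:2d}  {c:.5f}  {(4 * c - 1) / 3 * var:9.1f}")
    P = P @ M
```

(The exact value of $r_8$ is $-7187.5$.) Note also that the chain gives a scalar recursion, $c_{k+1} = (1 - c_k - c_{k-1})/2$: the occupancy of state 1 at step $k$ equals $c_{k-1}$, so state 2's occupancy is $1 - c_k - c_{k-1}$, half of which flows to state 0.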