Random node observation


The problem is as follows: in the two-dimensional plane, nodes are randomly distributed according to a homogeneous Poisson process with intensity $\rho$. Each node alternates between two states, available and unavailable, remaining in each for an exponentially distributed time with rates $\lambda_{a}$ and $\lambda_{ua}$ respectively.

At random times, an observer checks whether there exists an available node within radius $r$ of a randomly chosen point.

The question is: what is the probability that the observer finds no available node within radius $r$? The candidate answer $P_{unav}(r) = e^{-\frac{\lambda_{a}}{\lambda_{a}+\lambda_{ua}}\rho\pi r^2}$ does not seem to match my simulation results.
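For what it's worth, here is a sketch of where that formula comes from, assuming the nodes form a homogeneous Poisson process and that, in stationarity, each node is available with probability $p = \frac{\lambda_{a}}{\lambda_{a}+\lambda_{ua}}$ independently of the others (the question's convention). By independent thinning, the available nodes then form a Poisson process of intensity $p\rho$, and the void probability of a disk of radius $r$ gives

$$P_{unav}(r) = e^{-p\rho\pi r^2} = e^{-\frac{\lambda_{a}}{\lambda_{a}+\lambda_{ua}}\rho\pi r^2}.$$

One caveat: if $\lambda_{a}$ is the rate of leaving the available state (mean available period $1/\lambda_{a}$), the stationary available fraction is $\frac{\lambda_{ua}}{\lambda_{a}+\lambda_{ua}}$, not $\frac{\lambda_{a}}{\lambda_{a}+\lambda_{ua}}$; swapping these two is one possible source of a mismatch with simulation.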

I suspect the model is not stationary, so the ensemble average over random observation times does not match the time average of the available and unavailable periods.

I don't know how to analyze this. I suspect the observations are biased, but how can I model the bias? I think I should use the Palm distribution, but I don't know how.

Thanks for the help

There is 1 answer below.
The answer seems correct. I think the simulation should sample randomly in both space and time (place an observer at time $t_i$ at a random point in space, or maybe just $t_i = i$).
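As a concrete check, here is a minimal Monte Carlo sketch of the stationary snapshot model. The function names and parameter values are my own; it assumes the nodes form a homogeneous Poisson process on an $L \times L$ square and that each node is independently available with probability $\lambda_{a}/(\lambda_{a}+\lambda_{ua})$, per the question's formula:

```python
import math
import random

def sample_poisson(rng, mean):
    """Poisson sample via Knuth's product method (fine for moderate means)."""
    limit = math.exp(-mean)
    k, prod = 0, rng.random()
    while prod > limit:
        k += 1
        prod *= rng.random()
    return k

def p_unavailable_empirical(rho, lam_a, lam_ua, r, L=10.0, trials=10000, seed=1):
    """Estimate P(no available node within r of a uniformly random point).

    Each trial draws a fresh stationary snapshot: Poisson(rho*L^2) node
    positions, each independently available with probability
    lam_a/(lam_a + lam_ua) (the question's convention -- see the rate
    caveat in the discussion above). Observers are restricted to the
    inner square so the search disk never crosses the boundary.
    """
    rng = random.Random(seed)
    p_avail = lam_a / (lam_a + lam_ua)
    misses = 0
    for _ in range(trials):
        n = sample_poisson(rng, rho * L * L)
        ox, oy = rng.uniform(r, L - r), rng.uniform(r, L - r)
        found = False
        for _ in range(n):
            if rng.random() >= p_avail:
                continue  # this node is currently unavailable
            x, y = rng.uniform(0, L), rng.uniform(0, L)
            if (x - ox) ** 2 + (y - oy) ** 2 <= r * r:
                found = True
                break
        if not found:
            misses += 1
    return misses / trials

rho, lam_a, lam_ua, r = 1.0, 1.0, 1.0, 1.0
analytic = math.exp(-(lam_a / (lam_a + lam_ua)) * rho * math.pi * r * r)
empirical = p_unavailable_empirical(rho, lam_a, lam_ua, r)
print(analytic, empirical)  # the two should agree within Monte Carlo noise
```

If this snapshot version matches the formula but your full time-dynamic simulation does not, the discrepancy is likely in how observation times interact with the on/off dynamics (or in the rate convention), not in the spatial part.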

Keep in mind that the exponential rate parameter is often confused across software packages; make sure you are not passing the average time between switches, which is the reciprocal of the rate.
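For example, in Python's standard library `random.expovariate` takes the rate $\lambda$, while NumPy's `numpy.random.exponential` takes the scale, i.e. the mean $1/\lambda$; passing one where the other is expected silently rescales every holding time:

```python
import random
import statistics

random.seed(0)
rate = 4.0  # switching rate; the mean holding time should be 1/rate = 0.25

# random.expovariate expects the RATE...
samples = [random.expovariate(rate) for _ in range(100_000)]
print(statistics.fmean(samples))  # close to 0.25, not 4.0

# ...whereas numpy.random.exponential expects the SCALE (the mean),
# so the equivalent NumPy call would be np.random.exponential(1.0 / rate).
```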

Also, it depends on how large your simulation region is: the bigger the region, the less the boundary biases the result.

Maybe you can post your code so people can look at it.