Cellular automata (Random walk)


Here is the context of my question below. I quote from "Some Rigorous Results for the Greenberg-Hastings Model" by Richard Durrett:

Consider the following cellular automaton known as the Greenberg-Hastings Model: The state space is $X=\left\{0,1,2\right\}^{\mathbb{Z}^d}$. Sites $x\in\mathbb{Z}^d$ represent cells which can be excited (1), tired (2), or rested (0). With these interpretations in mind we consider the following deterministic dynamics on $X$. An excited cell is always tired at the next time step. A tired cell always becomes rested. Finally, a rested site becomes excited iff at least one of its $2d$ neighbors is excited.
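For concreteness, here is how I read the update rule as code (a small sketch of my own; a finite ring with periodic boundary stands in for $\mathbb{Z}^d$ with $d=1$, and the function name is mine):

```python
# Sketch of one synchronous Greenberg-Hastings update in d = 1 on a finite
# ring (periodic boundary). States: 0 = rested, 1 = excited, 2 = tired.
def ghm_step(eta):
    n = len(eta)
    new = [0] * n
    for x in range(n):
        if eta[x] == 1:          # excited -> tired
            new[x] = 2
        elif eta[x] == 2:        # tired -> rested
            new[x] = 0
        else:                    # rested -> excited iff a neighbor is excited
            if eta[(x - 1) % n] == 1 or eta[(x + 1) % n] == 1:
                new[x] = 1
    return new

print(ghm_step([0, 1, 2, 0, 0]))  # -> [1, 2, 0, 0, 0]
```

The single excited site tires out while waking its rested left neighbor, which matches the verbal rule.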

Here is the first part I have a question about:

Although our dynamics are completely deterministic, we can obtain a stochastic process by starting with an initial probability distribution on $X$ and letting the system evolve. Let $\eta_n\in X$ denote the state of the process at time $n$, i.e. $\eta_n(x)$ denotes the state of the cell at location $x$ at time $n$.

I do not see a formal definition of a stochastic process here: What is the underlying probability space? What are the random variables? What is the state space with its $\sigma$-algebra, and so on?

Now another part I do not understand:

Our first step is to investigate the behaviour of $\eta_n^*$, the system starting from a product measure in which the states 0, 1 and 2 each have probability $\frac{1}{3}$.

Again I have formal problems here: What is the underlying probability space? Which space carries the mentioned product measure — the probability space or the state space?
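To make the product measure concrete to myself, I would sample an initial configuration like this (a finite window stands in for the infinite lattice, and the helper name is mine):

```python
import random

# Sampling the uniform product measure: each site of a finite window of Z
# independently takes the state 0, 1 or 2 with probability 1/3.
def sample_eta0(n_sites, rng=random.Random(0)):
    return [rng.randrange(3) for _ in range(n_sites)]

eta0 = sample_eta0(9000)
```

Each coordinate is drawn independently, which is exactly what "product measure" means here.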


Now to my main problem, concerning the proof of this theorem:

Theorem 1. In $d=1$, $\text{Prob}\left\{\eta_n^*(0)=1\right\}\sim\sqrt{2/(27\pi n)}$

What exactly is $\left\{\eta_n^*(0)=1\right\}$? Which measure is meant by $\text{Prob}$?
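As a sanity check of my own (this is not the paper's proof, just a hedged Monte Carlo sketch with names I made up), one can estimate $\text{Prob}\left\{\eta_n^*(0)=1\right\}$ in $d=1$ and compare it with $\sqrt{2/(27\pi n)}$. Since influence travels one site per step, a window of $2n+1$ sites around the origin determines $\eta_n(0)$:

```python
import math
import random

# Monte Carlo estimate of Prob{eta_n(0) = 1} under the uniform product
# measure in d = 1, compared with the asymptotic value sqrt(2/(27*pi*n)).
def ghm_prob_excited(n, trials=4000, rng=random.Random(1)):
    hits = 0
    width = 2 * n + 1            # window large enough to determine the center
    for _ in range(trials):
        eta = [rng.randrange(3) for _ in range(width)]
        for _ in range(n):
            nxt = [0] * width    # tired (2) becomes rested (0) by default
            for x in range(width):
                if eta[x] == 1:  # excited -> tired
                    nxt[x] = 2
                elif eta[x] == 0:  # rested -> excited iff a neighbor is excited
                    left = x > 0 and eta[x - 1] == 1
                    right = x < width - 1 and eta[x + 1] == 1
                    nxt[x] = 1 if (left or right) else 0
            eta = nxt
        if eta[n] == 1:          # state of the origin (window center) at time n
            hits += 1
    return hits / trials

n = 10
print(ghm_prob_excited(n), math.sqrt(2 / (27 * math.pi * n)))
```

The boundary of the window cannot influence the center within $n$ steps, so the finite simulation agrees with the infinite-lattice event.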

I am not asking for the whole proof, because it is very long, but about the opening passage of the given proof:

We write $\eta_n$ for $\eta_n^*$. We first need to construct an auxiliary random walk. The distribution for the steps of this walk [...] will be the distribution of $$ 1-\# \text{of }01\text{s between two successive }10\text{s when we choose a random element }\eta_0\text{ from }X. $$ Clearly this distribution is concentrated on $\left\{1,0,-1,-2,\dots\right\}$.
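Here is how I would read the quoted passage as code (an illustration of my own reading, not the paper's construction): in a long random configuration, locate each occurrence of the pattern $10$ and of the pattern $01$, and for every pair of successive $10$s tally the value $1-(\#\text{ of }01\text{s strictly between them})$.

```python
import bisect
import random
from collections import Counter

# Empirical step distribution: for each pair of successive 10-patterns in a
# random configuration, count the 01-patterns strictly between them and
# record 1 minus that count.
def step_counts(length=100000, rng=random.Random(2)):
    eta = [rng.randrange(3) for _ in range(length)]
    tens = [x for x in range(length - 1) if (eta[x], eta[x + 1]) == (1, 0)]
    zero_ones = [x for x in range(length - 1) if (eta[x], eta[x + 1]) == (0, 1)]
    counts = Counter()
    for a, b in zip(tens, tens[1:]):
        # number of 01-patterns starting strictly between positions a and b
        k = bisect.bisect_left(zero_ones, b) - bisect.bisect_right(zero_ones, a)
        counts[1 - k] += 1
    return counts

counts = step_counts()
print(sorted(counts.items()))
```

Since the count $k$ is nonnegative, every tallied value is at most $1$, matching the claim that the step distribution is concentrated on $\{1,0,-1,-2,\dots\}$.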

I do not understand this whole passage. Can you explain it to me, please?


There is 1 answer below.


That's a common issue in probability education: we are often told that we start from some probability space $(\Omega,\mathscr F,\mathsf P)$, but rarely is it specified, which leads to questions like yours. Let me focus on the case of discrete-time stochastic processes here, since it is easier and your case fits there.

One of the standard ways to define a stochastic process with a measurable state space $(X,\mathscr B)$ is to say that there exist some probability space $(\Omega,\mathscr F,\mathsf P)$ and a random variable $$ \xi:(\Omega,\mathscr F)\to(X^{\Bbb N},\mathscr B^{\Bbb N}) $$ so that $\xi_n\in X$ is the state taken by the stochastic process $\xi$ at time $n\in \Bbb N$. Now, since $\xi$ is a measurable map, it pushes forward the measure $\mathsf P$ to $\mathsf Q = \xi_*\mathsf P$ defined on $(X^{\Bbb N},\mathscr B^{\Bbb N})$, so that $$ \mathsf Q(B_0\times B_1\times\dots) = \mathsf P(\xi_0\in B_0,\xi_1\in B_1,\dots). $$ This measure $\mathsf Q$ is called the distribution of $\xi$. Notice that to talk about any probabilistic properties of a stochastic process, the only thing you need to know is its distribution. For example, you can always assume that the probability space used to define the stochastic process is the canonical probability space given by $\Omega = X^{\Bbb N}$, $\mathscr F = \mathscr B^{\Bbb N}$ and $\mathsf P = \mathsf Q$. In that case your stochastic process $\xi$ is just the identity map $\mathrm{id}_{X^{\Bbb N}}$.
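To make the pushforward concrete, here is a toy finite example (all the names and numbers are mine): each outcome $\omega\in\Omega$ is mapped by $\xi$ to a "trajectory", and $\mathsf Q$ gives a trajectory the total $\mathsf P$-mass of its preimage under $\xi$.

```python
# Toy pushforward Q = xi_* P on a three-point Omega: trajectories that share
# a preimage under xi accumulate the P-mass of all outcomes mapping to them.
P = {"a": 0.25, "b": 0.25, "c": 0.5}
xi = {"a": (0, 1, 2), "b": (0, 1, 2), "c": (1, 2, 0)}

Q = {}
for omega, p in P.items():
    Q[xi[omega]] = Q.get(xi[omega], 0.0) + p

print(Q)  # -> {(0, 1, 2): 0.5, (1, 2, 0): 0.5}
```

Both "a" and "b" map to the same trajectory, so that trajectory carries their combined mass; $\mathsf Q$ is again a probability measure.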

Let's now focus on your case. You have $X = \{0,1,2\}^{\Bbb Z^d}$, and $\mathscr B$ is the product $\sigma$-algebra on $X$ (generated by the cylinder sets). Let $f:X\to X^{\Bbb N}$ be the map that assigns to any initial configuration of cells $x\in X$ the whole trajectory $f(x)\in X^{\Bbb N}$ that follows deterministically from it. Now, for any deterministic initial condition $x\in X$ the distribution of your process is $\mathsf Q_x = \delta_{f(x)}$, that is, the measure concentrated on the unique trajectory $f(x)$ originating from $x$. If instead you take the initial condition to be some probability measure $\mu$ on $X$, then your distribution is just a mixture of those delta-distributions, $\mathsf Q_\mu = \int_X \mathsf Q_x \,\mu(\mathrm dx)$. So in the case in the book, $\mathrm{Prob} = \mathsf Q_\mu$, where $\mu$ assigns to each cell one of the states 0, 1 and 2 with equal probability $\frac13$, independently of all other cells.
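This last step can be checked on a finite stand-in (my own toy, not the book's setup): on a 3-site ring the state space has $3^3 = 27$ configurations, $\mu$ is the uniform product measure, $f$ maps an initial configuration to its deterministic trajectory, and $\mathrm{Prob}\{\eta_n(0)=1\}$ is the $\mathsf Q_\mu$-mass of the corresponding event — here just an average of indicator values over the 27 equally likely trajectories.

```python
from itertools import product

# One deterministic Greenberg-Hastings step on a 3-site ring.
def step(eta):
    n = len(eta)
    return tuple(
        2 if s == 1                      # excited -> tired
        else 0 if s == 2                 # tired -> rested
        else (1 if 1 in (eta[(x - 1) % n], eta[(x + 1) % n]) else 0)
        for x, s in enumerate(eta)
    )

# f: initial configuration -> its whole (finite-horizon) trajectory.
def f(eta0, steps):
    traj = [eta0]
    for _ in range(steps):
        traj.append(step(traj[-1]))
    return traj

# Q_mu-mass of the event {eta_2(0) = 1}: average the indicator over all
# 27 equally likely initial configurations.
n_time = 2
prob = sum(f(eta0, n_time)[n_time][0] == 1
           for eta0 in product((0, 1, 2), repeat=3)) / 27
print(prob)
```

Working through the 27 cases by hand, only the initial configurations $(2,0,1)$ and $(2,1,0)$ leave the origin excited at time 2, so the event has mass $2/27$.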