Random process, stochastic process explained intuitively?


So I've read the definitions online and this is what I understood.

$X(t)$ is a random process for $t>0$ and we can think of it as being a random variable at any given time $t=t_0$.

For example, $X(t_0)$ is a random variable while $X(t)$ is a random process.

So a definition of a random process COULD be:

\begin{align*} \text{Let } X(t) = \begin{cases} 1 & \text{w.p. } 1/t\\ 4 & \text{w.p. } 1-1/t \end{cases} \end{align*}

Is this correct? Please give me more intuition as to what a random process is

Thank you so much!


A random process is a collection of random variables $(X(t))_{t\in T}$ indexed by a set $T$ and defined on a common probability space $(\Omega,\mathcal F,P)$.

A significant step toward a correct understanding of what a random process is would be to stop confusing a random variable $X(t)$ (for some given $t$) with the process $(X(t))_{t\in T}$ as a whole.

In the example, specifying the marginal distribution of $X(t)$ for each $t\geqslant 1$ is not nearly enough to determine the joint distribution of the process $(X(t))_{t\geqslant1}$.

For example, the identities $X(t)=1+3\mathbf 1_{U\gt1/t}$ for every $t\geqslant1$, for some fixed $U$ uniformly distributed on $(0,1)$, or $X(t)=1+3\mathbf 1_{U(t)\gt1/t}$ for every $t\geqslant1$, for some i.i.d. process $(U(t))_{t\geqslant1}$ uniformly distributed on $(0,1)$, define two very different processes $(X(t))_{t\geqslant1}$ which both satisfy your condition that the marginal distribution of each $X(t)$ is $\frac1t\delta_1+(1-\frac1t)\delta_4$.

For instance, the probability of the event $[X(2)=X(3)=1]$ is $\frac13$ in the first case and $\frac16$ in the second case.
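A quick simulation makes the difference concrete. This is a minimal Python sketch of the two constructions above (nothing here beyond the answer's own definitions): both processes have the same marginals, yet the estimated probability of $[X(2)=X(3)=1]$ differs.

```python
import random

random.seed(0)
N = 100_000

def x(u, t):
    # X(t) = 1 + 3 * 1{u > 1/t}
    return 1 + 3 * (u > 1 / t)

count_common = 0  # first process: one U shared across all times
count_iid = 0     # second process: a fresh U(t) at each time

for _ in range(N):
    # First process: a single uniform draw drives every time point
    u = random.random()
    if x(u, 2) == 1 and x(u, 3) == 1:
        count_common += 1
    # Second process: independent uniform draws at t = 2 and t = 3
    if x(random.random(), 2) == 1 and x(random.random(), 3) == 1:
        count_iid += 1

print(count_common / N)  # ≈ 1/3
print(count_iid / N)     # ≈ 1/6
```

With $10^5$ trials the two estimates land close to $\frac13$ and $\frac16$, even though at every single $t$ both processes take the value 1 with the same probability $\frac1t$.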


The theory explained in what you've read is similar to that underlying an area of stochastic modelling I've studied, which is used in financial maths (mostly, apparently, although arguably not always with great results). Consider $f(x) = x^2$. Since you can calculate the value of $f(x)$ for a given value of $x$ with 100% certainty, there's no randomness involved: $f(x)$ is a variable. It varies according to the value of $x$, but at any given point $x$ there's no variance at all, just one value. Whenever mathematical modelling is used in the physical sciences with equations containing no randomness, the same results are expected every time, in this case every time the function $f$ is evaluated at a given point $x$. This isn't true with random processes.

The idea is that you introduce random elements into an equation. For example, you could have the equation $X(t) = a + b\epsilon \sqrt t$, where $\epsilon$ is a random drawing from a standard normal distribution and $a$ and $b$ are constants. The theory (supposedly) proves that some equations of this type have a mean and a variance (population mean and variance) which can be found analytically, so at any point $t_n$ you would have a range of values with a (sample) mean and variance. Assuming the theory is correct, it therefore does make sense to view $X(t)$ at a given time $t$ as a random variable.

I think the theory goes on to say that for ALL equations of this type, if you simulate $X(t)$ (plot what it might be) for $t_0\le t \le t_n$ (which can be done by taking $n$ random drawings from a standard normal distribution to give the $n$ values of $\epsilon$ needed to evaluate the function at each point in time), then the more simulations you produce, the closer the mean and variance of this set of simulations will be to the true mean and variance, and hence the closer the value of $X(t)$ predicted by the model at any given time $t$ will be to the true value.
This method is called Monte Carlo Simulation.

In a nutshell though, I think it would help you to realise that you can visualise a random process as nothing more than a graph of $X(t)$ against $t$, for all possible $t$, where each run of the process traces out one such graph.