Density function of an (iid) stochastic process


Let $\{X_t\}$ be a continuous-time stochastic process (say $t \in [0,1]$) whose random variables are iid. For example, suppose the $X_t$'s are iid normal with mean $0$ and variance $1$. Is there a formal way to express the probability density function of this process?

(Here, I am mainly interested in the case where the time has completely elapsed, so presumably there is no need to worry about filtrations. I want to assess the probability that a realization of the process has some property. For example, in the setting above, my hunch is that the probability that a realization of the process is always bigger than zero should be zero; in particular, for any tiny interval $[0,\epsilon]$, the probability that the process stays bigger than zero over that interval should be zero.)
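The hunch above can be checked on finitely many time points. For iid standard normals, each $X_{t_i}$ is positive with probability $1/2$, so by independence the probability that all of $n$ of them are positive is $(1/2)^n$, which vanishes as $n \to \infty$. A minimal sketch (the function name is mine, not from the post):

```python
# For iid standard normals, P(X_{t_1} > 0, ..., X_{t_n} > 0) = (1/2)^n.
# Any interval [0, eps] contains arbitrarily many time points, so the
# event "positive on the whole interval" is squeezed below (1/2)^n
# for every n, forcing its probability to zero.
def prob_all_positive(n: int) -> float:
    # Each X_t is symmetric about 0, so P(X_t > 0) = 1/2; independence
    # turns the joint probability into a product of marginals.
    return 0.5 ** n

for n in (1, 10, 50):
    print(n, prob_all_positive(n))
```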

The answer to this problem over finite index sets is well known, of course. In the finite case (say $t \in \{1, \ldots, n\}$), if $f_t$ is the probability density function of $X_t$, then the joint density of the whole process is $\prod_{t=1}^n f_t$. Here the sample space is $\mathbb{R}^n$. In the case $t \in [0,1]$, the sample space is a function space (is there one that is commonly used? $L^p$ for some $p$? All continuous functions? All measurable functions?). Any reference would be very helpful.
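For concreteness, the finite-dimensional product density in the iid $N(0,1)$ case can be evaluated directly; this is a sketch using only the standard library (function names are mine):

```python
import math

def normal_pdf(x: float, mu: float = 0.0, sigma: float = 1.0) -> float:
    """Density of N(mu, sigma^2) at x."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def joint_density(xs) -> float:
    """Joint density of n iid N(0,1) variables: the product of the marginals."""
    p = 1.0
    for x in xs:
        p *= normal_pdf(x)
    return p

# Joint density of (X_1, X_2, X_3) at the origin: (2*pi)^(-3/2)
print(joint_density([0.0, 0.0, 0.0]))
```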

(Text fixed to account for zhoraster's comment.)

1 Answer

"Probability density function" as a concept only really works when we have a random variable defined on Euclidean space, i.e. a single real-valued random variable or the joint law of finitely many. There is the concept of a density with respect to a measure, but there is no canonical measure on $\mathbb R^{[0,1]}$ (the space of functions from $[0,1]$ to $\mathbb R$, which is where your process lives) in the way that there is on $\mathbb R^n$. For infinitely many random variables, the best we have is Kolmogorov's extension theorem, which tells us that the joint law is uniquely determined by the finite-dimensional distributions: for any $t_1,\ldots,t_n\in[0,1]$, we have

$$\mathbb P(X_{t_1}\le x_1,\ldots,X_{t_n}\le x_n)=\prod_{i=1}^n\left(\frac1{\sqrt{2\pi}}\int_{-\infty}^{x_i}e^{-t^2/2}dt\right).$$
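This product of Gaussian CDFs can be computed directly, since $\Phi(x) = \tfrac12\bigl(1+\operatorname{erf}(x/\sqrt2)\bigr)$; here is a sketch with the standard library (function names are mine):

```python
import math

def std_normal_cdf(x: float) -> float:
    """Phi(x) for a standard normal, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def fdd(xs) -> float:
    """P(X_{t_1} <= x_1, ..., X_{t_n} <= x_n) for iid N(0,1):
    the product of Phi(x_i).  Note the answer does not depend on
    the times t_1, ..., t_n -- only on n and the thresholds x_i,
    which is exactly the iid structure."""
    p = 1.0
    for x in xs:
        p *= std_normal_cdf(x)
    return p

print(fdd([0.0, 0.0]))  # each factor is 1/2, so the product is 0.25
```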