Independence of sigma-algebras

Good day to everyone.

While working on a problem of a pedagogical nature, I arrived at a statement that I would like to prove rigorously (out of personal interest). It is the following.

Assume that $X_1, \ldots, X_k, X_{k+1}, \ldots, X_\ell, X_{\ell + 1}, \ldots, X_n$ are i.i.d. random variables with an unknown common distribution ($1 \le k < \ell < n$). See EDIT1 for a crucial additional assumption.

The following statement is intuitively clear, but I do not see how to write a proof down: the events $$ \max(X_1, \ldots, X_k) \le \max(X_{k+1}, \ldots, X_\ell) $$ and $$ \max(X_1, \ldots, X_{\ell}) \le \max(X_{\ell+1}, \ldots, X_n) $$ are independent.

Qualitatively this makes sense: the second event "forgets" the configuration of the maxima within $X_1, \ldots, X_\ell$.

It is also easy to picture: the events of the probability subspace of the first $\ell$ coordinates generate a cylindrical algebra, which is a subalgebra of the events of the whole ($n$-coordinate) probability space.

I have the feeling that conditioning on such a cylindrical subalgebra should not change the distribution, but at the moment I lack the imagination (and knowledge of the relevant results) to get anything out of that.

Can anybody explain that? Thanks.

EDIT1 From the comments I now understand that this is not true in general, so assume, as suggested there, that the common distribution is continuous. Can anything be said in this case?


BEST ANSWER

Not sure the result holds...

Try $(k,\ell,n)=(1,2,3)$ then the events are $A=[X_1\leqslant X_2]$ and $B=[\max(X_1,X_2)\leqslant X_3]$. Assume that the random variables $X_k$ are Bernoulli with parameter $p$, that is, $P[X_k=1]=p$ and $P[X_k=0]=1-p$.

Then $P[A]=(1-p)+p^2$, $P[B]=(1-p)^2+(1-(1-p)^2)p$, and $A\cap B=[X_1\leqslant X_2\leqslant X_3]$ hence $P[A\cap B]=(1-p)((1-p)+p^2)+p^3$. Since $P[A]$, $P[B]$ and $P[A\cap B]$ are polynomials in $p$ of degree $2$, $3$ and $2$ respectively, the identity $P[A\cap B]=P[A]\cdot P[B]$ cannot hold for every $p$ in $[0,1]$.

To be specific, assume that $p=\frac12$, then $(X_1,X_2,X_3)$ is uniformly distributed on the cube $\{0,1\}^3$, $P[A\cap B]=\frac12$ and $P[A]\cdot P[B]=\frac34\cdot\frac58\ne\frac12$.
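These numbers can be verified by brute-force enumeration of the cube $\{0,1\}^3$; here is a quick sanity check (not part of the argument), using exact rational arithmetic:

```python
from fractions import Fraction
from itertools import product

p = Fraction(1, 2)  # Bernoulli parameter

def prob(event):
    # Sum the probabilities of all outcomes (x1, x2, x3) in {0,1}^3 satisfying `event`
    total = Fraction(0)
    for x1, x2, x3 in product((0, 1), repeat=3):
        w = Fraction(1)
        for x in (x1, x2, x3):
            w *= p if x == 1 else 1 - p
        if event(x1, x2, x3):
            total += w
    return total

PA = prob(lambda x1, x2, x3: x1 <= x2)                          # P[A]
PB = prob(lambda x1, x2, x3: max(x1, x2) <= x3)                 # P[B]
PAB = prob(lambda x1, x2, x3: x1 <= x2 and max(x1, x2) <= x3)   # P[A ∩ B]
print(PA, PB, PAB, PA * PB)  # 3/4 5/8 1/2 15/32
```

Changing `p` lets one check other values of the Bernoulli parameter as well; independence fails for every $p$ strictly between $0$ and $1$.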

However, if the common distribution is continuous, then the result holds when $(k,\ell,n)=(1,2,3)$ since, by exchangeability of the random vector $(X_1,X_2,X_3)$ and because ties have probability zero, then $P[A]=\frac12$, $P[B]=\frac13$ and $P[A\cap B]=\frac16$.

In the general continuous setting, write $A=[M_1\lt M_2]$, $B=[\max(M_1,M_2)\lt M_3]$ and $A\cap B=[M_1\lt M_2\lt M_3]$, where $M_1=\max(X_1,\ldots,X_k)$, $M_2=\max(X_{k+1},\ldots,X_\ell)$ and $M_3=\max(X_{\ell+1},\ldots,X_n)$. Then $A$ means that the maximum of the first $\ell$ random variables occurs among the last $\ell-k$ of them, hence $P[A]=(\ell-k)/\ell$ by exchangeability. Likewise $P[B]=(n-\ell)/n$. To study $A\cap B$, condition on $B$ and reveal the indices of the sample $(X_i)$ in decreasing order of value. The first index is in the $(\ell,n]$ range, by hypothesis. The first index not in the $(\ell,n]$ range is uniform on $[1,\ell]$, by exchangeability. Note that $A$ happens if and only if this index is in $(k,\ell]$, hence $P[A\mid B]=(\ell-k)/\ell=P[A]$. QED.
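A Monte Carlo sketch of this conclusion in the continuous case; the choice $(k,\ell,n)=(2,4,7)$ and the uniform distribution are arbitrary (any continuous distribution works, since only the ranks matter):

```python
import random

random.seed(0)
k, l, n = 2, 4, 7        # arbitrary 1 <= k < l < n
trials = 200_000

count_A = count_B = count_AB = 0
for _ in range(trials):
    xs = [random.random() for _ in range(n)]  # i.i.d. continuous, so ties have probability 0
    A = max(xs[:k]) < max(xs[k:l])            # max of first k vs max of next l-k
    B = max(xs[:l]) < max(xs[l:])             # max of first l vs max of last n-l
    count_A += A
    count_B += B
    count_AB += A and B

pA, pB, pAB = count_A / trials, count_B / trials, count_AB / trials
print(pA, pB, pAB, pA * pB)
# expect pA ≈ (l-k)/l = 0.5, pB ≈ (n-l)/n ≈ 0.4286, and pAB ≈ pA * pB
```

The estimates agree with $(\ell-k)/\ell$, $(n-\ell)/n$ and their product to within sampling error, consistent with the independence claim.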

ANSWER

Take $X,Y,Z$ i.i.d. with Bernoulli$(0.5)$ distribution. Then $P(X \leq Y) = \frac{3}{4}$ and $P(\max(X,Y)=1) = \frac{3}{4}$, so $P(\max(X,Y) \leq Z) = \frac14 \cdot 1 + \frac34 \cdot \frac12 = \frac{5}{8}$. But $$ P(X\leq Y,\ \max(X,Y) \leq Z) = P(X \leq Y,\ Y \leq Z) = \frac{1}{2} \neq \frac{15}{32} = P(X\leq Y)\,P(\max(X,Y)\leq Z). $$
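A one-screen enumeration confirming these numbers; since each of the $8$ outcomes in $\{0,1\}^3$ is equally likely, every probability is a count divided by $8$:

```python
from itertools import product

# All outcomes (X, Y, Z) in {0,1}^3, each with probability 1/8
outcomes = list(product((0, 1), repeat=3))
p_A = sum(x <= y for x, y, z in outcomes) / 8          # P(X <= Y)
p_B = sum(max(x, y) <= z for x, y, z in outcomes) / 8  # P(max(X, Y) <= Z)
p_AB = sum(x <= y <= z for x, y, z in outcomes) / 8    # P(X <= Y, max(X, Y) <= Z)
print(p_A, p_B, p_AB, p_A * p_B)  # 0.75 0.625 0.5 0.46875
```

Note that $0.46875 = \frac{15}{32}$, so the product of the probabilities indeed differs from the probability of the intersection.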