On the distinction between “Pairwise independent” and “Mutually independent” random variables


I am familiar with the fact that for $N$ events we have the distinct concepts of pairwise independence and mutual independence. I am interested in how this distinction extends to random variables. More explicitly, my question is the following:

In Probability Theory, do we have the same two concepts for $N$ random variables (instead of events) - i.e. pairwise independent vs. mutually independent?

If we have these two concepts, then I have a follow-up question on the Central Limit Theorem (CLT). The CLT (in its simplest form) talks about $N$ IID (independent identically distributed) random variables.

What is meant by "independent [...] random variables" in the context of the CLT? - pairwise independent or mutually independent?

This leads to a further question: if we assume only the weaker requirement of $N$ pairwise independent random variables, does the conclusion of the theorem still hold? Clearly, if we assume mutual independence, then since pairwise independence is implied, the conclusion follows.

There are 2 best solutions below

Let's start off with what it means for $2$ random variables to be independent. This will allow us to build our way up to pairwise independence of random variables. From there, we can extend this notion to mutual independence (sometimes called joint independence).

Independent Random Variables: the random variables $X,Y$ on $(\Omega, \mathscr{F}, \mathbb{P})$ are independent if and only if the joint CDF decomposes into the product of the random variables' individual CDFs. In other words, independence holds if and only if the following relation holds: $$ F_{X,Y}(x,y) = F_X(x)F_Y(y)$$ where $F_{X,Y}(x,y) = \mathbb{P}(X \leq x, Y \leq y)$ and $F_X(x) = \mathbb{P}(X \leq x)$.
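As a quick sanity check of this criterion (my own sketch, not part of the answer), here is a small Python snippet that builds two independent discrete variables by construction, say $X \sim \text{Bernoulli}(1/2)$ and $Y \sim \text{Bernoulli}(1/3)$, and verifies exactly that the joint CDF factors into the product of the marginal CDFs at every grid point:

```python
# Hypothetical example: X ~ Bernoulli(1/2), Y ~ Bernoulli(1/3), independent.
pX = {0: 0.5, 1: 0.5}
pY = {0: 2 / 3, 1: 1 / 3}

# Independence built in: joint pmf is the product of the marginals.
joint = {(x, y): pX[x] * pY[y] for x in pX for y in pY}

def F_joint(a, b):
    """Joint CDF F_{X,Y}(a, b) = P(X <= a, Y <= b)."""
    return sum(p for (x, y), p in joint.items() if x <= a and y <= b)

def F(pmf, a):
    """Marginal CDF from a discrete pmf."""
    return sum(q for v, q in pmf.items() if v <= a)

# The factorization criterion holds at every point of the support.
for a in (0, 1):
    for b in (0, 1):
        assert abs(F_joint(a, b) - F(pX, a) * F(pY, b)) < 1e-12
```

Because the support is finite, the check is exact enumeration rather than simulation.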

In reality the above is not really a definition. It is more of a criterion for the independence of random variables (which is defined in terms of generated sigma algebras), but for the purposes of this answer the above can be treated as a definition.

Pairwise Independent Random Variables: the random variables $X_1,\dots,X_n$ are pairwise independent if every possible pair of random variables $(X_i,X_j)$ with $i \neq j$ and $i,j \in \{1,\dots,n\}$ satisfies the definition of independence above.

The concept of mutual (joint) independence extends the idea of our first definition to multiple random variables:

Mutually Independent Random Variables: the random variables $X_1, \dots, X_n$ are mutually independent if the joint CDF decomposes into the product of the random variables' individual CDFs. In other words, independence holds if and only if the following relation holds: $$F_{\underline{X}}(\underline{x}) = \prod_{i=1}^{n} F_{X_i}(x_i)$$ where $\underline{X} = (X_1, \dots, X_n)$ is a vector of random variables, $F_{\underline{X}}(\underline{x}) = \mathbb{P}(X_1 \leq x_1, \dots , X_n \leq x_n)$, and $F_{X_i}(x_i) = \mathbb{P}(X_i \leq x_i)$.


Now looking at the Central Limit Theorem (CLT), you correctly mention that we need the random variables to be independent and identically distributed. Unless otherwise stated, when we use the word "independent" for multiple random variables, this is shorthand for "mutually (jointly) independent".

If we have pairwise independence as opposed to mutual independence (as you suggest in your question), then this is not sufficient for the CLT to hold without additional assumptions. There are many proofs of the Central Limit Theorem, but in the standard proof, one of the very first steps we use is decomposing the characteristic function into the product of multiple characteristic functions. This is a property of mutual independence which does not hold in the pairwise case.
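To make this failure concrete (a sketch of my own, using a standard counterexample: $X, Y$ iid $\text{Bernoulli}(1/2)$ and $Z = X \oplus Y$, which are pairwise but not mutually independent), the snippet below computes the characteristic function of the sum exactly and shows it does not equal the product of the marginal characteristic functions:

```python
import cmath
from itertools import product

# Pairwise independent but NOT mutually independent triple:
# X, Y iid Bernoulli(1/2), Z = X XOR Y.
pmf = {}
for x, y in product((0, 1), repeat=2):
    z = x ^ y
    pmf[(x, y, z)] = pmf.get((x, y, z), 0.0) + 0.25

def cf_sum(t):
    """Characteristic function E[exp(it(X+Y+Z))], computed exactly from the pmf."""
    return sum(p * cmath.exp(1j * t * sum(v)) for v, p in pmf.items())

def cf_marginal(t):
    """Characteristic function of a single Bernoulli(1/2) variable."""
    return (1 + cmath.exp(1j * t)) / 2

# At t = pi the two sides disagree: the CF of the sum is 1,
# while the product of the three marginal CFs is 0.
t = cmath.pi
assert abs(cf_sum(t) - 1) < 1e-12
assert abs(cf_marginal(t) ** 3) < 1e-12
```

So the factorization step $\mathbb{E}[e^{it(X_1+\dots+X_n)}] = \prod_i \mathbb{E}[e^{itX_i}]$ used in the standard proof genuinely requires mutual independence.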


Note: an interesting property of mutual independence is that any subfamily of these random variables must also be mutually independent. Therefore, any collection of mutually independent random variables will automatically also satisfy pairwise independence (however, the converse does not hold in general).


The OP asks

In Probability Theory, do we have the same two concepts for random variables (instead of events) - i.e. pairwise independent vs. mutually independent?

Yes, and a simple example is three Bernoulli random variables $X,Y,Z$ with parameter $\frac 12$ and joint pmf $$p_{X,Y,Z}(0,0,1)=p_{X,Y,Z}(0,1,0)=p_{X,Y,Z}(1,0,0)=p_{X,Y,Z}(1,1,1) = \frac 14.$$ It is easily verified that $X,Y,Z$ are pairwise independent. However, they are not mutually independent because $$p_{X,Y,Z}(1,1,1) = \frac 14 \neq p_X(1)p_Y(1)p_Z(1) = \frac 18.$$
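The "easily verified" claim can be checked mechanically. Here is a short Python sketch (my own addition) that enumerates the joint pmf above and confirms every pair factors while the triple does not:

```python
from itertools import product

# Joint pmf from the answer: mass 1/4 at each of the four listed points.
pmf = {(0, 0, 1): 0.25, (0, 1, 0): 0.25, (1, 0, 0): 0.25, (1, 1, 1): 0.25}

def marginal(i, v):
    """P(X_i = v), summing the joint pmf over the other coordinates."""
    return sum(p for pt, p in pmf.items() if pt[i] == v)

def pair(i, j, vi, vj):
    """P(X_i = vi, X_j = vj)."""
    return sum(p for pt, p in pmf.items() if pt[i] == vi and pt[j] == vj)

# Pairwise independence: every bivariate pmf factors into its marginals.
for i, j in [(0, 1), (0, 2), (1, 2)]:
    for vi, vj in product((0, 1), repeat=2):
        assert abs(pair(i, j, vi, vj) - marginal(i, vi) * marginal(j, vj)) < 1e-12

# Mutual independence fails: P(1,1,1) = 1/4, not 1/2 * 1/2 * 1/2 = 1/8.
assert abs(pmf[(1, 1, 1)] - marginal(0, 1) * marginal(1, 1) * marginal(2, 1)) > 0.1
```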

If you feel that this is too close to a similar example expressed in terms of events $A = (X=1)$, $B = (Y=1)$, $C = (Z=1)$ of probability $\frac 12$, consider three standard normal random variables $X,Y,Z$. Now suppose that their joint density $f_{X,Y,Z}(x,y,z)$ is not $\phi(x)\phi(y)\phi(z)$ where $\phi(\cdot)$ is the standard normal density (as would be the case if $X,Y,Z$ were mutually independent standard normal random variables), but rather

$$f_{X,Y,Z}(x,y,z) = \begin{cases} 2\phi(x)\phi(y)\phi(z) & ~~~~\text{if}~ x \geq 0, y\geq 0, z \geq 0,\\ & \text{or if}~ x < 0, y < 0, z \geq 0,\\ & \text{or if}~ x < 0, y\geq 0, z < 0,\\ & \text{or if}~ x \geq 0, y< 0, z < 0,\\ 0 & \text{otherwise.} \end{cases}\tag{1}$$ Note that $X$, $Y$, and $Z$ are not a set of three jointly normal random variables (that is, they don't have a multivariate normal distribution), but it can be shown that any two of them do form a pair of independent standard normal random variables. So this is another example of the difference between pairwise independence and mutual independence: mutual independence implies pairwise independence, but pairwise independence does not imply mutual independence.