Providing an example that independence of events $A, B$ depends on the probability measure


I have just started probability and I am rather confused by the notion of a probability measure $P$ and how it essentially differs from the distribution of a random variable $X$ under $P$.

First, I need an explanation:

In most textbooks, it is stated that for a random variable $X$ from a probability space $(\Omega, \mathcal{F}, P)$ to a measurable space $(\Omega',\mathcal{F}')$, the map $P'(A'):=P_{X}(A')=P(X^{-1}(A'))$ is a probability measure, namely the distribution of the random variable $X$ under $P$. But how is this any different from the original probability measure $P: \mathcal{F} \to [0,1],\ A \mapsto P(A)$, which simply maps subsets of $\Omega$ into $[0,1]$? Is $P'$ a different probability measure altogether? Surely it depends on the original probability measure $P$? What is the intuition behind this rather opaque way of defining a probability with respect to a random variable?

My example for independence:

Let there be three $6$-sided dice, one blue, one green and one red.

Let event $A:=$ the die selected is red,

and event $B:=$ the die selected and then thrown shows a $2$.

Let's then take two probability measures $P, Q$, and set $P$ to be the uniform distribution.

It follows that $P(A \cap B) = \frac{1}{3 \times 6}=\frac{1}{18}$

and $P(A) \times P(B)=\frac{1}{3} \times \frac{1}{6} = \frac{1}{18}$, so $A, B$ are indeed independent under $P$.
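As a sanity check, this computation under the uniform measure can be verified with a short script (a sketch; the sample space is modeled as colour/face pairs, matching the example above):

```python
from fractions import Fraction
from itertools import product

# Sample space: (colour, face) pairs, 3 colours x 6 faces = 18 outcomes.
omega = set(product(["blue", "green", "red"], range(1, 7)))

# Uniform measure P: every outcome has probability 1/18.
P = {w: Fraction(1, len(omega)) for w in omega}

def prob(measure, event):
    """Probability of an event (a set of outcomes) under a measure."""
    return sum(measure[w] for w in event)

A = {w for w in omega if w[0] == "red"}   # the die selected is red
B = {w for w in omega if w[1] == 2}       # the thrown die shows 2

print(prob(P, A & B))            # 1/18
print(prob(P, A) * prob(P, B))   # (1/3)*(1/6) = 1/18, so A, B independent
```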

Now I am attempting to find a probability measure $Q$ such that:

$Q(A \cap B) \neq Q(A) \times Q(B)$

My idea: perhaps $Q(C)=\frac{2|\Omega|-|C|}{|\Omega|}$. But how can I be sure that it is $\sigma$-additive? It is at least normed, since $Q(\Omega)=1$.

However, $Q(A \cap B)= \frac{2 \times 18-1}{18}>1$, which cannot hold for a probability measure, right? Is my example wrong? Can you provide me with more intuitive examples?

There are 2 solutions below.

BEST ANSWER

The simplest example might be when $\Omega=\{1,2,3,4\}$, $\mathcal F=2^\Omega$, $A=\{1,2\}$, $B=\{1,3\}$.

Then, the events $A$ and $B$ are independent under the uniform measure $P$ but not under the measure $Q$ defined by $Q(\{1\})=\frac12$ and $Q(\{2\})=Q(\{3\})=Q(\{4\})=\frac16$.
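The four-point example is quick to check with exact rational arithmetic (a sketch of the computation above):

```python
from fractions import Fraction

omega = {1, 2, 3, 4}
A, B = {1, 2}, {1, 3}

# Uniform measure P and the skewed measure Q from the answer.
P = {w: Fraction(1, 4) for w in omega}
Q = {1: Fraction(1, 2), 2: Fraction(1, 6),
     3: Fraction(1, 6), 4: Fraction(1, 6)}

def prob(m, E):
    return sum(m[w] for w in E)

# Independent under P: 1/4 == (1/2)*(1/2)
assert prob(P, A & B) == prob(P, A) * prob(P, B)
# Not independent under Q: 1/2 != (2/3)*(2/3)
assert prob(Q, A & B) != prob(Q, A) * prob(Q, B)
```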


Regarding the explanation you ask for, simply note that, in the context you describe, $P$ is a measure on $\mathcal F$ while $P'=P_X$ is a measure on $\mathcal F'$. The most common case might be when $(\Omega',\mathcal F')=(\mathbb R,\mathcal B(\mathbb R))$ and $X:\Omega\to\mathbb R$. Then the distribution of $X$ is a measure on the sigma-algebra $\mathcal B(\mathbb R)$ on the target set $\mathbb R$, and certainly not a measure on the sigma-algebra $\mathcal F$ on the source set $\Omega$.

To sum up, in most cases, $P$ and $P'$ are probability measures on different spaces, hence one definitely cannot use one for the other.

ANSWER

We can take our probability space to be $\Omega = \{b,g,r\}\times \{1,2,3,4,5,6\}.$ Define $Q(\{(r,1)\}) = 1/36,$ $ Q(\{(r,2)\}) = 1/12,$ and $Q(\{(x,y)\}) = 1/18$ for all other $(x,y)\in \Omega.$ For any $E\subset \Omega,$ we then set

$$Q(E)= \sum_{(x,y)\in E}Q(\{(x,y)\}).$$

Then $Q$ is a probability measure on $\Omega.$

But note $Q(A)=1/3$ and

$$Q(B)= Q(\{(b,2),(g,2),(r,2)\}) = 1/18+1/18+1/12=7/36.$$

However

$$Q(A\cap B) = Q(\{(r,2)\}) = 1/12 \ne Q(A)\cdot Q(B) =(1/3)\cdot(7/36).$$
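These values can likewise be verified numerically (a sketch; outcomes are the $(x,y)$ pairs from the construction above):

```python
from fractions import Fraction
from itertools import product

omega = set(product("bgr", range(1, 7)))

# Q puts extra mass on ("r", 2) and less on ("r", 1); 1/18 elsewhere.
Q = {w: Fraction(1, 18) for w in omega}
Q[("r", 1)] = Fraction(1, 36)
Q[("r", 2)] = Fraction(1, 12)

def prob(m, E):
    return sum(m[w] for w in E)

A = {w for w in omega if w[0] == "r"}   # red die selected
B = {w for w in omega if w[1] == 2}     # thrown die shows 2

assert sum(Q.values()) == 1    # Q is indeed a probability measure
print(prob(Q, A))              # 1/3
print(prob(Q, B))              # 7/36
print(prob(Q, A & B))          # 1/12
print(prob(Q, A) * prob(Q, B)) # 7/108, not equal to 1/12
```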