Can random variables be conditioned in a cycle?


Given three random variables $A$, $B$, and $C$, can we in general reason about them if $P(A = a\mid B = b)$, $P(B = b\mid C = c)$, and $P(C = c\mid A = a)$ are all specified and (non-trivially) differ from their unconditional counterparts? (I.e. $A$, $B$, and $C$ are conditioned on each other in a cycle.)

If so, does this change if the three random variables are over disjoint, potentially binary/boolean outcome spaces?

If not, does it even make sense to write down such relationships between conditional variables? Is there anything fundamental that stops us from making such definitions?

Such reasoning seems difficult/infeasible in general, as finding, say, $P(A=a)$ requires reasoning possibly (almost always?) to infinity.

A further, more general question, which I assume has the same answer, is whether the conditional dependence structure of random variables must form a directed acyclic simple graph; this is relevant to my context of Bayesian networks.

Edit: As an example, consider three random variables (or possibly events?) $A$, $B$, and $C$ which are each binary. Suppose

  • $P(A=T\mid B=T) = 0.1$ and $P(A=T\mid B=F) = 0$,
  • $P(B=T\mid C=T) = 0.1$ and $P(B=T\mid C=F) = 0$, and
  • $P(C=T\mid A=T) = 0.1$ and $P(C=T\mid A=F) = 0$.

It seems to me that actually $P(A=T)=0$ by some form of limit reasoning. But perhaps my question doesn't make sense?


There are 3 best solutions below


First, concretely about your example: The three probabilities on the right effectively say that $A$, $B$ and $C$ are logically equivalent. Interpreting $T$ as true and $F$ as false, we could write them as $\neg B\Rightarrow\neg A$, $\neg C\Rightarrow\neg B$ and $\neg A\Rightarrow\neg C$, respectively. Thus $A$, $B$ and $C$ are either all $F$ or all $T$. But then the probabilities on the left are all $1$ (or undefined, if they're never $T$), so the scenario you describe is impossible.
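The zero-probability constraints can be checked mechanically. Here is a minimal Python sketch (an illustration, not part of the original argument) that enumerates the $8$ truth assignments for $(A,B,C)$ and keeps those consistent with the three implications:

```python
from itertools import product

# Enumerate all 8 truth assignments for (A, B, C) and keep those
# consistent with the zero-probability constraints of the example:
#   P(A=T | B=F) = 0  means  "if B is false, A is false",  i.e.  not B => not A
#   P(B=T | C=F) = 0  means  not C => not B
#   P(C=T | A=F) = 0  means  not A => not C
consistent = [
    (a, b, c)
    for a, b, c in product([True, False], repeat=3)
    if (b or not a) and (c or not b) and (a or not c)
]
print(consistent)  # [(True, True, True), (False, False, False)]
```

Only the all-true and all-false assignments survive, so whenever $B=T$ has positive probability, $P(A=T\mid B=T)=1$ rather than $0.1$, confirming the inconsistency.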

Now about your more general confusion: You seem to be thinking of conditional probabilities as something one-directional, inherently asymmetrical, something like causality perhaps. They're not, and thus there's no reason why they should be acyclic.

As an example, consider three coins that never all show the same side, with all $6$ outcomes in which they don't all show the same side being equiprobable. Let $A$, $B$, $C$ be the respective events that they show heads. Then

$$P(A)=P(B)=P(C)=\frac12$$

whereas

$$ P(A\mid B)=P(B\mid C)=P(C\mid A)=\frac13\;. $$
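This example is small enough to verify by direct enumeration; a short Python sketch (for illustration, using exact fractions):

```python
from fractions import Fraction
from itertools import product

# Sample space: all coin triples except HHH and TTT, each with probability 1/6
outcomes = [o for o in product("HT", repeat=3) if len(set(o)) > 1]
p = {o: Fraction(1, 6) for o in outcomes}

def prob(event):
    """Total probability of the outcomes satisfying the given predicate."""
    return sum(p[o] for o in outcomes if event(o))

pA = prob(lambda o: o[0] == "H")                                # P(A)
pA_given_B = (prob(lambda o: o[0] == "H" and o[1] == "H")
              / prob(lambda o: o[1] == "H"))                    # P(A | B)
print(pA, pA_given_B)  # 1/2 1/3
```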

Or consider three switches of which exactly one is on, all with equal probability, and let $A$, $B$, $C$ be the respective events of the switches being on. Then

$$P(A)=P(B)=P(C)=\frac13$$

whereas

$$ P(A\mid B)=P(B\mid C)=P(C\mid A)=0\;. $$
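The switch example can be checked the same way; an illustrative Python sketch:

```python
from fractions import Fraction

# Exactly one of three switches is on, each with probability 1/3
outcomes = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
p = Fraction(1, 3)  # probability of each outcome

pA = sum(p for o in outcomes if o[0] == 1)                 # P(A)
pB = sum(p for o in outcomes if o[1] == 1)                 # P(B)
pAB = sum(p for o in outcomes if o[0] == 1 and o[1] == 1)  # P(A and B) = 0
pA_given_B = pAB / pB
print(pA, pA_given_B)  # 1/3 0
```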

(You said you weren't interested in the case of only two variables or events; otherwise we'd even have $P(A)=P(B)=\frac12$ with $P(A\mid B)=P(B\mid A)=0$ in both cases.)

There's nothing mysterious about this; in either case the three events are entirely symmetrical. They just happen to be dependent, and there's no reason why this dependence shouldn't make all the conditional probabilities differ from their unconditional counterparts.


After some clarification, you seem to be asking whether it's possible for three events $A,B,C$ to be dependent on one another in a non-trivial way (i.e. none of the probabilities or conditional probabilities are zero or one).

This is possible. For instance, let $Z\in \{H,T\}$ be a fair coin toss, and then let $A$, $B$, and $C\in \{H,T\}$ be three coin tosses, independent given $Z$, of a weighted coin whose weight is determined by $Z$: if $Z=H$, the probability of heads is $3/4$, and if $Z=T$, it is $1/4$. It is clear by symmetry that $P(A=H)=P(B=H)=P(C=H)=1/2$. But it should also be clear intuitively that $P(A=H\mid B=H) > 1/2$ (since if $B=H$, it's more likely that $Z=H$, and thus more likely that $A=H$), and similarly for all the other conditionals.

We can calculate all this as well. We have $$ P(A=H) = \frac{1}{2}P(A=H\mid Z=H) + \frac{1}{2}P(A=H\mid Z=T) = \frac{1}{2}\frac{3}{4} + \frac{1}{2}\frac{1}{4} =\frac{1}{2} $$ as expected (and same for $P(B)$ and $P(C)$). And then $$ P(\{A=H\}\cap \{B=H\}) = \frac{1}{2}P(\{A=H\}\cap \{B=H\}\mid Z=H) + \frac{1}{2}P(\{A=H\}\cap \{B=H\}\mid Z=T) \\= \frac{1}{2}\frac{3}{4}\frac{3}{4}+ \frac{1}{2}\frac{1}{4}\frac{1}{4}=\frac{5}{16},$$ so $$ P(A=H\mid B=H) = \frac{P(\{A=H\}\cap \{B=H\})}{P(B=H)} = \frac{5/16}{1/2} = 5/8$$ which is greater than $1/2$ as expected. The exact same calculation would give the same answer for $B=H$ conditional on $C=H$ and $C=H$ conditional on $A=H.$
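The same numbers fall out of a direct calculation in code; a Python sketch of the law-of-total-probability computation above (illustrative only):

```python
from fractions import Fraction

# Z is a fair coin; given Z, the tosses A, B, C are i.i.d. with
# P(heads | Z=H) = 3/4 and P(heads | Z=T) = 1/4.
half = Fraction(1, 2)
p_heads = {"H": Fraction(3, 4), "T": Fraction(1, 4)}  # P(heads | Z)

# Marginal P(A=H): average over the two values of Z
pA = half * p_heads["H"] + half * p_heads["T"]

# Joint P(A=H, B=H): A and B are conditionally independent given Z
pAB = half * p_heads["H"] ** 2 + half * p_heads["T"] ** 2

pA_given_B = pAB / pA  # P(B=H) equals pA by symmetry
print(pA, pAB, pA_given_B)  # 1/2 5/16 5/8
```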


Your original proposal is essentially impossible. To simplify writing, let $a=(A=T)$, etc. Then the first statement says $P(a\mid b)=0.1$ and $P(a\mid b')=0$. These can be rewritten as $P(a\cap b)=0.1\,P(b)$ and $P(a\cap b')=0$. Since in general $P(a)=P(a\cap b)+P(a\cap b')$, using $P(a\cap b')=0$ we get $P(a\cap b)=P(a)=0.1\,P(b)$. Putting this together with the other two statements gives $P(a)=0.1\,P(b)=0.01\,P(c)=0.001\,P(a)$. This can hold only in the trivial case $P(a)=P(b)=P(c)=0$.
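The chain of substitutions can be spelled out numerically; a small Python sketch (illustrative, using exact fractions):

```python
from fractions import Fraction

# The three statements chain to P(a) = (0.1)^3 * P(a) = P(a)/1000,
# i.e. (1 - 1/1000) * P(a) = 0, whose only solution is P(a) = 0.
coeff = 1 - Fraction(1, 10) ** 3   # 999/1000, nonzero
pa = Fraction(0)                   # the unique solution of coeff * pa == 0
assert coeff * pa == 0
print(coeff, pa)  # 999/1000 0
```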

In your original example you concluded that $P(A=T)=0.0$ is the only thing that makes sense. You are right.