Given three random variables $A$, $B$, and $C$, can we in general reason about them if $P(A = a\mid B = b)$, $P(B = b\mid C = c)$, and $P(C = c\mid A = a)$ all hold non-trivially, i.e. none is equal to its unconditional counterpart? (That is, $A$, $B$, and $C$ are conditioned on each other in a cycle.)
If so, does this change if the three random variables are over disjoint, potentially binary/boolean outcome spaces?
If not, does it even make sense to write such relationships between the variables? Is there anything fundamental that stops us from making such definitions?
Such reasoning seems difficult or infeasible in general, as finding, say, $P(A=a)$ appears to require an infinite regress of conditioning.
A further, generalised question, which I assume has the same answer, is whether the conditional dependence structure of random variables must form a directed acyclic simple graph; this is relevant to my context of Bayesian networks.
Edit: As an example, consider three random variables (or possibly events?) $A$, $B$, and $C$ which are each binary. Suppose
- $P(A=T\mid B=T) = 0.1$ and $P(A=T\mid B=F) = 0$,
- $P(B=T\mid C=T) = 0.1$ and $P(B=T\mid C=F) = 0$, and
- $P(C=T\mid A=T) = 0.1$ and $P(C=T\mid A=F) = 0$.
It seems to me that actually $P(A=T)=0$ by some form of limit reasoning. But perhaps my question doesn't make sense?
First, concretely about your example: the three zero probabilities on the right say that $A$, $B$ and $C$ are logically equivalent. Interpreting $T$ as true and $F$ as false, $P(A=T\mid B=F)=0$ says $\neg B\Rightarrow\neg A$, and likewise the other two say $\neg C\Rightarrow\neg B$ and $\neg A\Rightarrow\neg C$. Thus $A$, $B$ and $C$ are either all $F$ or all $T$. But then the probabilities on the left are all $1$ (or undefined, if the variables are never $T$), not $0.1$, so the scenario you describe is impossible.
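If it helps, the "all $F$ or all $T$" claim can be checked by brute force (a Python sketch; the variable names and the encoding of the implications are mine, not part of the argument):

```python
from itertools import product

# The three zero conditionals force A => B, B => C and C => A,
# i.e. no probability mass may sit on any assignment violating them.
# Enumerate all 8 truth assignments to (A, B, C) and keep the survivors.
consistent = [
    (a, b, c)
    for a, b, c in product([True, False], repeat=3)
    if (not a or b) and (not b or c) and (not c or a)
]
print(consistent)  # only (True, True, True) and (False, False, False)
```

So any joint distribution satisfying the zero conditionals puts all its mass on "all true" or "all false", which is exactly what contradicts the $0.1$ values on the left.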
Now about your more general confusion: You seem to be thinking of conditional probabilities as something one-directional, inherently asymmetrical, something like causality perhaps. They're not, and thus there's no reason why they should be acyclic.
As an example, consider three coins that never all show the same side, with all $6$ remaining results equiprobable. Let $A$, $B$, $C$ be the respective events that the coins show heads. Then
$$P(A)=P(B)=P(C)=\frac12$$
whereas
$$ P(A\mid B)=P(B\mid C)=P(C\mid A)=\frac13\;. $$
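These values are easy to verify by exact enumeration (a Python sketch; the helper `prob` and the event encodings are mine):

```python
from fractions import Fraction
from itertools import product

# Sample space: all coin outcomes except HHH and TTT, each with probability 1/6.
outcomes = [w for w in product([True, False], repeat=3)
            if not (all(w) or not any(w))]
p = Fraction(1, len(outcomes))  # 1/6

def prob(event):
    """Probability of an event, i.e. total mass of outcomes satisfying it."""
    return sum(p for w in outcomes if event(w))

A = lambda w: w[0]  # first coin shows heads
B = lambda w: w[1]  # second coin shows heads

print(prob(A))                                  # 1/2
print(prob(lambda w: A(w) and B(w)) / prob(B))  # P(A | B) = 1/3
```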
Or consider three switches of which exactly one is on, all with equal probability, and let $A$, $B$, $C$ be the respective events of the switches being on. Then
$$P(A)=P(B)=P(C)=\frac13$$
whereas
$$ P(A\mid B)=P(B\mid C)=P(C\mid A)=0\;. $$
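The switch example can be checked the same way (again a Python sketch with my own names):

```python
from fractions import Fraction

# Sample space: exactly one of the three switches is on, each equally likely.
outcomes = [(True, False, False), (False, True, False), (False, False, True)]
p = Fraction(1, 3)

def prob(event):
    """Probability of an event, i.e. total mass of outcomes satisfying it."""
    return sum(p for w in outcomes if event(w))

A = lambda w: w[0]  # first switch on
B = lambda w: w[1]  # second switch on

print(prob(A))                                  # 1/3
print(prob(lambda w: A(w) and B(w)) / prob(B))  # P(A | B) = 0
```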
(You said you weren't interested in only two variables or events, otherwise we'd even have had $P(A)=P(B)=\frac12$ with $P(A\mid B)=P(B\mid A)=0$ in both cases.)
There's nothing mysterious about this: in either case the three events are entirely symmetrical; they just happen to be dependent, and there's no reason why this dependence shouldn't make all the conditional probabilities differ from their unconditional counterparts.