Setup:
A probability space $(\Omega, \mathcal A, P)$ is a set $\Omega$ together with a $\sigma$-algebra $\mathcal A$ on $\Omega$ and a probability measure $P$ on $\mathcal A$. Consider the following interpretation:
A probability space consists of a set $\Omega$ of possible states of the world and a collection $\mathcal A$ of subsets of $\Omega$ (events) about whose occurrence we have formed a consistent system of beliefs $P$.
In this interpretation, a random variable $X: (\Omega, \mathcal A) \to (\mathbb R, \mathcal B)$ can be understood as an experiment: If the true (unobservable) state of the world is $\omega$, the result of the experiment will be $X(\omega)$. Before the experiment occurs, our belief that the result of the experiment will lie in $B \in \mathcal B$ is $P(X \in B)=P(\{\omega: X(\omega) \in B\})$.
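To make the interpretation concrete, here is a minimal sketch on a hypothetical finite probability space (two fair coin tosses with the uniform measure; the names `omega`, `prob` are illustrative choices, not standard notation). It computes $P(X \in B)$ as the belief mass of the preimage $\{\omega : X(\omega) \in B\}$, using exact rational arithmetic:

```python
from fractions import Fraction

# Hypothetical finite probability space: two fair coin tosses.
# Omega is the set of possible states; P assigns each state equal belief.
omega = ["HH", "HT", "TH", "TT"]
P = {w: Fraction(1, 4) for w in omega}

def X(w):
    # A random variable X: Omega -> R, here the number of heads.
    return w.count("H")

def prob(X, B):
    # Belief that the experiment's result lies in B, i.e. P(X in B),
    # computed as P({omega : X(omega) in B}).
    return sum(P[w] for w in omega if X(w) in B)

print(prob(X, {1}))     # belief in "exactly one head"
print(prob(X, {0, 2}))  # belief in "zero or two heads"
```

Before the "experiment" runs we do not know which $\omega$ is the true state; `prob` aggregates our beliefs over all states consistent with the event $\{X \in B\}$.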
Question:
In the real world, if we conduct two randomly chosen experiments somewhere in the universe, then a priori we would expect the associated random variables to be independent.
On the other hand, if we take two "randomly chosen" random variables $X, Y$ defined on the same probability space, then independence appears to be a "knife-edge" property: the factorization $$P(X \in B, Y \in B') = P(X \in B)P(Y \in B')$$ has to hold for every pair $B, B' \in \mathcal B$. So it appears that we should never expect two randomly chosen random variables to be independent.
How can the contradiction between these two intuitions be resolved? Also, what would be a "canonical" way of picking random variables at random?