To make sure that we are all on the same page, I will state the following two definitions:
Definition A: Two families of random variables $(X_a)_{a\in A}$ and $(X_b)_{b\in B}$ are said to be independent if and only if the two sigma fields $\sigma(\{X_a:a\in A\})$ and $\sigma(\{X_b:b\in B\})$ are independent.
Definition B: Consider an undirected, simple graph $G=(V,E)$, where $V$ is the vertex set and $E$ the edge set, and a family of random variables $(X_a)_{a\in V}$. Then $G$ is a dependency graph for the $X_a$ if the following holds: for any disjoint subsets $I,J\subset V$ such that there is no edge between any vertex of $I$ and any vertex of $J$ (i.e. $\{i,j\}\notin E$ for all $i\in I$ and $j\in J$), the families $(X_i)_{i\in I}$ and $(X_j)_{j\in J}$ are independent.
Example: If you consider the graph $G$ with vertices $V=\{1,2,3\}$ and no edges ($E=\emptyset$), then it is a dependency graph for $(X_1,X_2,X_3)$ if and only if the random variables are independent.
My question: Suppose I am given a finite, undirected, simple graph $G=(V,E)$. I want to construct a family of random variables $(X_a)_{a\in V}$ such that $G$ is a dependency graph for the $X_a$. Note that many such families exist (for example, if the $X_a$ form an independent family, then $G$ is automatically a dependency graph for them), so I would like one with "as few independencies as possible": if there is an edge between $i,j\in V$, then $X_i$ and $X_j$ should not be independent. Is there any known result on how to construct such random variables? Is it known whether they always exist? (Judging by, for instance, exercise 18 from here, it seems they should.)
Cross-posted on Mathoverflow: https://mathoverflow.net/questions/363417/.
The natural place to go is a dependent percolation model. Pick $p\in(0,1)$ and consider the probability measure $\mu$ on $\{0,1\}^E$ given by the product of $\mathrm{Ber}(p)$ measures, one per coordinate, i.e. $$ \mu(\{f\})=p^{|\{e\,:\,f(e)=1\}|}(1-p)^{|\{e\,:\,f(e)=0\}|}. $$ You could also pick a non-constant probability vector (a different parameter $p_e$ for each edge $e$) if you want.
Then we define random variables $X_v$ on $\{0,1\}^E$ by $X_v(f)=\prod_{e\in E:\, v\in e} f(e)$, i.e. we set $X_v=1$ if $f(e)=1$ for every edge $e$ incident to $v$, and $X_v=0$ otherwise. (For an isolated vertex $v$ the empty product gives $X_v\equiv 1$.)
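For concreteness, here is a minimal Python sketch of this construction (the names `induced_X` and `percolation_sample` are mine; edges are represented as frozensets of their two endpoints):

```python
import random

def induced_X(V, E, f):
    """Given an edge configuration f: E -> {0,1}, return the induced
    vertex variables X_v = product of f(e) over edges e incident to v
    (the empty product at an isolated vertex gives X_v = 1)."""
    X = {}
    for v in V:
        X[v] = 1
        for e in E:
            if v in e:
                X[v] *= f[e]
    return X

def percolation_sample(V, E, p, rng=random):
    """Sample f from the product Bernoulli(p) measure on {0,1}^E
    and return the induced vertex variables."""
    f = {e: 1 if rng.random() < p else 0 for e in E}
    return induced_X(V, E, f)

# Example: the path graph 1 - 2 - 3.
V = [1, 2, 3]
E = [frozenset({1, 2}), frozenset({2, 3})]
print(percolation_sample(V, E, 0.5))
```

Each call to `percolation_sample` draws one configuration of open/closed edges and reports which vertices have all of their incident edges open.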
By construction, if $E_1$ and $E_2$ are two disjoint sets of edges, then $\{f(e)\,:\,e\in E_1\}$ and $\{f(e)\,:\,e\in E_2\}$ are independent. Furthermore, we have $$ \sigma(\{X_v : v\in I\})\subset\sigma(\{f(e)\,:\,e\in E_I\}), $$ where $E_I$ is the set of edges with at least one end-point in $I$. Hence, for two sets of vertices $I$ and $J$, if $E_I\cap E_J=\emptyset$, then $(X_v)_{v\in I}$ is independent of $(X_w)_{w\in J}$. This is exactly what we need: if $I$ and $J$ are disjoint and no edge joins them, then no edge can have an end-point in both, so $E_I\cap E_J=\emptyset$.
On the other hand, if $v$ and $w$ are neighbours and $w$ has degree $\deg(w)$, then conditioning on $X_v=1$ forces the edge $\{v,w\}$ to be open, so $\mu(X_w=1\mid X_v=1)= p^{\deg(w)-1}\neq p^{\deg(w)}=\mu(X_w=1)$, since $p\in(0,1)$. Hence $X_v$ and $X_w$ are not independent, and your minimality requirement is satisfied.
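Both properties can be checked exactly on a small example by enumerating all $2^{|E|}$ edge configurations. Here is a sketch for the path graph $1-2-3$, using exact rational arithmetic (the helper names `X` and `prob` are mine):

```python
from fractions import Fraction
from itertools import product

def X(v, E, f):
    # X_v = 1 iff every edge incident to v is open in the configuration f.
    return all(f[e] for e in E if v in e)

def prob(E, p, event):
    """Sum mu({f}) over all f in {0,1}^E with event(f) true, where mu is
    the product Bernoulli(p) measure."""
    total = Fraction(0)
    for bits in product([0, 1], repeat=len(E)):
        f = dict(zip(E, bits))
        if event(f):
            weight = Fraction(1)
            for b in bits:
                weight *= p if b else 1 - p
            total += weight
    return total

# Path graph 1 - 2 - 3: vertices 1 and 2 are adjacent, 1 and 3 are not.
E = [frozenset({1, 2}), frozenset({2, 3})]
p = Fraction(1, 2)

p1 = prob(E, p, lambda f: X(1, E, f))
p2 = prob(E, p, lambda f: X(2, E, f))
p3 = prob(E, p, lambda f: X(3, E, f))
p12 = prob(E, p, lambda f: X(1, E, f) and X(2, E, f))
p13 = prob(E, p, lambda f: X(1, E, f) and X(3, E, f))

print(p12, p1 * p2)  # 1/4 vs 1/8: the neighbours 1 and 2 are dependent
print(p13, p1 * p3)  # 1/4 vs 1/4: the non-adjacent pair 1, 3 is independent
```

The adjacent pair fails the product rule while the non-adjacent pair satisfies it, matching the $E_I\cap E_J$ argument above.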