Greedy coloring of uniform hypergraphs


I'm reading this paper, and I have a (simple) question about how the author uses one of the paper's definitions.

Definition 1: We say that $A \in E$ precedes $B \in E$ if the last vertex of $A$ becomes red because it was the first element of $B$.

(The coloring is not important)

The problematic definition:

Definition 2: If $A_1,\dots,A_n$ are events of a probability space, then a dependence graph $G = (V,E)$ of these events is a graph having the following properties: $V = \{1,\dots,n\}$, and each event $A_i$ is mutually independent of the events $\{A_j : (i,j) \notin E\}$. Let $\deg_G(v)$ be the degree of a vertex $v$ in $G$.
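(A standard way to build such a graph, my gloss rather than anything from the paper: if each $A_i$ is determined by a set $S_i$ of mutually independent random variables, then joining $i$ and $j$ whenever $S_i \cap S_j \neq \emptyset$ yields a dependence graph, because $A_i$ is mutually independent of all events whose defining variables avoid $S_i$. This is the spirit of the argument below, although a random permutation is not built from independent coordinates, which is why the answer needs the more careful three-step decomposition.)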

The theorem he proves:

Theorem 4: Let $H = (V,E)$ be an $n$-uniform hypergraph in which each edge meets at most $D$ other edges. If $\frac{2e(2D^2-D)((n-1)!)^2}{(2n-1)!} \leq 1$ then $H$ is 2-colorable.
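(A reading aid of mine, not something spelled out in the excerpt: the hypothesis has exactly the shape $e \cdot p \cdot d \leq 1$ of the symmetric Lovász Local Lemma, where $p = \frac{2((n-1)!)^2}{(2n-1)!}$ turns out to be the probability of a single bad event, computed in the answer below, and $d = 2D^2 - D$ presumably bounds the number of events not covered by the mutual-independence condition.)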

The proof:

Let us consider a uniformly random order of $V$. For $A,B \in E$ let $A_{AB}$ be the bad event that either $A$ precedes $B$ or $B$ precedes $A$. Clearly, the event $A_{AB}$ is mutually independent of all the other events $A_{RS}$ when $(A \cup B) \cap (R \cup S) = \emptyset$. … (not important) …

My question:

In Definition 2, he defines when two events are dependent, where the indices of those events are vertices of a "normal" graph (not a hypergraph). Now he talks about two events that are "clearly" (???) mutually independent, where those events are indexed by pairs of edges ($A_{AB}$ and $A_{RS}$). How is dependence between two events defined here?

On BEST ANSWER

Definition 2 is not a definition of when events are or are not dependent. Rather, it is a definition of a dependence graph in terms of which events are independent. This definition is later used when we apply the Lovász Local Lemma, because that lemma is stated in terms of a dependence graph between the events.

Dependency between events is determined by the probability space. Here, we are picking a uniformly random order of $V$, and $\mathcal A_{AB}$, for a pair $(A,B)$ with $A \cap B = \{x\}$, is the event that $x$ happens to be the first element of one set ($A$ or $B$) and the last element of the other ($B$ or $A$) in this random order.
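For concreteness, here is the probability of this event; this computation is mine rather than a quote from the paper, and assumes $A$ and $B$ share exactly the one vertex $x$. Only the relative order of the $2n-1$ elements of $A \cup B$ matters. For "$A$ precedes $B$", $x$ must occupy position $n$ among them, with the $n-1$ elements of $A \setminus \{x\}$ before it (in any of $(n-1)!$ orders) and the $n-1$ elements of $B \setminus \{x\}$ after it, so

$$\Pr(\mathcal A_{AB}) = 2 \cdot \frac{(n-1)!\,(n-1)!}{(2n-1)!},$$

the factor $2$ accounting for the symmetric event that $B$ precedes $A$. This matches the $\frac{((n-1)!)^2}{(2n-1)!}$ factor in Theorem 4.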

Let me explain in detail the argument summarized by the one word "clearly".

The reason that $\mathcal A_{AB}$ is mutually independent of all the events $\mathcal A_{RS}$ with $(A \cup B) \cap (R \cup S) = \varnothing$ is that we may choose the random order of $V$ as follows:

  1. Choose the positions of the $|A\cup B|$ elements of $A \cup B$, without choosing their relative order.
  2. Choose the positions of each element outside $A\cup B$.
  3. Choose the relative order of the elements of $A \cup B$.

Steps 2 and 3 are independent, the outcomes of events $\mathcal A_{RS}$ with $R \cup S$ disjoint from $A \cup B$ are completely determined by step 2, and the outcome of $\mathcal A_{AB}$ is completely determined by step 3. This proves the mutual independence.
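One way to check that these three steps really produce a uniformly random order is the counting identity

$$|V|! \;=\; \binom{|V|}{|A \cup B|} \cdot \bigl(|V| - |A \cup B|\bigr)! \cdot |A \cup B|!,$$

which says that every order of $V$ arises from exactly one triple of choices in steps 1–3.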

Formally, we are proving the statement that if we condition on the outcomes of any number of events $\mathcal A_{RS}$ with $R \cup S$ disjoint from $A \cup B$, this does not change the probability of $\mathcal A_{AB}$.
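If it helps, this independence can also be verified by brute force on a small example. The sketch below is my own illustration, not from the paper: it takes $n = 2$, edges $A,B$ sharing one vertex and $R,S$ sharing one vertex, with $(A \cup B) \cap (R \cup S) = \varnothing$, enumerates all orders of $V$, and checks both the probability formula and the independence of the two bad events.

```python
from itertools import permutations
from fractions import Fraction

# n = 2: edges are pairs. A and B share vertex 2; R and S share vertex 5.
# (A ∪ B) ∩ (R ∪ S) = ∅, so the two bad events should be independent.
A, B = {1, 2}, {2, 3}
R, S = {4, 5}, {5, 6}
V = sorted(A | B | R | S)

def precedes(X, Y, pos):
    # "X precedes Y": the last vertex of X (in the order) is the first vertex of Y.
    return max(X, key=pos.__getitem__) == min(Y, key=pos.__getitem__)

def bad(X, Y, pos):
    # The bad event A_XY: X precedes Y or Y precedes X.
    return precedes(X, Y, pos) or precedes(Y, X, pos)

total = count_ab = count_rs = count_both = 0
for order in permutations(V):
    pos = {v: i for i, v in enumerate(order)}
    e_ab, e_rs = bad(A, B, pos), bad(R, S, pos)
    total += 1
    count_ab += e_ab
    count_rs += e_rs
    count_both += e_ab and e_rs

p_ab = Fraction(count_ab, total)
p_rs = Fraction(count_rs, total)
p_both = Fraction(count_both, total)

print(p_ab)                    # 1/3, matching 2((n-1)!)^2/(2n-1)! for n = 2
assert p_both == p_ab * p_rs   # conditioning on A_RS does not change P(A_AB)
```

For larger $n$ the same enumeration works in principle but grows like $|V|!$, so a Monte Carlo version over random orders would be the practical check.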