How to determine the weights in the asymmetric Lovász Local Lemma


The application of the asymmetric Lovász Local Lemma requires finding a weight function $x$ on the bad events satisfying the property in the link. Often one uses a constant weight function, giving rise to the symmetric LLL. Suppose this fails. Is there a systematic way to search for a better choice of $x$?


In practice, it is not really done systematically. You either recognize a pattern you've seen in previous applications of the local lemma, or else you have to be very clever. Two patterns that I know:

  1. If you only have two types of events, you have two values $x_1$ and $x_2$, and you just solve an optimization problem to figure out which values give you the best bounds. Example: Ramsey numbers $R(3,k)$ or $R(4,k)$.
  2. If your bad events correspond to something happening on sets of varying size, let $x(A)$ be exponentially decaying in the size of the set that determines $A$. Again, solve an optimization problem to determine the constant of the exponential. Example: acyclic edge coloring, nonrepetitive coloring.
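For pattern 1, the optimization is usually a tiny two-variable search. Here is a minimal sketch in Python; the probabilities $p_1, p_2$ and dependency degrees $d_{ij}$ are invented for illustration and don't come from any real application:

```python
import itertools

# Hypothetical two-type setup (all numbers invented for illustration):
# type-1 bad events have probability p1 and each depends on d11 type-1
# and d12 type-2 events; symmetrically for type 2.
p1, p2 = 0.01, 0.001
d11, d12 = 2, 3
d21, d22 = 5, 1

def lll_ok(x1, x2):
    """Asymmetric LLL condition: Pr[A] <= x(A) * prod_{B ~ A} (1 - x(B))."""
    c1 = x1 * (1 - x1) ** d11 * (1 - x2) ** d12 >= p1
    c2 = x2 * (1 - x1) ** d21 * (1 - x2) ** d22 >= p2
    return c1 and c2

# Crude grid search over candidate weight pairs in (0, 1).
grid = [i / 200 for i in range(1, 200)]
feasible = [(x1, x2) for x1, x2 in itertools.product(grid, grid)
            if lll_ok(x1, x2)]
# Any pair in `feasible` is a valid choice of weights; one would then
# optimize over it for whatever bound the application needs.
```

In real applications (e.g. the $R(3,k)$ bound) one does this search symbolically, optimizing the resulting bound as a function of $x_1, x_2$ rather than on a grid.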

In addition, there's also a "compromise local lemma" that's asymmetric but doesn't require you to choose weights. It says that we can avoid bad events with positive probability if, for each event $A$, $$\sum_{B \in \Gamma(A)} \Pr[B] \le \frac14.$$ We get this by choosing $x(A) = 2 \Pr[A]$ in the usual version of the LLL. The symmetric version is pretty clearly a special case.
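To check this choice against the standard asymmetric condition $\Pr[A] \le x(A) \prod_{B \in \Gamma(A)} (1 - x(B))$: with $x(A) = 2\Pr[A]$,

$$2\Pr[A] \prod_{B \in \Gamma(A)} (1 - 2\Pr[B]) \ge 2\Pr[A] \Bigl(1 - 2\sum_{B \in \Gamma(A)} \Pr[B]\Bigr) \ge 2\Pr[A] \cdot \frac12 = \Pr[A],$$

using first the Weierstrass inequality $\prod_i (1 - a_i) \ge 1 - \sum_i a_i$ for $a_i \in [0,1]$ (each $2\Pr[B] \le \frac12$, since each $\Pr[B]$ is at most the sum, which is at most $\frac14$), and then the hypothesis $\sum_{B \in \Gamma(A)} \Pr[B] \le \frac14$.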


The entropy compression approach to the local lemma may give some intuition for what's going on with the weights. Let me summarize the idea (you can read more about it here); I apologize for the length of the digression before we can get back to answering your question.

Say that we're trying to find a function $f : X \to Y$ satisfying conditions of the form "on a subset $S \subseteq X$, $f$ doesn't do this highly specific thing". It's traditional to think of this as a $Y$-coloring of the set $X$.

Our algorithm will be as follows. Start with $X$ completely uncolored. Repeat the same step over and over:

  1. Pick the first element of $X$ that doesn't have a color, and color it uniformly at random.
  2. If a constraint on some set $S$ is violated, uncolor all of $S$. (In some cases, we can optimize by only uncoloring most of $S$; e.g., if the condition on $S$ is "can't be monochromatic", uncolor all but one vertex.)
  3. Write down what happened in this step, giving sufficient detail that the step can be reversed (and the color randomly chosen on this step reconstructed).
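The loop above can be made concrete. Here is a toy sketch in Python (the ground set and constraint sets are invented) for the "no set monochromatic" constraint with the uncolor-all-but-one optimization:

```python
import random

random.seed(0)

# Toy instance (invented for illustration): 2-color X = {0,...,8} so that
# none of these size-3 sets ends up monochromatic.
X = list(range(9))
constraints = [{0, 1, 2}, {2, 3, 4}, {4, 5, 6}, {6, 7, 8}, {1, 4, 7}]
Y = [0, 1]

color = {}   # partial coloring
record = []  # log: enough detail to reverse each step

while len(color) < len(X):
    v = min(x for x in X if x not in color)  # first uncolored element
    color[v] = random.choice(Y)              # color it uniformly at random
    violated = next((S for S in constraints
                     if S <= color.keys()
                     and len({color[x] for x in S}) == 1), None)
    if violated is None:
        record.append(("colored", v))
    else:
        # Uncolor all but one element of the violated set.  Since the set
        # was monochromatic, the survivor's color determines every erased
        # color, so the step is reversible from the record alone.
        keep = min(violated)
        for x in sorted(violated - {keep}):
            del color[x]
        record.append(("uncolored", tuple(sorted(violated)), keep))

# On termination, every constraint set is non-monochromatic.
```

This sketch only runs the algorithm; the entropy compression argument is about *counting* the possible `record` values, which the code doesn't do.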

After $t$ steps, either $X$ has been colored, or we've turned a sequence of colors from $Y^t$ into a partial coloring of $X$ plus a record of what happened on each step. Reversibility guarantees that two different inputs (elements of $Y^t$) result in two different outputs.

If the record can be made terse enough that the number of possible records after $t$ steps grows only as $C^t$ for some $C < |Y|$, then for large enough $t$ there are fewer than $|Y|^t$ possible outputs (partial colorings plus records). Since the map from inputs to outputs is injective, not every input in $Y^t$ can fail to terminate: for some input, it must be possible for the algorithm to terminate - for $X$ to end up entirely colored.
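To spell out the counting: there are at most $P \le (|Y|+1)^{|X|}$ partial colorings of $X$, a bound independent of $t$. If no input in $Y^t$ led to termination, injectivity would force

$$|Y|^t \le P \cdot C^t,$$

which fails once $t > \log P / \log(|Y|/C)$, since $C < |Y|$. So for large enough $t$, some input sequence must produce a complete coloring.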


The entropy compression method is (more or less) equivalent to the local lemma. Instead of choosing values $x(A)$, we're choosing ways to encode the description "we uncolored the set $S$ determining $A$; here's how to undo that". These roughly correspond to each other in the sense that if we take $b(A)$ bits to encode this description, then probably something like $x(A) = 2^{-b(A)}$ will work as a weight in the local lemma.

(Intuitively, this is because $x(A)$ is an upper bound for the conditional probability of $A$ happening given that no other bad events happened; $\log_2 \frac{1}{x(A)}$ measures the information content of an event with probability $x(A)$, so there should be an encoding to match.)

The encoding of the description in the entropy compression algorithm is still not something we can really find systematically: it's as hard as finding any other good lossless data compression scheme. But you have some idea of how much you can afford to spend on encoding an event based on how complicated it is, so we're stumbling in the dark a little less here.