Help on application of Marton's transportation method (Boucheron-Lugosi-Massart)


I was trying to apply Marton's transportation inequality in the following exercise from Boucheron, Lugosi, and Massart's text on concentration inequalities:

Exercise 8.1. Use Marton's transportation inequality to show that if $P$ is a product probability measure on $\mathcal{X}^n$, then for any $A, B \subset \mathcal{X}^n$ measurable, $$ d_H(A, B) \leq \sqrt{\frac{n}{2} \log \frac{1}{P(A)}} + \sqrt{\frac{n}{2} \log \frac{1}{P(B)}}. $$ Above, $d_H(A, B) = \min_{x\in A, y \in B} \sum_{i=1}^n \mathbf{1}_{x_i \neq y_i}$ is the Hamming distance between $A$ and $B$.

Any hints at all would be appreciated. The only thing I've gotten is that by Cauchy-Schwarz, we can weaken the version of Marton's inequality given in the text to $$ \min_{\mathbf{P} \in \mathcal{P}(P, Q)} \sum_{i=1}^n \mathbf{P}(X_i \neq Y_i) \leq \sqrt{\frac{n}{2} D(Q \| P)}, $$ where $\mathcal{P}(P, Q)$ is, as stated in the text, the set of couplings of $P$ with $Q \ll P$. So one might hope to find $Q \ll P$ such that $\sqrt{D(Q \| P)} \leq \sqrt{\log \frac{1}{P(A)}} + \sqrt{\log \frac{1}{P(B)}}$, and then take an expectation. That's just a guess, though. I haven't made much progress trying to construct such a $Q$.
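For reference, here is the Cauchy-Schwarz step spelled out, assuming the version of Marton's inequality stated in the text is $\min_{\mathbf{P} \in \mathcal{P}(P, Q)} \sum_{i=1}^n \mathbf{P}(X_i \ne Y_i)^2 \le \frac{1}{2} D(Q \| P)$:

```latex
% Cauchy--Schwarz applied to the vector (P(X_i \ne Y_i))_{i=1}^n paired
% with the all-ones vector, followed by the text's form of Marton's
% inequality (assumed: \min_{\mathbf P} \sum_i \mathbf P(X_i \ne Y_i)^2
% \le \tfrac12 D(Q \| P)), at a minimizing coupling:
\sum_{i=1}^n \mathbf{P}(X_i \ne Y_i)
  \;\le\; \sqrt{\, n \sum_{i=1}^n \mathbf{P}(X_i \ne Y_i)^2 \,}
  \;\le\; \sqrt{\frac{n}{2}\, D(Q \,\|\, P)}.
```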

On BEST ANSWER

When applying Marton's inequality, you have not said how to link $d_H(A, B)$ to $\min \sum_i \mathbf{P}(X_i \ne Y_i)$. In particular, $X$ must be distributed according to $P$ and $Y$ according to $Q$ (or vice versa).


We may assume WLOG that $A$ and $B$ have positive $P$-measure; otherwise the right-hand side is infinite and the inequality holds trivially.

Let $Z \sim P$, and let $X \sim Q_A$ and $Y \sim Q_B$, where $Q_A$ and $Q_B$ are probability measures (specified below) supported on $A \cap \operatorname{supp}(P)$ and $B \cap \operatorname{supp}(P)$, respectively.

Since $X \in A$ and $Y \in B$ almost surely, the triangle inequality for the Hamming distance gives, almost surely, $$\min_{x \in A, y \in B} \sum_{i=1}^n \mathbf{1}_{x_i \ne y_i} \le \min_{x \in A, y \in B} \left(\sum_{i=1}^n \mathbf{1}_{x_i \ne Z_i} + \sum_{i=1}^n \mathbf{1}_{Z_i \ne y_i}\right) \le \sum_{i=1}^n \mathbf{1}_{X_i \ne Z_i} + \sum_{i=1}^n \mathbf{1}_{Z_i \ne Y_i}.$$ Now couple $(Z, X)$ by a coupling of $P$ and $Q_A$ that attains the minimum in your relaxed inequality, couple $(Z, Y)$ likewise for $Q_B$, and glue the two couplings along the common coordinate $Z$. Taking expectations of both sides then yields $$d_H(A,B) \le \sum_{i=1}^n \mathbf{P}(Z_i \ne X_i) + \sum_{i=1}^n \mathbf{P}(Z_i \ne Y_i).$$ Applying your relaxation of Marton's inequality twice yields $$d_H(A,B) \le \sqrt{\frac{n}{2} D(Q_A\|P)} + \sqrt{\frac{n}{2} D(Q_B\|P)}.$$

Finally, take $Q_A(E) = P(E \cap A) / P(A)$, i.e. $P$ conditioned on $A$. Then $\frac{dQ_A}{dP} = \frac{\mathbf{1}_A}{P(A)}$, so $$D(Q_A \| P) = \underset{X \sim Q_A}{E}\left[ \log \frac{dQ_A}{dP}(X) \right] = \log \frac{1}{P(A)}.$$ Define $Q_B$ analogously, so that $D(Q_B \| P) = \log \frac{1}{P(B)}$, which completes the proof.
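As a numerical sanity check (my own illustration, not from the text or the answer), one can verify the final inequality exhaustively on a tiny product space: $\mathcal{X} = \{0,1\}$, $n = 3$, and $P$ the uniform product measure, for every pair of nonempty subsets $A, B \subseteq \{0,1\}^3$.

```python
import math
from itertools import product

n = 3
points = list(product([0, 1], repeat=n))  # all of {0,1}^3

def hamming(x, y):
    """Number of coordinates where x and y differ."""
    return sum(xi != yi for xi, yi in zip(x, y))

def d_H(A, B):
    """Hamming distance between sets: min over x in A, y in B."""
    return min(hamming(x, y) for x in A for y in B)

def prob(S):
    """Uniform product measure of a subset S of {0,1}^n."""
    return len(S) / 2**n

def bound(A, B):
    """Right-hand side of the exercise's inequality."""
    return (math.sqrt(n / 2 * math.log(1 / prob(A)))
            + math.sqrt(n / 2 * math.log(1 / prob(B))))

# Tightest-looking case: two opposite corners, each of measure 1/8.
A, B = [(0, 0, 0)], [(1, 1, 1)]
print(d_H(A, B), bound(A, B))  # 3 versus roughly 3.53

# Exhaustive check over all pairs of nonempty subsets.
nonempty = [[p for j, p in enumerate(points) if (mask >> j) & 1]
            for mask in range(1, 2**len(points))]
for A in nonempty:
    for B in nonempty:
        assert d_H(A, B) <= bound(A, B) + 1e-12
```

The check passes, as the theorem guarantees; the corner pair $A = \{000\}$, $B = \{111\}$ shows the bound is within about 18% of tight here ($3$ versus $\approx 3.53$).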