Show that, for $p = \frac{c}{n}$, $c > 1$ a constant, with high probability $\Delta(G_{n,p}) = (1+o(1))\frac{\log n}{\log\log n}$.


I’m reading this proof and the argument is not entirely clear to me. It goes as follows. For $\epsilon > 0$, let $d_+ = (1 + \epsilon)\frac{\log n}{\log\log n}$ and $d_- = (1 - \epsilon)\frac{\log n}{\log\log n}$. Also let $X_d$ be the non-negative integer-valued random variable counting the number of vertices of degree $d$ in $G_{n,p}$.

Then we show:

  • w.h.p., there are no vertices $v$ with $\deg(v) \geq d_+$.
  • $\mathbb{E}(X_{d_-}) \to \infty$.
  • $\mathbb{P}(X_{d_-} = 0) \to 1$.

How do these three points imply the conclusion in the statement? I understand what the first point says, but not what the last two points, taken together, give us.

Best answer:

Note on notation: I'm going to write $\mathbf X$ instead of $\mathbf X_{d_-}$ just to avoid the nested subscripts.

We don't want to prove $\Pr[\mathbf X = 0] \to 1$; we want to prove $\Pr[\mathbf X > 0] \to 1$. In other words, for every $\epsilon>0$, whp there is a vertex of degree exactly $(1-\epsilon)\frac{\log n}{\log \log n}$, so $\Delta(\mathbf G_{n,p}) \ge (1-\epsilon)\frac{\log n}{\log \log n}$ whp. Combined with the first point, which bounds $\Delta(\mathbf G_{n,p})$ above by $d_+$ whp, this yields $\Delta(\mathbf G_{n,p}) \sim \frac{\log n}{\log \log n}$ whp.
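Spelled out, the way the bullet points combine (using only the statements above) is:

```latex
% Fix \epsilon > 0. With probability 1 - o(1), both events hold simultaneously:
%   (i)  no vertex has degree \ge d_+                (first point),
%   (ii) X_{d_-} > 0, i.e. some vertex has degree d_- (second-moment argument).
% Hence, whp,
(1-\epsilon)\frac{\log n}{\log\log n} = d_- \;\le\; \Delta(G_{n,p}) \;<\; d_+ = (1+\epsilon)\frac{\log n}{\log\log n}.
% Since \epsilon > 0 was arbitrary, \Delta(G_{n,p}) = (1+o(1))\frac{\log n}{\log\log n} whp.
```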

The second-moment inequality we need is $\Pr[\mathbf X > 0] \ge \frac{\mathbb E[\mathbf X]^2}{\mathbb E[\mathbf X^2]}$ (a consequence of the Cauchy–Schwarz inequality). Once we have $\mathbb E[\mathbf X] \to \infty$ and $\mathbb E[\mathbf X^2] = \mathbb E[\mathbf X] + (1+o(1)) \mathbb E[\mathbf X]^2$, the right-hand side becomes $\frac{\mathbb E[\mathbf X]^2}{\mathbb E[\mathbf X] + (1+o(1)) \mathbb E[\mathbf X]^2} = \frac{1}{1 + o(1) + 1/\mathbb E[\mathbf X]}$. Since $\mathbb E[\mathbf X] \to \infty$, $1/\mathbb E[\mathbf X]$ is also $o(1)$, so we conclude $\Pr[\mathbf X > 0] \ge \frac1{1+o(1)} = 1 - o(1)$.
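The inequality $\Pr[\mathbf X > 0] \ge \mathbb E[\mathbf X]^2/\mathbb E[\mathbf X^2]$ holds for any non-negative random variable, so it is easy to check empirically. Below is a small sketch; the choice of a Binomial count as a stand-in for $\mathbf X$ is purely illustrative, not part of the proof:

```python
import random

def second_moment_bound(samples):
    """Empirical Pr[X > 0] and the bound E[X]^2 / E[X^2] for a nonnegative RV."""
    m = len(samples)
    e_x = sum(samples) / m
    e_x2 = sum(x * x for x in samples) / m
    p_pos = sum(1 for x in samples if x > 0) / m
    return p_pos, e_x ** 2 / e_x2

rng = random.Random(42)
# Hypothetical stand-in for X: a Binomial(50, 0.04) count (mean 2).
samples = [sum(rng.random() < 0.04 for _ in range(50)) for _ in range(100_000)]
p_pos, bound = second_moment_bound(samples)
print(p_pos, bound)  # empirically, p_pos dominates the second-moment bound
```

When $\mathbb E[\mathbf X]$ is driven to infinity, the bound itself tends to $1$, which is exactly how the proof forces $\Pr[\mathbf X > 0] \to 1$.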

Note on notation: $1+o(1)$ and $1-o(1)$ are of course synonymous, but I feel bad writing $1+o(1)$ for a probability.

Actually, under the same hypotheses, we can prove something stronger. We have $\operatorname{Var}[\mathbf X] = \mathbb E[\mathbf X^2] - \mathbb E[\mathbf X]^2 = o( \mathbb E[\mathbf X]^2)$, so $\operatorname{SD}[\mathbf X] = o(\mathbb E[\mathbf X])$, and by Chebyshev's inequality, we have $\mathbf X \sim \mathbb E[\mathbf X]$ whp. That is, for all $\epsilon>0$, whp $(1-\epsilon)\mathbb E[\mathbf X] \le \mathbf X \le (1+\epsilon)\mathbb E[\mathbf X]$. So not only is there at least one vertex of degree $d_-$, but the number of such vertices is concentrated around its expectation.
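This concentration is visible even at small scales. The sketch below samples $G_{n,p}$ repeatedly and measures the spread of $X_d$; the parameters ($n = 300$, $c = 2$, $d = 2$) are a hypothetical small-scale choice, since the asymptotics in $\log n / \log\log n$ set in far too slowly to verify numerically, so this only shows the qualitative picture that the SD is small relative to the mean:

```python
import random
from statistics import mean, stdev

def count_degree_d(n, p, d, rng):
    """Sample G(n, p) and return X_d, the number of vertices of degree exactly d."""
    deg = [0] * n
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                deg[u] += 1
                deg[v] += 1
    return sum(1 for x in deg if x == d)

n, c, d = 300, 2.0, 2
rng = random.Random(1)
xs = [count_degree_d(n, c / n, d, rng) for _ in range(100)]
print(mean(xs), stdev(xs))  # SD noticeably smaller than the mean
```

In every trial $X_d > 0$, and the empirical standard deviation is a small fraction of the mean, which is the finite-$n$ shadow of $\operatorname{SD}[\mathbf X] = o(\mathbb E[\mathbf X])$.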