Sequence of Erdős-Rényi random graphs convergent with probability 1


Definitions

Let $H,G$ be finite simple graphs with $|H|\leq |G|$. The density of $H$ in $G$, denoted $d(H,G)$, is the probability that a uniformly random injective map from $V(H)$ to $V(G)$ embeds $H$ as an induced subgraph of $G$; that is, $$d(H,G)=\frac{\operatorname{ind}(H,G)}{|G|(|G|-1)\cdots (|G|-|H|+1)},$$ where $\operatorname{ind}(H,G)$ is the number of embeddings of $H$ into $G$ as an induced subgraph. (Up to the constant factor $|H|!/|\operatorname{Aut}(H)|$, this is the probability that a random $|H|$-subset of $V(G)$ induces a copy of $H$, so the two normalizations converge or diverge together.)
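Since the density is just a normalized count, it can be checked by brute force for small graphs. The sketch below is only an illustration (the helper name `induced_density` is mine, not from the paper); it enumerates all injective maps directly:

```python
from itertools import permutations

def induced_density(H_edges, h, G_edges, g):
    """d(H, G): fraction of injective maps {0..h-1} -> {0..g-1} that embed
    H as an induced subgraph of G (edges given as pairs of vertex indices)."""
    H = {frozenset(e) for e in H_edges}
    G = {frozenset(e) for e in G_edges}
    maps = list(permutations(range(g), h))  # ordered h-tuples = injective maps
    hit = sum(
        # induced: every pair of H-vertices must match edge/non-edge status in G
        all((frozenset((phi[u], phi[v])) in G) == (frozenset((u, v)) in H)
            for u in range(h) for v in range(u + 1, h))
        for phi in maps)
    return hit / len(maps)

# an edge K_2 inside the path 0-1-2: 4 of the 6 injective maps are embeddings
print(induced_density([(0, 1)], 2, [(0, 1), (1, 2)], 3))  # 0.666...
```

For example, the edge embeds into the path via the maps onto $\{0,1\}$ and $\{1,2\}$, each in two orientations, giving density $4/6$.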

A sequence of graphs $(G_n)_{n=1}^\infty$ is convergent if for every finite simple graph $H$, the sequence $d(H,G_n)$ converges as $n\rightarrow\infty$.

Question

In the following article, http://research.microsoft.com/en-us/um/people/borgs/papers/teststoc.pdf, in Example 5 (Random graphs), the authors assert that the sequence of Erdős-Rényi random graphs $(G(n,p))_{n=1}^\infty$ is convergent with probability $1$ (with $p\in[0,1]$ fixed). They claim that "it is not hard (using high concentration results)...". I have a decent background in graph theory but am less comfortable with probability theory, so I have not succeeded in proving the assertion.

My approach

What I have tried is the following. Let $H$ be a fixed graph and write $G_n=G(n,p)$ for some fixed $p\in[0,1]$. We want to prove that the sequence $d(H,G_n)$ converges with probability $1$. The expected value is straightforward: each of the $n(n-1)\cdots(n-|H|+1)$ injective maps $V(H)\to V(G_n)$ is an induced embedding with probability $p^{|E(H)|}(1-p)^{\binom{|H|}{2}-|E(H)|}$, so by linearity of expectation $$\mathbb{E}\,d(H,G_n)=p^{|E(H)|}(1-p)^{\binom{|H|}{2}-|E(H)|},$$ which is independent of $n$. Now, the idea is to show that the random variable $d(H,G_n)$ is highly concentrated around its expected value (as the authors suggest) -- so that we can use the Borel-Cantelli lemma. In particular, if we show that $$\operatorname{Pr}\bigl[|d(H,G_n)-\mathbb{E}\,d(H,G_n)|>t_n\bigr]\leq q_n$$ for some sequences $(t_n)_{n=1}^\infty$ and $(q_n)_{n=1}^\infty$ such that $t_n\rightarrow 0$ and $\sum_{n=1}^\infty q_n<\infty$, then we can apply the Borel-Cantelli lemma to the events $E_n=\{|d(H,G_n)-\mathbb{E}\,d(H,G_n)|>t_n\}$ to conclude that with probability $1$, only finitely many $E_n$ occur. Since $t_n\to 0$ and $\mathbb{E}\,d(H,G_n)$ does not depend on $n$, this should directly imply that the sequence $d(H,G_n)$ converges (to its expected value) almost surely, right?
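As a sanity check on this expectation under the normalization $d(H,G)=\operatorname{ind}(H,G)/\bigl(|G|(|G|-1)\cdots(|G|-|H|+1)\bigr)$, one can average the density of a small $H$ over all $2^6$ graphs on $4$ vertices, weighted by their $G(4,p)$ probabilities. This is an illustrative sketch only; all names are mine:

```python
from itertools import combinations, permutations

def induced_density(H_edges, h, G_edges, g):
    """Fraction of injective maps {0..h-1} -> {0..g-1} embedding H induced."""
    H = {frozenset(e) for e in H_edges}
    G = {frozenset(e) for e in G_edges}
    maps = list(permutations(range(g), h))
    hit = sum(all((frozenset((phi[u], phi[v])) in G) == (frozenset((u, v)) in H)
                  for u in range(h) for v in range(u + 1, h))
              for phi in maps)
    return hit / len(maps)

n, p = 4, 0.3
H_edges, h = [(0, 1), (1, 2)], 3         # path on 3 vertices, |E(H)| = 2
pairs = list(combinations(range(n), 2))  # the 6 potential edges of K_4
expected = 0.0
for mask in range(1 << len(pairs)):      # enumerate all 64 graphs on 4 vertices
    E = [pairs[i] for i in range(len(pairs)) if mask >> i & 1]
    weight = p**len(E) * (1 - p)**(len(pairs) - len(E))  # P[G(4,p) = this graph]
    expected += weight * induced_density(H_edges, h, E, n)
print(abs(expected - p**2 * (1 - p)) < 1e-9)  # matches p^{|E(H)|}(1-p)^{C(h,2)-|E(H)|}
```

The exact average agrees with $p^{2}(1-p)^{1}$, since the image of each injective map spans $\binom{3}{2}=3$ independent potential edges.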

The missing piece is the proof that the random variable $d(H,G_n)$ is highly concentrated around its expected value. I know about Chernoff bounds and Chebyshev's inequality, but it seems that some more involved bound is needed here.

Answer

Fix $p \in (0, 1)$ and a graph $H$ with $\nu \geq 2$ vertices and $\mu$ edges.

Let $X$ be the number of induced copies of $H$ in $G(n,p)$. Then $$ X = \sum_{\sigma} 1_\sigma, $$ where $\sigma$ runs over all subgraphs of $K_n$ isomorphic to $H$ and $1_\sigma$ is the indicator of the event that every edge of $\sigma$ is present in $G(n,p)$ and every other pair of vertices of $\sigma$ is not. Note that $E[1_\sigma] = p^\mu (1-p)^{{\nu \choose 2}-\mu}$. Since $K_n$ contains exactly ${n \choose \nu}\,\nu!/|\operatorname{Aut}(H)|$ such subgraphs, $$ E[X] = {n \choose \nu} \frac{\nu!}{|\operatorname{Aut}(H)|}\, p^\mu (1-p)^{{\nu \choose 2}-\mu}, $$ which is $\Theta\left(n^\nu\right)$.
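The count of subgraphs of $K_n$ isomorphic to $H$, namely $\binom{n}{\nu}\,\nu!/|\operatorname{Aut}(H)|$, can be verified directly for small cases. A sketch (the helper `copies_in_Kn` is hypothetical, not from the answer):

```python
from itertools import permutations
from math import comb

def copies_in_Kn(H_edges, h, n):
    """Number of distinct subgraphs of K_n isomorphic to H (as edge sets)."""
    images = set()
    for phi in permutations(range(n), h):        # every injective placement of H
        images.add(frozenset(frozenset((phi[u], phi[v])) for u, v in H_edges))
    return len(images)                           # dedup collapses |Aut(H)| placements

n = 6
# path P_3: |Aut| = 2, so C(n,3) * 3!/2 = 3*C(n,3) copies
print(copies_in_Kn([(0, 1), (1, 2)], 3, n) == 3 * comb(n, 3))
# triangle K_3: |Aut| = 6, so C(n,3) * 3!/6 = C(n,3) copies
print(copies_in_Kn([(0, 1), (1, 2), (0, 2)], 3, n) == comb(n, 3))
```

Each unordered copy is hit by exactly $|\operatorname{Aut}(H)|$ of the $\nu!$ placements on its vertex set, which is where the $\nu!/|\operatorname{Aut}(H)|$ factor comes from.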

Note that \begin{align*} \text{var}[X] &= \sum_{\sigma, \tau} \left(E[1_\sigma 1_\tau] - E[1_\sigma] E[1_\tau] \right) \\& \leq \sum_{\sigma} E[1_\sigma] + \sum_{\sigma \neq \tau} \left(E[1_\sigma 1_\tau] - E[1_\sigma] E[1_\tau] \right). \end{align*} If $\sigma$ and $\tau$ share at most one vertex, the corresponding covariance term vanishes, because $1_\sigma$ and $1_\tau$ are then determined by disjoint, hence independent, sets of potential edges. Thus \begin{align*} \text{var}[X] &\leq \sum_{\sigma} E[1_\sigma] + \sum_{\sigma \neq \tau, \atop |V(\sigma)\cap V(\tau)|\geq 2} \left(E[1_\sigma 1_\tau] - E[1_\sigma] E[1_\tau] \right) \\ & \leq E[X] + \sum_{\sigma \neq \tau, \atop |V(\sigma)\cap V(\tau)|\geq 2} E[1_\sigma 1_\tau]. \end{align*} The number of pairs $(\sigma,\tau)$ sharing at least two vertices is $O(n^{\nu+(\nu-2)}) = O(n^{2\nu-2})$ (the union of their vertex sets has at most $2\nu-2$ vertices), and each summand is at most $1$, so this last sum is $O(n^{2\nu-2})$. Thus $$ \text{var}[X] = O(n^\nu) + O(n^{2\nu-2}) = O(n^{2\nu-2}), $$ using $\nu \geq 2$. For any sequence $\epsilon_n > 0$, Chebyshev's inequality says that $$ P(|X-E[X]| > \epsilon_n E[X]) \leq \frac{\text{var}[X]}{\epsilon_n^2 E[X]^2} = O\left(\frac{1}{\epsilon_n^2 n^2}\right), $$ since $E[X] = \Theta(n^\nu)$.
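To see this concentration numerically, here is a seeded simulation (all helper names are mine) for $H$ the induced path on three vertices, whose limiting density is $p^2(1-p)$. Rather than looping over all ordered triples, it counts induced paths through each middle vertex:

```python
import random

def sample_gnp(n, p, rng):
    """Adjacency sets of a sample of the Erdos-Renyi graph G(n, p)."""
    nbr = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                nbr[i].add(j)
                nbr[j].add(i)
    return nbr

def d_induced_P3(nbr, n):
    """d(P_3, G): fraction of ordered triples (a,b,c) inducing a path a-b-c."""
    hits = 0
    for b in range(n):
        nb = list(nbr[b])
        deg = len(nb)
        # edges among the neighbours of b would violate 'induced'
        e_inside = sum(1 for i in range(deg) for j in range(i + 1, deg)
                       if nb[j] in nbr[nb[i]])
        hits += deg * (deg - 1) - 2 * e_inside   # ordered non-adjacent pairs
    return hits / (n * (n - 1) * (n - 2))

rng = random.Random(0)
n, p = 200, 0.3
val = d_induced_P3(sample_gnp(n, p, rng), n)
print(val)  # should be close to the limit p^2(1-p) = 0.063 (random sample)
```

Already at $n=200$ a single sample typically lands within about $0.01$ of the limit, consistent with a standard deviation of order $1/n$.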

Choosing $\epsilon_n = n^{-1/4}$ gives $$ \sum_{n=1}^{\infty} \frac{1}{\epsilon_n^2 n^2} = \sum_{n=1}^{\infty} n^{-3/2} < \infty, $$ so you can use the Borel-Cantelli lemma as you wanted: almost surely $|X - E[X]| \leq n^{-1/4} E[X]$ for all sufficiently large $n$, and since $d(H,G_n)$ is $X$ times a deterministic normalization, $d(H,G_n)$ converges almost surely. (Taking $\epsilon_n \to 0$ is what makes $t_n \to 0$ in your notation.)
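Numerically, with $\epsilon_n = n^{-1/4}$ the Borel-Cantelli series is $\sum_n n^{-3/2}$, whose partial sums level off toward $\zeta(3/2)\approx 2.612$:

```python
# Partial sum of sum_n 1/(eps_n^2 n^2) with eps_n = n^(-1/4), i.e. sum_n n^(-3/2).
s = sum(k ** -1.5 for k in range(1, 10**6 + 1))
print(round(s, 3))  # ~ 2.610; the tail beyond 10^6 is about 2/sqrt(10^6) = 0.002
```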