Isoperimetric constant on random graph

I have the following problem.

Show that there is a constant $c = c(p) > 0$ such that almost all graphs in $\mathcal{G}_{n,p}$ satisfy the following property: for each subset $X \subseteq V(G)$ with cardinality $|X| \leq n/2$, $$ e(X, V\backslash X) \geq c |X|, $$

where $e(X, V\backslash X)$ denotes the number of edges between $X$ and $V\backslash X$.

I found a "standard" solution, using combinatorial arguments (the constant $c(p)=p$ satisfies the condition), but I have the feeling that a "more elegant" proof should exist.

My intuition is as follows: the problem is equivalent to proving, with $i(G)$ the isoperimetric constant, that $$ \forall p, \ \exists c(p) > 0 \ \text{ s.t. } \ \mathbb{P}[\, i(G) > c \,] \rightarrow 1 \text{ as } n \rightarrow \infty. $$

And for any graph $G$, we know a lower bound for $i(G)$: $$ \frac{\mu_2}{2} \leq i(G), $$ where $\mu_2$ is the second-smallest eigenvalue of the Laplacian matrix of $G$. Now, if I can find some bound for $\mu_2$, I should be able to do something.

I know that for fixed $p$ the probability that $G$ is connected tends to 1, hence $\mu_2 > 0$. But can we find a similar argument giving $\mu_2 > c$ for some constant $c$, presumably in terms of connectivity? Thanks for the help!
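Not a proof, but the Cheeger-type bound $\mu_2/2 \leq i(G)$ is easy to sanity-check numerically. Here is a minimal sketch using numpy; the values $n = 200$, $p = 0.5$ and the random subsets $X$ are purely illustrative. (The inequality $e(X, V\backslash X)/|X| \geq \mu_2/2$ holds deterministically for every graph and every $|X| \leq n/2$, so the check should never fail.)

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 0.5  # illustrative values

# Sample the adjacency matrix of G(n, p): independent edges with prob. p.
upper = np.triu(rng.random((n, n)) < p, 1)
A = (upper + upper.T).astype(float)

# Combinatorial Laplacian L = D - A.
L = np.diag(A.sum(axis=1)) - A

# mu_2: second-smallest Laplacian eigenvalue (algebraic connectivity).
mu2 = np.sort(np.linalg.eigvalsh(L))[1]

# Spot-check mu_2 / 2 <= e(X, V \ X) / |X| on a few random subsets.
for _ in range(5):
    k = int(rng.integers(1, n // 2 + 1))
    mask = np.zeros(n, dtype=bool)
    mask[rng.choice(n, size=k, replace=False)] = True
    cut = A[mask][:, ~mask].sum()  # edges leaving X
    assert cut >= mu2 / 2 * k
```

Of course this only checks a handful of subsets, not the minimum over all of them, but it illustrates how far the spectral bound is from tight for dense random graphs.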

Best answer:

From this paper we can estimate the normalized Laplacian spectrum of $\mathcal G_{n,p}$ from the spectrum of an "average-case Laplacian" $$\bar{L} = I - \bar{D}^{-1/2} \bar{A}\bar{D}^{-1/2},$$ where $\bar{D} = (n-1)pI$ is the expected value of $D$, and $\bar{A} = p(J-I)$ is the expected value of $A$. (As usual, $J$ is the all-ones matrix.) Since $\bar{D}$ commutes with everything, we can rewrite this as $I - \bar{D}^{-1}\bar{A}$, which simplifies to $$\frac{n}{n-1}I - \frac{1}{n-1}J.$$ The eigenvalues of $J$ are $n, 0, 0, \dots, 0$, and so the eigenvalues of $\bar{L}$ are $0, \frac{n}{n-1}, \frac{n}{n-1}, \dots, \frac{n}{n-1}$.
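This simplification (note that $p$ cancels entirely) can be verified numerically in a few lines; a quick check with numpy, using small illustrative values of $n$ and $p$:

```python
import numpy as np

n, p = 6, 0.3  # illustrative; p cancels out of L_bar
I = np.eye(n)
J = np.ones((n, n))

# Average-case Laplacian: L_bar = I - D_bar^{-1/2} A_bar D_bar^{-1/2},
# with D_bar = (n-1) p I and A_bar = p (J - I).
d_inv_sqrt = 1.0 / np.sqrt((n - 1) * p)
Lbar = I - (d_inv_sqrt * I) @ (p * (J - I)) @ (d_inv_sqrt * I)

# Simplified form: n/(n-1) I - 1/(n-1) J.
assert np.allclose(Lbar, n / (n - 1) * I - J / (n - 1))

# Eigenvalues: 0 once (all-ones eigenvector), n/(n-1) with multiplicity n-1.
eig = np.sort(np.linalg.eigvalsh(Lbar))
assert np.isclose(eig[0], 0)
assert np.allclose(eig[1:], n / (n - 1))
```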

This looks like a rather stupid estimate that doesn't depend on $p$, but the dependence on $p$ is hidden in the error bound. For every constant $\epsilon > 0$, we have $$|\lambda_i(L) - \lambda_i(\bar{L})| \le \mathcal O\left(\sqrt{\frac{\log(n/\epsilon)}{(n-1)p}}\right)$$ provided $(n-1)p > k(\epsilon)\log n$. In other words, as long as $p$ does not decay too quickly with $n$, we have $\lambda_2(L) \ge 1 - o(1)$ with high probability.
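As a quick empirical illustration (my addition, not part of the cited paper), one can sample a single graph from $\mathcal G_{n,p}$ and see that $\lambda_2$ of its normalized Laplacian is indeed close to $1$; the values $n = 500$, $p = 0.5$ below are just an example:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 500, 0.5  # illustrative; dense enough that all degrees are positive

# Sample G(n, p) and build the normalized Laplacian
# L = I - D^{-1/2} A D^{-1/2}.
upper = np.triu(rng.random((n, n)) < p, 1)
A = (upper + upper.T).astype(float)
d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
L = np.eye(n) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

# lam2 should be 1 - O(sqrt(log n / (np))), i.e. close to 1 for these values.
lam2 = np.sort(np.linalg.eigvalsh(L))[1]
```

Here `d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]` is just a broadcast form of $D^{-1/2} A D^{-1/2}$ that avoids building the diagonal matrices explicitly.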

I guess Theorem 4 in the paper provides a slightly-tighter answer more easily, but then this answer would have been too short. (Also I didn't find Theorem 4 until I wrote all of the above.)