Show that a random graph with edge probability $p=\frac{10\log(n)}{n}$ almost certainly has no isolated vertices...


I have this homework question: "Show that a random graph with edge probability $p=\frac{10\log(n)}{n}$ almost certainly has no isolated vertices, while a random graph with edge probability $p=\frac{\log(n)}{10n}$ almost certainly has at least one isolated vertex."

For the first part of the problem, it seems trivial for small $n$, since $p=\frac{10\log(n)}{n}\ge1,$ so I guess we might as well assume $n$ is large enough to get $p<1$. Now, I would like to say that the probability that any particular vertex is isolated is $$\left[1-\frac{10\log(n)}{n}\right]^{n-1},$$ since there is a $1-\frac{10\log(n)}{n}$ chance that a specific vertex fails to connect to another, and these $n-1$ events are independent. Is this logic correct? Furthermore, the expected number of isolated vertices should be $$n\left[1-\frac{10\log(n)}{n}\right]^{n-1},$$ as we are just summing, over the $n$ vertices, the probability that each vertex is isolated, right? If so, then the expected value goes to zero as $n\to\infty$, which shows that the probability of having at least one isolated vertex goes to $0$, by Markov's inequality.

On the other hand, for $p=\frac{\log(n)}{10n}$, we get that the expected value $$n\left[1-\frac{\log(n)}{10n}\right]^{n-1}$$ blows up as $n$ gets big. Is this enough to say that there is almost certainly an isolated vertex in the graph?
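As a sanity check on both regimes, here is a small simulation sketch (plain Python; the choices $n=300$ and 20 trials are arbitrary, not part of the problem):

```python
import math
import random

def count_isolated(n, p, rng):
    """Sample G(n, p) once and count vertices with no incident edge."""
    has_edge = [False] * n
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                has_edge[u] = has_edge[v] = True
    return sum(1 for e in has_edge if not e)

rng = random.Random(0)
n = 300
p_dense = 10 * math.log(n) / n      # first regime: expect no isolated vertices
p_sparse = math.log(n) / (10 * n)   # second regime: expect many isolated vertices

dense_counts = [count_isolated(n, p_dense, rng) for _ in range(20)]
sparse_counts = [count_isolated(n, p_sparse, rng) for _ in range(20)]
```

Every sample in the first regime should come back with zero isolated vertices, while every sample in the second regime should contain a sizeable number of them.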

Lastly, does anyone know a good beginner's resource for learning how to use the probabilistic method for graph theory exercises? I am struggling to gain intuition on solving these. Most of the info I find online tends to assume the reader is not an idiot, which I am. Thanks in advance!

Best answer:

The events "vertex $v$ is isolated" are not independent: if vertex $v_1$ is isolated, then in particular it has no edge to $v_2$, making it a tiny bit more likely that vertex $v_2$ will be isolated.

That being said, you don't need independence to compute the expectation: it's still true that the expected number of isolated vertices is $n(1-p)^{n-1}$, even though the events are not independent.
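To see linearity of expectation at work numerically, here is a sketch (plain Python; $n = 200$ and 200 trials are arbitrary choices) comparing the empirical mean against $n(1-p)^{n-1}$ in the sparse regime:

```python
import math
import random

def isolated_count(n, p, rng):
    """Sample G(n, p) once and count isolated vertices."""
    deg = [0] * n
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                deg[u] += 1
                deg[v] += 1
    return sum(1 for d in deg if d == 0)

rng = random.Random(1)
n, trials = 200, 200
p = math.log(n) / (10 * n)

empirical = sum(isolated_count(n, p, rng) for _ in range(trials)) / trials
exact = n * (1 - p) ** (n - 1)   # linearity of expectation; no independence needed
```

The two numbers agree closely even though the indicator events are dependent.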

You're correct that this expected value goes to $0$ in the first case, which shows that almost certainly no vertex is isolated...

...and you're also correct that this expected value diverges as $n \to \infty$ in the second case, but this does not imply that there are almost certainly isolated vertices. If we compute more precisely, the expected value is $(1+o(1))n^{0.9}$, but this sort of thing can happen for many reasons:

  • It could (and in fact does) reflect a situation in which there are $n^{0.9}$ isolated vertices almost all of the time.
  • But you haven't ruled out something like "with probability $n^{-0.1}$, there are $(1+o(1))n$ isolated vertices; the rest of the time, there are none".

That last one is an extreme example, but the point is that a large expected number of isolated vertices does not imply that isolated vertices almost always exist.

We deal with this in a way that's typical for calculations in random graphs:

  • To show that a random variable $X$ is almost always $0$, it's enough to show $\mathbb E[X] \to 0$ (that's your first case).
  • To show that a random variable $X$ is almost always large in the case where $\mathbb E[X]\to \infty$, show that $\text{Var}[X] = o(\mathbb E[X]^2)$ and use Chebyshev's inequality in the form $$\Pr[X=0] \le \frac{\text{Var}[X]}{\mathbb E[X]^2}.$$
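The second-moment recipe can be checked numerically. The sketch below evaluates $\mathbb E[X]$ and $\text{Var}[X]$ exactly (using the fact that two distinct vertices are both isolated iff $2n-3$ specific edges are absent) and watches the Chebyshev bound $\text{Var}[X]/\mathbb E[X]^2$ shrink as $n$ grows:

```python
import math

def chebyshev_bound(n):
    """Exact Chebyshev upper bound on Pr[X = 0] for the number of
    isolated vertices X in G(n, p) with p = log(n) / (10 n)."""
    p = math.log(n) / (10 * n)
    q = 1 - p
    ex = n * q ** (n - 1)
    # Two distinct vertices are both isolated iff 2n - 3 potential
    # edges are all absent (the edge between them is shared).
    ex2 = ex + n * (n - 1) * q ** (2 * n - 3)
    var = ex2 - ex ** 2
    return var / ex ** 2

bounds = [chebyshev_bound(n) for n in (10**3, 10**5, 10**7)]
```

The bound decreases steadily toward $0$, which is exactly the statement that isolated vertices almost certainly exist in this regime.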

In the second case of your problem, you should get $\text{Var}[X] \sim n^{0.9}$, which is much smaller than $\mathbb E[X]^2 \sim n^{1.8}$, so everything goes through.
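For reference, here is a sketch of that variance computation (writing $q = 1-p$ and $X_v$ for the indicator that vertex $v$ is isolated, so $X = \sum_v X_v$): $$\mathbb E[X^2] = \sum_v \mathbb E[X_v] + \sum_{u\ne v}\mathbb E[X_u X_v] = nq^{n-1} + n(n-1)q^{2n-3},$$ because two distinct vertices are both isolated exactly when $2n-3$ potential edges are all absent (the edge between them is shared, so it is counted once, not twice). Subtracting $\mathbb E[X]^2 = n^2q^{2n-2}$ and simplifying gives $$\text{Var}[X] = nq^{n-1} + nq^{2n-3}\left(np - 1\right),$$ and with $p = \frac{\log(n)}{10n}$ the first term is $(1+o(1))\,n^{0.9}$ while the second is only $O(n^{0.8}\log n)$.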