I had previously asked for help clarifying the use of Chebyshev's inequality in relation to a proof of the number of edges in the unique giant component $C_0$ in a uniform random graph. Thanks to @misha-lavrov for his insightful answer here:
Incorrect proof: Number of edges in the unique giant component in an uniform random graph
I have another problem in relation to proving the following theorem:
Let $c>1$. If $n$ is the number of vertices and $m = cn/2$ is the number of edges of a uniform random graph $G_{n,m}$, then with high probability (w.h.p.) $G_{n,m}$ consists of a unique giant component $C_0$ with asymptotically $(1-x/c)n$ vertices and $(1-(x/c)^2)m$ edges.
Here $0<x<1$ is the unique solution to $x\exp(-x)=c\exp(-c)$. This is from Frieze and Karonski's book (pages 34-38 here https://www.math.cmu.edu/~af1p/BOOK.pdf).
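For concreteness, the constant $x$ can be computed numerically. Here is a minimal sketch (the function name `solve_x` and the use of bisection are my own choices for illustration, not from the book):

```python
import math

def solve_x(c, tol=1e-12):
    """Find the unique 0 < x < 1 with x*exp(-x) = c*exp(-c), given c > 1.

    f(x) = x*exp(-x) is strictly increasing on (0, 1), with f(0) = 0 below
    the target and f(1) = e^{-1} above it (since t*exp(-t) is decreasing
    for t > 1), so plain bisection converges to the unique root.
    """
    target = c * math.exp(-c)
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid * math.exp(-mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

c = 2.0
x = solve_x(c)
print(f"x = {x:.4f}")                          # root of x e^{-x} = c e^{-c}
print(f"vertex fraction 1 - x/c   = {1 - x/c:.4f}")
print(f"edge fraction 1 - (x/c)^2 = {1 - (x/c)**2:.4f}")
```

For example, $c = 2$ gives $x \approx 0.4064$, so the giant component should contain roughly $79.7\%$ of the vertices and $95.9\%$ of the edges.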
I have managed to understand and fill out the details for the existence and uniqueness of the giant component $C_0$, as well as the number of vertices. The only part that is left is computing the number of edges.
The idea in Frieze and Karonski's proof is to first show that $(1-(x/c)^2)m$ is asymptotically the expected number of edges in $C_0$, and then to show that the number of edges $X$ concentrates in the following sense: for every $\epsilon>0$ $$ P((1-\epsilon)(1-(x/c)^2)m \leq X \leq (1+\epsilon)(1-(x/c)^2)m) = P((1-\epsilon)E[X]\leq X \leq (1+\epsilon)E[X]) \rightarrow 1 $$ as $n \to \infty$. This is equivalent to showing $$ P(\vert X - E[X] \vert > \epsilon E[X]) \rightarrow 0, $$ and to this we can apply Chebyshev's inequality, so we need to compute the variance of $X$. As $$ X=\sum_{j=1}^m 1_{\{e_j \in C_0\}}, $$ we have \begin{align*} E[X^2] &= \sum_{j=1}^m P(e_j \in C_0) + \sum_{j \neq i} P(e_j \in C_0, e_i \in C_0) \\ &= m (1-(x/c)^2)+\sum_{j \neq i} P(e_j \in C_0, e_i \in C_0). \end{align*}
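As a sanity check on these asymptotics (not part of the proof), one can simulate $G_{n,m}$ with a union-find structure and compare the observed vertex and edge fractions of the largest component with $1-x/c$ and $1-(x/c)^2$. Everything below (function name, parameters, the choice $c=2$) is my own illustration:

```python
import random
from collections import Counter

def giant_component_fractions(n, c, seed=0):
    """Sample G_{n,m} with m = c*n/2 distinct random edges and return the
    fraction of vertices and the fraction of edges in the largest component."""
    rng = random.Random(seed)
    m = int(c * n / 2)
    parent = list(range(n))

    def find(a):  # union-find root with path halving
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    edges = set()
    while len(edges) < m:
        u, v = rng.randrange(n), rng.randrange(n)
        if u != v and (min(u, v), max(u, v)) not in edges:
            edges.add((min(u, v), max(u, v)))
            parent[find(u)] = find(v)

    roots = Counter(find(v) for v in range(n))
    giant_root, giant_size = roots.most_common(1)[0]
    giant_edges = sum(1 for u, v in edges if find(u) == giant_root)
    return giant_size / n, giant_edges / m

# For c = 2, x ~ 0.4064, so the predictions are
# 1 - x/c ~ 0.797 (vertices) and 1 - (x/c)^2 ~ 0.959 (edges).
v_frac, e_frac = giant_component_fractions(n=20000, c=2.0, seed=1)
print(v_frac, e_frac)
```

With $n = 20000$ the random fluctuations are on the order of $1/\sqrt{n} \approx 1\%$, so both fractions should land close to the predicted values.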
Frieze and Karonski now claim that $$ P(e_j \in C_0, e_i \in C_0)=(1+o(1))P(e_i \in C_0)P(e_j \in C_0) $$ where $o(1)$ denotes a sequence converging to 0.
This is the part I need help with!
(Note: it would be fine to show $\leq$ instead of $=$.)
My first instinct was that writing $P(e_j \in C_0, e_i \in C_0)=P(e_j \in C_0 \mid e_i \in C_0)P(e_i \in C_0)$ brings us "halfway there", but I think it is hard to work with this conditional probability, so maybe this is not a viable approach.
If one follows the proof in Frieze and Karonski's book, the idea is to introduce a subgraph $G_2$, which is exactly the original graph but without the two edges $e_i, e_j$. We can then consider its unique giant component $C_2 \subseteq C_0$.
My attempt at the calculation is then the following (here $e_i \in C_0$ means that $e_i$ is an edge of the component, while $e_j \cap C_2 \neq \emptyset$ means that $e_j$ and $C_2$ share at least one vertex): \begin{align*} & P(e_i \in C_0, e_j \in C_0) \\[0.6em] &= P(\{e_j \in C_0, e_i \in C_0\} \cap (\{e_i \cap C_2 \neq \emptyset , e_j \cap C_2 \neq \emptyset\} \cup \{e_i \cap C_2 \neq \emptyset , e_j \cap C_2 \neq \emptyset\}^c ) ) \\[0.6em] & = P(e_j \in C_0, e_i \in C_0, e_i \cap C_2 \neq \emptyset , e_j \cap C_2 \neq \emptyset) \\[0.6em] & \quad+P(\{e_j \in C_0, e_i \in C_0\}\cap(\{e_i \cap C_2 = \emptyset\} \cup \{ e_j \cap C_2 = \emptyset\})) \\[0.6em] &\leq P(e_i \cap C_2 \neq \emptyset , e_j \cap C_2 \neq \emptyset) \\[0.6em] &\quad+ P(e_j \in C_0, e_i \in C_0, e_i \cap C_2 =\emptyset) + P(e_j \in C_0, e_i \in C_0, e_j \cap C_2 = \emptyset) \\[0.6em] &\leq P(e_i \cap C_2 \neq \emptyset , e_j \cap C_2 \neq \emptyset) \\[0.6em] &\quad+ P(e_i \cap (C_0\setminus C_2) \neq \emptyset) + P(e_j \cap (C_0\setminus C_2) \neq \emptyset) \\[0.6em] &\leq P(e_i \cap C_2 \neq \emptyset , e_j \cap C_2 \neq \emptyset) + 2 \underbrace{\frac{\binom{K \log n}{2}}{\binom{n}{2}}}_{\to\, 0 \text{ as } n \to \infty} \\[0.6em] & \approx P(e_i \cap C_2 \neq \emptyset , e_j \cap C_2 \neq \emptyset)\\[0.6em] & = P(e_i \cap C_2 \neq \emptyset , e_j \cap C_2 \neq \emptyset, e_j \cap e_i \neq \emptyset)+P(e_i \cap C_2 \neq \emptyset , e_j \cap C_2 \neq \emptyset, e_j \cap e_i = \emptyset)\\[0.6em] & = P(e_i \cap C_2 \neq \emptyset , e_j \cap C_2 \neq \emptyset, e_j \cap e_i \neq \emptyset) \\[0.6em] & \quad +P(e_i \cap C_2 \neq \emptyset, e_j \cap e_i = \emptyset \mid e_j \cap C_2 \neq \emptyset)P(e_j \cap C_2 \neq \emptyset) \end{align*} In the fifth step we used that, earlier in the proof, it was shown that all components other than the giant have order at most $K \log n$, where $K>0$ is some constant, so the probability that an edge lies entirely in $C_0 \setminus C_2$ is at most $\binom{K \log n}{2}/\binom{n}{2}$. But now I cannot get any further. Can anyone help me finish the computation?
Update: I have made some progress (?). I have shown that $$ P(e_i \cap C_2 \neq \emptyset , e_j \cap C_2 \neq \emptyset, e_j \cap e_i \neq \emptyset) \leq P(e_j \cap e_i \neq \emptyset) = \frac{2(n-2)}{\binom{n}{2} - 1} \to 0 $$ as $n \to \infty$. Furthermore, earlier in the proof it was shown that $P(e_j \notin C_0)=P(e_j \cap C_1 = \emptyset) \approx (x/c)^2$ (where $C_1$ denotes the giant component without just one of the edges, and $x$ and $c$ are constants). By the same reasoning (although I am not completely sure..?) one would get that $P(e_j \cap C_2 = \emptyset) \approx (x/c)^2$. Hence we have $$ P(e_i \in C_0, e_j \in C_0) = (o(1) + P(e_i \cap C_2 \neq \emptyset, e_j \cap e_i = \emptyset \mid e_j \cap C_2 \neq \emptyset) )\left(1-\left(\frac{x}{c}\right)^2\right). $$ Although this is not exactly what the book wants, it would suffice to show that $$ P(e_i \cap C_2 \neq \emptyset, e_j \cap e_i = \emptyset \mid e_j \cap C_2 \neq \emptyset) \approx P(e_i \cap C_2 \neq \emptyset). $$ Intuitively this makes sense: since we require the vertex sets of $e_i$ and $e_j$ to be disjoint, the conditioning shouldn't really matter, and asymptotically we have "independence". This is very hand-wavy, however, and I would like a better argument.
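The identity $P(e_j \cap e_i \neq \emptyset) = \frac{2(n-2)}{\binom{n}{2}-1}$ can be checked exactly by brute force for a small $n$; this is just an illustrative check of the counting, with names of my own choosing:

```python
from itertools import combinations
from fractions import Fraction

n = 10
edges = list(combinations(range(n), 2))   # all C(n, 2) possible edges
e_i = (0, 1)                              # by symmetry, any fixed edge works
others = [e for e in edges if e != e_i]
sharing = sum(1 for e in others if set(e) & set(e_i))

# exact fraction of the remaining edges that share a vertex with e_i
observed = Fraction(sharing, len(others))
predicted = Fraction(2 * (n - 2), n * (n - 1) // 2 - 1)
print(observed, predicted)  # both are 4/11 for n = 10
```

The count is: $n-2$ other edges through each endpoint of $e_i$, out of $\binom{n}{2}-1$ remaining edges, which indeed tends to $0$ as $n \to \infty$.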
Can someone help me finish the computation and/or improve it?
Here is one solution: Using $P(A)=P(A\cap B)+P(A \cap B^c)$: \begin{align*} & P(e_i \cap C_2 \neq \emptyset, e_j \cap e_i = \emptyset \mid e_j \cap C_2 \neq \emptyset) \leq P(e_i \cap C_2 \neq \emptyset, e_j \cap e_i = \emptyset \mid \vert e_j \cap C_2 \vert = 1) \\[1.5em] & = P( \vert e_i \cap C_2 \vert =1, e_j \cap e_i = \emptyset \mid \vert e_j \cap C_2 \vert = 1) + P( \vert e_i \cap C_2 \vert =2, e_j \cap e_i = \emptyset \mid \vert e_j \cap C_2 \vert = 1) \\[1.5em] & \approx \binom{2}{1} \frac{n(1-x/c)}{n}\cdot \frac{n-n(1-x/c)}{n}+ \frac{n^2(1-x/c)^2}{n^2} = 1-(x/c)^2. \end{align*}