I am reading the book "Random Graphs" by Béla Bollobás. In Theorem 4.8, quoted below, he seems to treat certain random variables as if they were independent of each other. Could someone clarify why these random variables are independent? The random variables count copies of nonisomorphic strictly balanced subgraphs with the same average degree in a random graph $G$. Does being strictly balanced imply independence, or is something else going on?
Theorem 4.8 Let $H_1, \dots, H_m$ be strictly balanced graphs. Suppose $H_i$ has $k_i$ vertices, $l_i \ge 2$ edges, and its automorphism group has $a_i$ elements. Denote by $X_i = X_i(G)$ the number of subgraphs of $G$ isomorphic to $H_i$. If the $H_i$'s have the same average degree, say $2l_1/k_1 = \dots = 2l_m/k_m = 2l/k$, $c$ is a positive constant and $p \sim cn^{-k/l}$, then $$(X_1, \dots, X_m) \xrightarrow{d} (P_{\lambda_1}, \dots, P_{\lambda_m}),$$ where $\lambda_i = c^{l_i}/a_i$, that is $$P(X_1=s_1,\dots,X_m=s_m) \to \prod_{i=1}^m e^{-\lambda_i} \lambda_i^{s_i}/s_i!$$ for all $s_1 \ge 0, \dots, s_m \ge 0$.
Say that $X_1, X_2, \dots, X_m$ count the subgraphs isomorphic to $H_1, H_2, \dots, H_m$, which are strictly balanced graphs with the same average degree $d$.
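For a concrete instance of the theorem's parameters (my own illustrative choice, not one from the book): take $H_1 = K_3$ (the triangle: $k_1 = 3$, $l_1 = 3$, $a_1 = 6$) and $H_2 = C_4$ (the 4-cycle: $k_2 = 4$, $l_2 = 4$, $a_2 = 8$). Both are strictly balanced with average degree $d = 2$, so the theorem applies with $p \sim c/n$ and gives $$(X_1, X_2) \xrightarrow{d} (P_{\lambda_1}, P_{\lambda_2}), \qquad \lambda_1 = \frac{c^3}{6}, \quad \lambda_2 = \frac{c^4}{8},$$ with the two coordinates asymptotically independent in the sense described below.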
The claim is not that $X_1, X_2, \dots, X_m$ are independent: they're not. The claim is that they're asymptotically independent: we have (in the random graph $G_{n,p}$ with $p = \Theta(n^{-2/d})$) $$ \lim_{n \to \infty} \frac{\Pr\left[\bigwedge_{i=1}^m (X_i{=}x_i)\right]}{\prod_{i=1}^m \Pr[X_i{=}x_i]} = 1. $$ In other words, if you're looking for the probability of an event of the form $\bigwedge_{i=1}^m (X_i{=}x_i)$ (that there are $x_1$ copies of $H_1$, $x_2$ copies of $H_2$, and so on in the random graph), and you assume that the numbers of copies of the different graphs are independent, you will be off from the true probability, but the relative error will approach $0$ as the number of vertices approaches infinity.
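As a quick sanity check of the Poisson limit in the triangle case, here is a small simulation sketch (all the specific values $n$, $c$, and the trial count are arbitrary choices of mine for illustration; for $H = K_3$ we have $k/l = 1$, so $p = c/n$, and $\lambda = c^3/6$):

```python
import random
from itertools import combinations

def triangle_count(n, p, rng):
    """Sample G(n,p) and count its triangles by brute force."""
    adj = [[False] * n for _ in range(n)]
    for i, j in combinations(range(n), 2):
        if rng.random() < p:
            adj[i][j] = adj[j][i] = True
    return sum(1 for a, b, c in combinations(range(n), 3)
               if adj[a][b] and adj[b][c] and adj[a][c])

rng = random.Random(0)
c = 2.0            # arbitrary positive constant; p = c/n
n = 40             # modest n so the brute-force count stays fast
trials = 400
counts = [triangle_count(n, c / n, rng) for _ in range(trials)]

lam = c**3 / 6     # lambda = c^l / a with l = 3 edges, a = 6 automorphisms
emp_mean = sum(counts) / trials
print(f"empirical mean {emp_mean:.2f} vs Poisson mean {lam:.2f}")
```

For finite $n$ the empirical mean sits slightly below $\lambda$ (here $\binom{n}{3} p^3 \approx 1.24$ against $\lambda = 4/3$), and the gap shrinks as $n$ grows, which is exactly the asymptotic nature of the theorem.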
Intuitively, this independence is achieved because, when we look at all the ways that $x_i$ copies of $H_i$ for $i=1,\dots,m$ can appear in $G_{n,p}$, the cases in which some of those copies overlap don't make a significant contribution. So we are approximately correct if we only consider configurations in which none of the copies overlap, and check whether each such configuration appears in the graph - and in such configurations, whether one copy actually appears in $G_{n,p}$ is independent of whether the other copies do.
This sort of analysis can be attempted for any kind of graph; the question is how the overlapping configurations compare to the non-overlapping ones:
In a strictly balanced graph, each proper subgraph has strictly smaller average degree, which means that the benefit of an overlapping configuration (you only need to count the edges in the overlap between two graph copies once) is outweighed by the disadvantage (you only get to pick the vertices in the overlap between two graph copies once). This allows us to ignore overlapping configurations and still get an asymptotically correct result.
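This trade-off can be made quantitative with the standard first-moment sketch. Suppose two copies of a strictly balanced $H$ (with $k$ vertices and $l$ edges) overlap in a proper subgraph $F$ with $k_F$ vertices and $l_F \ge 1$ edges. With $p \sim c n^{-k/l}$, the expected number of such overlapping pairs is of order $$ n^{2k - k_F}\, p^{2l - l_F} \asymp c^{2l - l_F}\, n^{(2k - k_F) - \frac{k}{l}(2l - l_F)} = c^{2l - l_F}\, n^{\frac{k}{l} l_F - k_F}. $$ Strict balancedness says every proper subgraph with an edge satisfies $l_F/k_F < l/k$, i.e. $k_F > \frac{k}{l} l_F$, so the exponent is negative and the expected number of overlapping pairs tends to $0$ as $n \to \infty$.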