What is the difference between G(n,m) and G(n,p)?
In $G(n,m)$ one picks $m$ of the $\binom{n}{2}$ possible edges uniformly at random. In $G(n,p)$ each of the $\binom{n}{2}$ possible edges is present independently with probability $p$.
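A minimal sketch of the two samplers (the function names `gnm` and `gnp` are mine, not standard):

```python
import random
from itertools import combinations

def gnm(n, m, rng):
    # G(n, m): choose m of the C(n, 2) possible edges uniformly at random
    return rng.sample(list(combinations(range(n), 2)), m)

def gnp(n, p, rng):
    # G(n, p): keep each possible edge independently with probability p
    return [e for e in combinations(range(n), 2) if rng.random() < p]

rng = random.Random(0)
g1 = gnm(6, 5, rng)        # always exactly 5 edges
g2 = gnp(6, 1 / 3, rng)    # a random number of edges, Binomial(15, 1/3)
```

The fixed edge count in `gnm` is exactly what creates the dependence discussed below.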
As Durrett puts it in his book, Random Graph Dynamics, in the first model there is a small and annoying amount of dependence caused by picking a fixed number of edges, so the second one is preferred.
Could anyone explain what dependence he is referring to?
In $G(n,m)$, each edge $e$ is chosen with marginal probability $p=m\binom{n}{2}^{-1}$. But, conditionally on some other edges being chosen, the probability that $e$ is chosen decreases. An extreme case: conditioning on the event that $m$ other edges are chosen, the conditional probability that $e$ is chosen drops to $0$.
Another take: in $G(n,m)$, for any two edges $e\ne f$, $P[e\in G,f\in G]\lt P[e\in G]\cdot P[f\in G]=p^2$.
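This negative correlation can be checked by brute force for a small instance, enumerating all graphs with exactly $m$ edges (a sketch, with $n=5$, $m=4$ chosen only for illustration):

```python
from itertools import combinations
from fractions import Fraction

n, m = 5, 4
edges = list(combinations(range(n), 2))   # the C(5, 2) = 10 possible edges
N = len(edges)
e, f = edges[0], edges[1]

graphs = list(combinations(edges, m))     # all C(10, 4) = 210 graphs with m edges
total = len(graphs)

p_e = Fraction(sum(e in g for g in graphs), total)              # P[e in G]
p_ef = Fraction(sum(e in g and f in g for g in graphs), total)  # P[e, f in G]

assert p_e == Fraction(m, N)   # marginal probability is m / C(n, 2)
assert p_ef < p_e * p_e        # joint probability strictly below p^2
```

Here $P[e\in G]=2/5$ while $P[e\in G,f\in G]=2/15 < (2/5)^2$, confirming the strict inequality.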
In $G(n,p)$, by contrast, for every collection $F$ of $k$ edges, $P[F\subset G]=p^k$, hence the edges are independent. On the other hand, for every $p$ in $(0,1)$, the number of edges of $G$ is random and can take every value between $0$ and $\binom{n}{2}$.
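A quick sketch of that last point: the edge count of $G(n,p)$ is $\mathrm{Binomial}\left(\binom{n}{2},p\right)$, concentrated around $p\binom{n}{2}$ but genuinely random (parameters $n=30$, $p=1/2$ are my own choice for illustration):

```python
import random

n, p = 30, 0.5
N = n * (n - 1) // 2                 # C(30, 2) = 435 possible edges
rng = random.Random(1)

# each sample flips N independent p-coins, one per possible edge
counts = [sum(rng.random() < p for _ in range(N)) for _ in range(2000)]

mean = sum(counts) / len(counts)
assert abs(mean - p * N) < 5         # mean is close to p * C(n, 2) = 217.5
assert min(counts) != max(counts)    # but the count itself fluctuates
```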