Equation (37) on p. 226 of "Birthday paradox, coupon collectors, caching algorithms and self-organizing search" by Flajolet, Gardy and Thimonier reads: $$ E\{C_m\} = \int_0^{\infty}(1-\Theta_m(t))\, dt, \quad \mathrm{where} \quad \Theta_m(t)=\prod_{i=1}^m (1-e^{-p_i t}). $$ In fact the authors omit the subscript $m$ from $\Theta_m$, but as stackexchange user kimchi lover points out, $\Theta$ does indeed depend on $m$. Here $m$ is a positive integer and, for all $i\in\{1, \ldots, m\}$, $p_i=\frac{1}{H_m i}$, where $H_m$ is the $m$-th harmonic number $\sum_{k=1}^m \frac{1}{k}$. Thus $\{p_1, \ldots, p_m\}$ is a Zipf distribution.
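For small $m$ the integral formula can be checked numerically. The sketch below is my own (all function names are mine, not the paper's); it evaluates $E\{C_m\}$ by truncated trapezoidal integration and cross-checks it against the exact inclusion-exclusion expansion $E\{C_m\}=\sum_{\emptyset\neq S\subseteq\{1,\dots,m\}}(-1)^{|S|+1}\big/\sum_{i\in S}p_i$, obtained by expanding the product $\Theta_m(t)$ and integrating term by term.

```python
import math
from itertools import combinations

def zipf_probs(m):
    """The paper's Zipf weights: p_i = 1/(H_m * i), i = 1..m (they sum to 1)."""
    H_m = sum(1.0 / k for k in range(1, m + 1))
    return [1.0 / (H_m * i) for i in range(1, m + 1)]

def expected_cost_integral(m, T=2000.0, n=200000):
    """E{C_m} = int_0^inf (1 - Theta_m(t)) dt, truncated at T, trapezoid rule.
    The tail beyond T is bounded by sum_i exp(-p_i T)/p_i, negligible for small m."""
    p = zipf_probs(m)
    def integrand(t):
        theta = 1.0
        for pi in p:
            theta *= 1.0 - math.exp(-pi * t)
        return 1.0 - theta
    h = T / n
    total = 0.5 * (integrand(0.0) + integrand(T))
    total += sum(integrand(k * h) for k in range(1, n))
    return h * total

def expected_cost_exact(m):
    """Same quantity via inclusion-exclusion over nonempty subsets of the weights."""
    p = zipf_probs(m)
    total = 0.0
    for r in range(1, m + 1):
        for S in combinations(p, r):
            total += (-1) ** (r + 1) / sum(S)
    return total
```

The inclusion-exclusion form is exact but costs $2^m - 1$ terms, so it only serves as a cross-check for small $m$.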
The authors write (my subscript $m$)
"It can be proved that $\Theta_m(t)$ has a sharp transition from $0$ to $1$ for $t$ around $m \log^2 m$. More precisely, quantity \begin{equation} F_m(x)=-\log \Theta_m(xm \log m H_m) \end{equation} is such that for fixed $x$ as $m \to \infty$, we have: $F_m(x) \to \infty$ if $x<1$ and $F_m(x) \to 0$ if $x \geq 1$."
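The claimed transition is easy to observe numerically. The following sketch is mine (not from the paper); note that in $p_i t = xm(\log m)H_m/(H_m i)$ the harmonic number cancels, so $F_m(x)$ can be computed without $H_m$ at all:

```python
import math

def F(m, x):
    """F_m(x) = -log Theta_m(x * m * log(m) * H_m) for p_i = 1/(H_m * i).
    Since p_i * t = x * m * log(m) / i, H_m cancels inside the exponentials;
    log1p keeps the sum of logs numerically stable when the terms are tiny."""
    a = x * m * math.log(m)  # = t / H_m, so p_i * t = a / i
    return -sum(math.log1p(-math.exp(-a / i)) for i in range(1, m + 1))
```

Evaluating $F$ at, say, $m = 100$ and $m = 10^5$ shows it growing with $m$ when $x < 1$ and shrinking when $x > 1$, consistent with the quoted statement.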
I can prove the latter fact (if I replace $x\geq1$ by $x>1$, which I think is enough for the intended purpose - see below), and stackexchange user kimchi lover has kindly provided a proof of the former fact, here:
https://math.stackexchange.com/a/3577487/159855
I assume that by $xm \log m H_m$ the authors mean $xm(\log m)H_m$, and this is confirmed by kimchi lover's proof. As a corollary, the authors state that \begin{equation} E(C_m) \sim m \log^2 m, \tag{1} \end{equation} where by $\log^2 m$ I assume they mean $(\log m)^2$. Intuitively this makes sense: the sharp transition says the integrand $1-\Theta_m(t)$ stays near $1$ until $t \approx m(\log m)H_m$ and then drops to near $0$, so the integral is roughly the area of a rectangle of width $m(\log m)H_m$ and height $1$; and $H_m \sim \log m$. But let's try to prove it. Substituting $t = xm(\log m)H_m$, \begin{align} \frac{1}{m(\log m)H_m} \int_0^{\infty}(1-\Theta_m(t))\, dt &= \int_0^{\infty}(1-\Theta_m(xm(\log m)H_m))\, dx \\ &= \int_0^{\infty}(1-e^{-F_m(x)})\, dx. \end{align} Let $\epsilon \in (0,1)$. Since the integrand lies between $0$ and $1$ and is decreasing in $x$, we have \begin{align} 1 \geq \int_0^1 (1-e^{-F_m(x)})\, dx & \geq \int_0^{1-\epsilon}(1-e^{-F_m(x)})\, dx \\ & \geq \int_0^{1-\epsilon}(1-e^{-F_m(1-\epsilon)})\, dx \\ &= (1-\epsilon)(1-e^{-F_m(1-\epsilon)}) \\ &> (1-\epsilon)^2 \end{align} for sufficiently large $m$, since $F_m(1-\epsilon) \to \infty$. Therefore \begin{equation} (1-\epsilon)^2 + \int_1^{\infty}(1-e^{-F_m(x)})\, dx \leq \frac{E(C_m)}{m(\log m)H_m} \leq 1 + \int_1^{\infty}(1-e^{-F_m(x)})\, dx \end{equation} provided $m$ is large enough. If $\int_1^{\infty}(1-e^{-F_m(x)})\, dx$ were $o(m)$ as $m \to \infty$, then since $\epsilon>0$ is arbitrary, (1) would follow. Can we use the fact that, for $x>1$, $F_m(x) \to 0$ as $m \to \infty$ to prove that $\int_1^{\infty}(1-e^{-F_m(x)})\, dx$ is indeed $o(m)$ as $m \to \infty$?
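A quick numerical probe of this tail integral is possible before (or instead of) an analytic argument. The sketch below is my own: it approximates $\int_1^{\infty}(1-e^{-F_m(x)})\,dx$ with Simpson's rule, and the truncation at $x = 6$ is an ad hoc choice justified by the rapid (roughly $m^{1-x}$) decay of $F_m$. This is numerical evidence only, not a proof.

```python
import math

def F(m, x):
    """F_m(x) = -log Theta_m(x * m * log(m) * H_m); the harmonic number
    cancels because p_i * t = x * m * log(m) / i."""
    a = x * m * math.log(m)
    return -sum(math.log1p(-math.exp(-a / i)) for i in range(1, m + 1))

def tail_integral(m, upper=6.0, n=200):
    """Composite Simpson's rule for int_1^upper (1 - exp(-F_m(x))) dx.
    n must be even; the integrand beyond x = upper is negligible for the
    moderate m used here."""
    h = (upper - 1.0) / n
    g = lambda x: 1.0 - math.exp(-F(m, x))
    s = g(1.0) + g(upper)
    s += 4.0 * sum(g(1.0 + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2.0 * sum(g(1.0 + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3.0
```

Comparing, say, $m = 100$ against $m = 10^4$ shows the tail integral is small and shrinking, consistent with it being $o(1)$.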
EDIT:
I should have asked for the stronger condition that $\int_1^{\infty}(1-e^{-F_m(x)})\, dx$ is $o(1)$ (not merely $o(m)$), which does indeed hold, as kimchi lover has shown:
It suffices to show that, as $m\to\infty$, the ratio $$\frac{\int_{m(\log m)^2}^\infty(1-\Theta_m(t))\,dt}{m(\log m)^2}\to 0.\tag2$$ This convergence follows from the inclusion-exclusion or Bonferroni estimate $$1-\Theta_m(t)\le\sum_{i=1}^m e^{-p_it},$$ which can be explicitly integrated to yield the estimate $$ \int_{m(\log m)^2}^\infty(1-\Theta_m(t))\,dt\le \sum_{i=1}^m \frac {e^{-p_im(\log m)^2 }}{p_i} = \sum_{i=1}^m i H_m \exp\left(-\frac {m(\log m)^2}{iH_m} \right)\le m^2 H_m \exp\left(-\frac {(\log m)^2}{H_m} \right).\tag3 $$ Under the fiction that $H_m=\log m$ it is easy to check (2), since the exponential on the right side of (3) evaluates to $1/m$. A correct argument, based on $H_m=\log m +\gamma +o(1)$, gives the same result: the exponential factor in (3) is $\sim c/m$ for some non-zero constant $c$, and on division by $m(\log m)^2$, the ratio in (2) tends to $0$.
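This chain of bounds can be sanity-checked numerically. The sketch below (my own naming) computes the middle sum and the final bound $m^2 H_m \exp\!\big(-(\log m)^2/H_m\big)$, and the ratio of that bound to $m(\log m)^2$:

```python
import math

def harmonic(m):
    """H_m = sum_{k=1}^m 1/k."""
    return sum(1.0 / k for k in range(1, m + 1))

def tail_bound_sum(m):
    """sum_{i=1}^m i * H_m * exp(-m (log m)^2 / (i * H_m))."""
    H = harmonic(m)
    c = m * math.log(m) ** 2 / H
    return sum(i * H * math.exp(-c / i) for i in range(1, m + 1))

def tail_bound_ratio(m):
    """m^2 H_m exp(-(log m)^2 / H_m), divided by m (log m)^2.
    Each summand above is increasing in i, so the sum is at most m times
    its last term, which is exactly the numerator here."""
    H = harmonic(m)
    L2 = math.log(m) ** 2
    return m * H * math.exp(-L2 / H) / L2
```

Evaluating `tail_bound_ratio` at increasing $m$ shows the slow but steady decay (of order $1/\log m$) that the argument predicts.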
This Flajolet et al. paper is a strenuous read!