The double Dixie cup problem (also called the generalized coupon collector problem) is a well-known problem: $n$ different coupons are sold one by one, uniformly at random, and a child wants to collect at least $m$ copies of each coupon (the coupon collector problem usually refers to the case $m=1$).
Newman and Shepp proved that the number $T_n$ of coupons needed in order to complete $m$ collections satisfies $$\mathbb E[T_n] = n \log(n) + (m-1) n \log(\log(n)) + C_m n + o(n).$$
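For what it's worth, the Newman--Shepp asymptotics can be sanity-checked by simulation. Below is a minimal Monte Carlo sketch (the values of $n$, $m$ and the trial count are arbitrary choices of mine, and since the error term is $o(n)$, the agreement at moderate $n$ is only rough):

```python
import math
import random

EULER_GAMMA = 0.5772156649015329

def draws_to_complete(n, m, rng):
    """Draw coupons uniformly at random from n types until every type
    has been seen at least m times; return the number of draws T_n."""
    counts = [0] * n
    incomplete = n  # number of types still below m copies
    draws = 0
    while incomplete > 0:
        c = rng.randrange(n)
        counts[c] += 1
        if counts[c] == m:
            incomplete -= 1
        draws += 1
    return draws

rng = random.Random(0)
n, m, trials = 200, 2, 300  # arbitrary parameters
empirical_mean = sum(draws_to_complete(n, m, rng) for _ in range(trials)) / trials

# Newman--Shepp prediction, using the claimed C_m = gamma - log((m-1)!)
prediction = (n * math.log(n)
              + (m - 1) * n * math.log(math.log(n))
              + (EULER_GAMMA - math.log(math.factorial(m - 1))) * n)
print(f"empirical mean: {empirical_mean:.1f}, asymptotic prediction: {prediction:.1f}")
```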
Erdős and Rényi then proved that $$\mathbb{P}(T_n - n \log(n) - (m-1) n \log(\log(n)) \le nx) \to_{n \to +\infty} \exp(-e^{-x}/(m-1)!),$$ i.e. the convergence in distribution of $X_n = T_n/n - \log(n) - (m-1) \log(\log(n))$ towards a (shifted) Gumbel distribution. They then claim that $C_m = \gamma - \log((m-1)!)$, and their argument is that ``it is easy to show that in the present case the limit of the mean value is equal to the mean value of the limiting distribution''.
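Likewise, the distributional limit can be eyeballed numerically: the sketch below estimates $\mathbb{P}(X_n \le x)$ at a single point and compares it with $\exp(-e^{-x}/(m-1)!)$ (again, $n$, $m$, the evaluation point and the trial count are arbitrary choices of mine, and the convergence is slow, so only rough agreement should be expected):

```python
import math
import random

def draws_to_complete(n, m, rng):
    """Number of uniform draws from n coupon types until each is seen m times."""
    counts = [0] * n
    incomplete = n  # number of types still below m copies
    draws = 0
    while incomplete > 0:
        c = rng.randrange(n)
        counts[c] += 1
        if counts[c] == m:
            incomplete -= 1
        draws += 1
    return draws

rng = random.Random(1)
n, m, trials, x = 500, 2, 300, 0.0  # arbitrary parameters
# Samples of X_n = T_n/n - log(n) - (m-1) log(log(n))
samples = [draws_to_complete(n, m, rng) / n - math.log(n)
           - (m - 1) * math.log(math.log(n)) for _ in range(trials)]
empirical_cdf = sum(s <= x for s in samples) / trials
limit_cdf = math.exp(-math.exp(-x) / math.factorial(m - 1))
print(f"P(X_n <= {x}): {empirical_cdf:.3f}  (Gumbel limit: {limit_cdf:.3f})")
```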
Of course, convergence in distribution does not imply convergence of the means (one typically needs uniform integrability). Here it seems easy to control the right tails of the $X_n$'s uniformly, but I cannot see how to proceed for the left tails.
Any idea regarding what Erdős and Rényi had in mind?