Consider two independent samples $X_1,\ldots,X_n$ and $Y_1,\ldots,Y_m$ from two distribution functions $F$ and $G$. Let $F_n$ and $G_m$ be the corresponding empirical distribution functions. I am trying to understand the Donsker Theorem for the joint distribution of the empirical processes $(F_n-F)$ and $(G_m-G)$.
I am reading the book: van der Vaart, A. W. (1998). *Asymptotic Statistics* (Cambridge Series in Statistical and Probabilistic Mathematics). Cambridge: Cambridge University Press.
On p. 299 the book says:
Let $N=n+m$ and assume that $n/N\to\lambda\in(0,1)$. By Donsker's theorem and Slutsky's lemma: $\sqrt{N}(F_n-F,G_m-G)\leadsto\Big(\frac{\mathbb{G}_F}{\sqrt{\lambda}},\frac{\mathbb{G}_G}{\sqrt{1-\lambda}}\Big)$, where $\mathbb{G}_F$ and $\mathbb{G}_G$ are independent Brownian bridges.
Can someone please give me an intuitive explanation for this result and maybe recommend some references for this topic?
Thanks
I assume you know the one-sample version of Donsker's theorem: for $X_1,\ldots,X_n$ i.i.d. with empirical distribution function $F_n$, the process $\sqrt{n}(F_n-F)$ converges in distribution to an $F$-Brownian bridge $\mathbb{G}_F$.
An intermediate case between that and the set-up you describe is $X_1,\ldots,X_n$ and $Y_1,\ldots,Y_n$, i.e. $m=n$ in your notation. Because the two samples are independent, so are $F_n$ and $G_n$: nothing is going on between them jointly that is not already present in them marginally. Each process converges to its own Brownian bridge, and the two limiting bridges are independent.
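To see the $m=n$ case numerically, here is a small simulation sketch. The choice $F=G=\mathrm{Uniform}(0,1)$ and the evaluation point $t=0.3$ are just convenient hypothetical choices; it checks that each process at $t$ has the Brownian-bridge variance $F(t)(1-F(t))$ and that the two processes are uncorrelated.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
t = 0.3                    # evaluate both processes at a single fixed point
reps = 10000               # number of Monte Carlo replications

# Empirical CDFs at t: each is a mean of n indicator variables.
# F = G = Uniform(0,1), so F(t) = G(t) = t.
Fn_t = (rng.uniform(size=(reps, n)) <= t).mean(axis=1)
Gn_t = (rng.uniform(size=(reps, n)) <= t).mean(axis=1)

U = np.sqrt(n) * (Fn_t - t)    # sqrt(n)(F_n - F)(t)
V = np.sqrt(n) * (Gn_t - t)    # sqrt(n)(G_n - G)(t)

# Each marginal variance should match the bridge variance t(1-t) = 0.21,
# and the correlation between the two processes should be near 0.
print(U.var(), V.var(), np.corrcoef(U, V)[0, 1])
```

The near-zero correlation is the finite-sample face of the independence of the two limiting bridges.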
Slightly more general is the case where $m$ and $n$ are unequal but of comparable magnitude; that is what $\lambda$ captures. If, say, $m=2n$, you'd expect the same kind of result as in the $m=n$ case, except that the common normalization $\sqrt{N}$ is not quite the right scale for either process: writing $\sqrt{N}(F_n-F)=\sqrt{N/n}\,\sqrt{n}(F_n-F)$, each marginal Brownian bridge picks up a constant factor, and those constants are exactly the $1/\sqrt{\lambda}$ and $1/\sqrt{1-\lambda}$ in the display.
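The scaling can also be checked by simulation. This sketch uses the convention $\lambda = n/N$ with $m=2n$ (so $\lambda = 1/3$) and the hypothetical choice $F=G=\mathrm{Uniform}(0,1)$; it verifies that $\sqrt{N}(F_n-F)(t)$ has variance $t(1-t)/\lambda$ and $\sqrt{N}(G_m-G)(t)$ has variance $t(1-t)/(1-\lambda)$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 400, 800            # m = 2n, so lambda = n/N = 1/3
N = n + m
lam = n / N
t = 0.5                    # fixed evaluation point; F = G = Uniform(0,1)
reps = 10000

# Empirical CDFs at t for each sample.
Fn_t = (rng.uniform(size=(reps, n)) <= t).mean(axis=1)
Gm_t = (rng.uniform(size=(reps, m)) <= t).mean(axis=1)

ZF = np.sqrt(N) * (Fn_t - t)   # sqrt(N)(F_n - F)(t)
ZG = np.sqrt(N) * (Gm_t - t)   # sqrt(N)(G_m - G)(t)

# Predicted limit variances: t(1-t)/lambda = 0.75 for the X-process
# and t(1-t)/(1-lambda) = 0.375 for the Y-process.
print(ZF.var(), t * (1 - t) / lam)
print(ZG.var(), t * (1 - t) / (1 - lam))
```

The smaller sample gets the larger inflation factor: it is further from the common scale $\sqrt{N}$.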
Now if $m$ and $n$ are not of comparable magnitude (say $m=n^2$, for instance) then things simplify: the larger sample is in effect infinitely large, its empirical distribution function is essentially exact at the relevant scale, and the fluctuations in any two-sample statistic come mostly from the smaller sample. But this is beyond the scope of the passage you asked about.
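A quick sketch of this degenerate regime, again with the hypothetical choice $F=G=\mathrm{Uniform}(0,1)$: with $m=n^2$, the variance of $\sqrt{n}(F_n-G_m)(t)$ is $t(1-t)(1+n/m)$, which is almost exactly the one-sample value $t(1-t)$ because the large sample contributes next to nothing.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 100, 100**2         # m = n^2: the Y-sample is effectively "infinite"
t = 0.5                    # F = G = Uniform(0,1), so F(t) = G(t) = t
reps = 20000

# The empirical CDFs at t are scaled binomial counts.
Fn_t = rng.binomial(n, t, size=reps) / n
Gm_t = rng.binomial(m, t, size=reps) / m

# Normalize by the SMALLER sample size: the natural scale here is sqrt(n).
D = np.sqrt(n) * (Fn_t - Gm_t)

# Variance is t(1-t)(1 + n/m) ~= t(1-t) = 0.25: the small sample dominates.
print(D.var(), t * (1 - t))
```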