For $n \in \mathbb{N}$, let $(X_1, \dots, X_n)$ and $(Y_1, \dots, Y_n)$ be two independent i.i.d. samples from the same distribution. I write $X_{k:n}$ for the $k$-th order statistic out of a sample of size $n$. I am trying to prove that for all $d > 0$: $$ S_n = \frac{1}{n^d} \sum_{k=1}^n (X_{k:n} - Y_{k:n})^2 \underset{n \rightarrow +\infty}{\overset{\mathbb{P}}{\to}} 0 $$ which seems to me to be true, at least under reasonable regularity assumptions.
I know this is true for samples $(U_1, \dots, U_n)$ and $(V_1, \dots, V_n)$ drawn from the uniform distribution over $[0,1]$. Indeed, I calculated that: $$ \mathbb{E}\left[\sum_{k=1}^n (U_{k:n} - V_{k:n})^2\right] = \frac{n}{3(n+1)} $$ and $$ \text{Var}\left[\sum_{k=1}^n (U_{k:n} - V_{k:n})^2\right] = \frac{n \left(4 n^2+44 n+45\right)}{45 (n+1)^2 (n+2)} $$ (The proofs simply involve computing the relevant integrals using the explicit density of $U_{k:n}$ and $V_{k:n}$.) Then I apply Chebyshev's inequality (i.e. Markov's inequality applied to $S_n^2$). Letting $\varepsilon > 0$: \begin{align} \mathbb{P}\{|S_n|>\varepsilon\} &\leq \frac{\mathbb{E}[S_n^2]}{\varepsilon^2} \\ &= \frac{\text{Var}[n^dS_n]+\left(\mathbb{E}[n^dS_n]\right)^2}{n^{2d}\varepsilon^2}\\ &= \frac{1}{n^{2d}\varepsilon^2} \cdot \frac{n (n+5)}{5 (n+1) (n+2)}\\ &\underset{n \rightarrow +\infty}{\to} 0 \end{align} since $d > 0$.
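As a quick sanity check on the closed form for the mean, here is a small Monte Carlo sketch (assuming NumPy; the sample size $n = 50$ and the number of replications are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 50, 20_000

# Two independent uniform samples per replication, sorted to get order statistics.
U = np.sort(rng.random((reps, n)), axis=1)
V = np.sort(rng.random((reps, n)), axis=1)

# Monte Carlo estimate of E[sum_k (U_{k:n} - V_{k:n})^2] vs. the closed form n/(3(n+1)).
mc_mean = ((U - V) ** 2).sum(axis=1).mean()
exact = n / (3 * (n + 1))
print(mc_mean, exact)
```

The two printed numbers agree to within Monte Carlo error.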
I ran simulations and they suggest that the result holds for other distributions, including distributions with unbounded support (although finite variance seems to be required). I thought I could move from the uniform to the general case by assuming a smooth quantile function $Q$ for $X$ and $Y$ and using the fact that $X=Q(U)$ and $Y=Q(V)$. After a Taylor expansion (with Lagrange remainder) of $X_{k:n}=Q(U_{k:n})$ and $Y_{k:n}=Q(V_{k:n})$ around $p_k = k/(n+1)$, I get that $$ (X_{k:n} - Y_{k:n})^2$$ is equal to $$\left[Q'(p_k)(U_{k:n}-V_{k:n}) + \frac{1}{2}Q''(p_k^U)(U_{k:n}-p_k)^2 - \frac{1}{2}Q''(p_k^V)(V_{k:n}-p_k)^2\right]^2 $$ where $p_k^U$ (resp. $p_k^V$) lies between $p_k$ and $U_{k:n}$ (resp. $V_{k:n}$). But I see no good way of dealing with the remainder terms, or with the coefficient $Q'(p_k)^2$, which blows up near $k=1$ and $k=n$ when the distribution has unbounded support. This is especially problematic because the terms must be controlled uniformly in $k$ in order to bound the sum.
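For reference, the simulations I mention can be reproduced with a script along these lines (a sketch assuming NumPy, with $d=1$ and a standard normal distribution as an example of unbounded support with finite variance):

```python
import numpy as np

rng = np.random.default_rng(1)
d, reps = 1.0, 2_000

def mean_S_n(n, sampler):
    """Monte Carlo estimate of E[S_n] for the distribution drawn by `sampler`."""
    X = np.sort(sampler(size=(reps, n)), axis=1)
    Y = np.sort(sampler(size=(reps, n)), axis=1)
    return float((((X - Y) ** 2).sum(axis=1) / n ** d).mean())

# Standard normal: unbounded support, finite variance.
vals = [mean_S_n(n, rng.standard_normal) for n in (10, 100, 1000)]
print(vals)  # shrinks toward 0 as n grows
```

Swapping in other samplers with finite variance (e.g. `rng.exponential`) shows the same decay, while heavy-tailed choices do not.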
How would you prove convergence in the general (i.e. non-uniform) case?