Consider two sequences of random variables $(X_n)$ and $(Y_n)$ which converge in distribution to $X$ and $Y$ respectively, where $X$ and $Y$ are independent, but each pair $(X_n, Y_n)$ is not necessarily independent.
I am trying to understand what happens to the dependence of $X_n$ and $Y_n$ in the limit: does the covariance have to converge to zero in order for $X$ and $Y$ to be independent? If so, could anyone give a quick sketch of a proof, or a counterexample if not?
No, in general the covariance does not converge to $0$.
Just consider the probability space $([-1,1],\mathcal{B}([-1,1]),\mathbb{P})$, where $\mathbb{P} := \tfrac{1}{2}\lambda$ is the normalized Lebesgue measure on $[-1,1]$, and set
$$X_n(\omega) := Y_n(\omega) := -n 1_{[-1/n,0)}(\omega) + n 1_{(0,1/n]}(\omega), \qquad \omega \in [-1,1].$$
Since $X_n \to X := 0$ almost surely and $Y_n \to Y := 0$ almost surely, we have in particular $X_n \to 0$ and $Y_n \to 0$ in distribution, and the constant random variables $X = 0$ and $Y = 0$ are trivially independent. On the other hand, $\mathbb{E}(X_n) = 0$ by symmetry, so
$$\text{cov}(X_n,Y_n) = \text{var}(X_n) = \mathbb{E}(X_n^2) = n^2 \cdot \mathbb{P}(|X_n| = n) = n^2 \cdot \frac{1}{n} = n \to \infty$$
as $n \to \infty$. In particular, the covariance not only fails to converge to $0$; it is not even bounded.
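If it helps to see this numerically, here is a quick Monte Carlo sanity check of the counterexample (a sketch using NumPy; the helper name `sample_Xn` and the sample sizes are my own choices). Since $X_n = Y_n$ and $\mathbb{E}(X_n) = 0$, the sample covariance of $(X_n, Y_n)$ is just the sample variance of $X_n$, which should grow like $n$:

```python
import numpy as np

def sample_Xn(n, size, rng):
    """Draw samples of X_n(omega) for omega uniform on [-1, 1].

    X_n equals -n on [-1/n, 0), +n on (0, 1/n], and 0 elsewhere.
    """
    omega = rng.uniform(-1.0, 1.0, size)
    neg = (omega >= -1.0 / n) & (omega < 0.0)
    pos = (omega > 0.0) & (omega <= 1.0 / n)
    return np.where(neg, -float(n), 0.0) + np.where(pos, float(n), 0.0)

rng = np.random.default_rng(0)
for n in [1, 10, 100]:
    x = sample_Xn(n, 1_000_000, rng)
    # Y_n = X_n, so cov(X_n, Y_n) = var(X_n); the exact value is n.
    print(n, x.var())
```

The printed sample variances should be close to $1$, $10$, and $100$ respectively, matching $\text{cov}(X_n, Y_n) = n$.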