Suppose $X_1, \ldots, X_n$ is an iid sample of size $n$ from one population, and $Y_1, \ldots, Y_m$ is an iid sample of size $m$ from another population, with $n \neq m.$ Assume that $\mathbb{E}(X_i) = \mu_X$ and $\mathbb{E}(Y_i) = \mu_Y,$ with finite variances $\operatorname{Var}(X_i) = \sigma_X^2$ and $\operatorname{Var}(Y_i) = \sigma_Y^2.$
Suppose we estimate $\theta = \mu_X - \mu_Y$ with $\hat{\theta} = \overline{X} - \overline{Y}.$ I want to show that $\hat{\theta}$ is asymptotically normal, i.e., $$ \sqrt{\tilde{n}}(\hat{\theta}-\theta) \to N(\tilde{\mu},\tilde{\sigma}^2), $$ where the arrow denotes convergence in distribution (weak convergence) and $\tilde{n}, \tilde{\mu}, \tilde{\sigma}^2$ are to be determined.
My first question: what is the sample size of the estimator $\hat{\theta}?$ I'm assuming it's $m + n.$
Attempt:
I want to show $$\underbrace{\sqrt{m+n}\Big((\overline{X}-\overline{Y}) - (\mu_X -\mu_Y)\Big)}_{(\star)} \to N(\tilde{\mu}, \tilde{\sigma}^2).$$
\begin{align} (\star) &= \sqrt{m+n}\left\lbrack \left(\frac{1}{n}\sum^n_{i=1}X_i - \mu_X\right) - \left(\frac{1}{m} \sum^m_{j=1} Y_j - \mu_Y \right) \right\rbrack\\ &= \frac{\sqrt{m+n}}{n} \sum^n_{i=1}(X_i - \mu_X) - \frac{\sqrt{m+n}}{m} \sum^m_{j=1} (Y_j - \mu_Y). \end{align} I want to transform this into something that looks like $$ \frac{1}{\sqrt{n}}\sum^n_{i=1}(X_i - \mu_X) - \frac{1}{\sqrt{m}}\sum^m_{j=1} (Y_j - \mu_Y) $$ and then use the Central Limit Theorem, but I'm not sure how to go about this. In a sort of hand-wavey way, my intuition says that if $m,n \to \infty,$ $\dfrac{\sqrt{m+n}}{n} \approx \dfrac{1}{\sqrt{n}}.$ I don't need to be super rigorous here, but I'd like to know if this is in the right direction or if I'm wrong/missing something.
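The heuristic can be made exact by factoring the normalization: $$ \frac{\sqrt{m+n}}{n} = \sqrt{1+\frac{m}{n}}\cdot\frac{1}{\sqrt{n}}, $$ so $\dfrac{\sqrt{m+n}}{n}\approx\dfrac{1}{\sqrt{n}}$ holds only if $m/n \to 0.$ In general one needs the ratio $m/n$ to converge, which motivates working with general normalizing sequences, as follows.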
For each $k\in \mathbb{N}$, we estimate $\mu_X$ and $\mu_Y$ using two independent samples $\{X_i\}_{i=1}^{n_k}$ and $\{Y_i\}_{i=1}^{m_k}$ with $n_k,m_k\to\infty$ as $k\to\infty$, and set $\hat{\theta}_k = \bar{X}_{n_k} - \bar{Y}_{m_k}$. Consider a positive sequence $\{a_k\}$ such that $$ \frac{a_k}{\sqrt{n_k}}\to a_1<\infty\quad\text{and}\quad \frac{a_k}{\sqrt{m_k}}\to a_2<\infty. $$ Then \begin{align} a_k(\hat{\theta}_k-\theta)&=\frac{a_k}{\sqrt{n_k}} \sqrt{n_k}(\bar{X}_{n_k}-\mu_X)-\frac{a_k}{\sqrt{m_k}} \sqrt{m_k}(\bar{Y}_{m_k}-\mu_Y), \end{align} and by the Central Limit Theorem, $\sqrt{n_k}(\bar{X}_{n_k}-\mu_X)\xrightarrow{d}N_1$ and $\sqrt{m_k}(\bar{Y}_{m_k}-\mu_Y)\xrightarrow{d}N_2$, where $N_1\sim \mathcal{N}(0,\sigma_X^2)$, $N_2\sim \mathcal{N}(0,\sigma_Y^2)$, and $N_1$ and $N_2$ are independent (since the samples are). By Slutsky's theorem, $$ \bbox[cornsilk,5px] { a_k(\hat{\theta}_k-\theta)\xrightarrow{d}\mathcal{N}\!\left(0,a_1^2\sigma_X^2+a_2^2\sigma_Y^2\right). } $$
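As a numerical sanity check (not part of the proof), here is a quick Monte Carlo sketch of the boxed limit. The sequences, populations, and seed are illustrative assumptions: $n_k = k$, $m_k = 2k$, and $a_k = \sqrt{n_k + m_k}$, so that $a_1^2 = 3$ and $a_2^2 = 3/2$ exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative choice of sequences (an assumption, not from the argument above):
# n_k = k, m_k = 2k, a_k = sqrt(n_k + m_k), so a_1^2 = 3 and a_2^2 = 3/2.
k = 500
n, m = k, 2 * k
a = np.sqrt(n + m)

# Non-normal populations, to exercise the CLT rather than exact normality:
# X ~ Exponential(mean 1, variance 1), Y ~ Exponential(mean 2, variance 4).
mu_x, var_x = 1.0, 1.0
mu_y, var_y = 2.0, 4.0
theta = mu_x - mu_y

reps = 4000
X = rng.exponential(scale=1.0, size=(reps, n))
Y = rng.exponential(scale=2.0, size=(reps, m))
stat = a * ((X.mean(axis=1) - Y.mean(axis=1)) - theta)

# Predicted limit: N(0, a_1^2 * var_x + a_2^2 * var_y) = N(0, 3*1 + 1.5*4) = N(0, 9)
print(round(stat.mean(), 3), round(stat.var(), 2))  # mean near 0, variance near 9
```

The empirical variance of the normalized statistic should land close to $a_1^2\sigma_X^2 + a_2^2\sigma_Y^2 = 9$ even though neither population is normal.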