I'm not sure how relevant the first few parts are, but I'll post them just in case...
$(X_i,Y_i), i=1,\dots,n$ are independent pairs, where $X_i$ has an exponential distribution $\mathcal{E}(\lambda_i)$ with density $p(x,\lambda_i) = \lambda_i e^{-\lambda_i x}, x>0$, and $Y_i$ is independent of $X_i$ with the exponential distribution $\mathcal{E}(\theta\lambda_i)$, $\theta>0$.
In the first part, I showed that the maximum likelihood estimates of $(\lambda_1,\dots,\lambda_n,\theta)$ are $$\hat{\lambda}_i = \frac{2}{X_i+\hat{\theta}Y_i}, i=1,\dots,n$$ and $\hat{\theta}$, which uniquely solves $g(\theta)=0$, where $$g(\theta) = \frac{1}{n} \sum\left(\frac{X_i}{X_i+\theta Y_i} - \frac{1}{2}\right)$$
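(Not part of the problem, but as a sanity check: since $g$ is strictly decreasing in $\theta$, the equation $g(\theta)=0$ can be solved by bisection. A minimal Python sketch; the rate values, seed, and function names here are my own arbitrary choices.)

```python
import numpy as np

rng = np.random.default_rng(0)  # illustrative data; parameters are arbitrary

n, theta_true = 500, 2.0
lam = rng.uniform(0.5, 3.0, size=n)            # arbitrary rates lambda_i
X = rng.exponential(1.0 / lam)                 # X_i ~ E(lambda_i); numpy takes scale = 1/rate
Y = rng.exponential(1.0 / (theta_true * lam))  # Y_i ~ E(theta * lambda_i)

def g(theta, X, Y):
    """Estimating equation g(theta) = (1/n) * sum( X_i/(X_i + theta Y_i) - 1/2 )."""
    return np.mean(X / (X + theta * Y) - 0.5)

def theta_hat(X, Y, lo=1e-8, hi=1e8, tol=1e-10):
    """Bisection: g is strictly decreasing in theta, so the root is unique."""
    while hi - lo > tol * max(1.0, lo):
        mid = 0.5 * (lo + hi)
        if g(mid, X, Y) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

th = theta_hat(X, Y)
lam_hat = 2.0 / (X + th * Y)  # plug-in MLEs for the lambda_i
```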
A few more parts of the problem followed, in which I found the Fisher information bound for $\theta$ and showed that $U_i = X_i/(X_i + \theta Y_i)$ has the uniform distribution $\mathcal{U}([0,1])$...
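(A quick simulation, not part of the proof, illustrating the uniformity of the $U_i$: their sample mean and variance should come out near $\frac12$ and $\frac{1}{12}$. All parameter choices below are my own and arbitrary.)

```python
import numpy as np

rng = np.random.default_rng(1)  # arbitrary seed and parameters

n, theta = 100_000, 2.0
lam = rng.uniform(0.5, 3.0, size=n)      # arbitrary rates lambda_i
X = rng.exponential(1.0 / lam)           # X_i ~ E(lambda_i)
Y = rng.exponential(1.0 / (theta * lam)) # Y_i ~ E(theta * lambda_i)

U = X / (X + theta * Y)
print(U.mean(), U.var())  # should be close to 1/2 and 1/12
```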
Now, suppose that $\hat{\theta} \xrightarrow{p} \theta$. We want to show that $$\sqrt{n}\left( \hat{\theta} - \theta \right) \xrightarrow{d} \mathcal{N}(0,3\theta^2)$$ by using the Taylor expansion $g(\hat{\theta}) = g(\theta) + g'(\theta)\left( \hat{\theta} - \theta \right) + o_p\left(\hat{\theta} - \theta\right)$.
I don't really know how to approach this last part... My gut says that the delta method will be useful, and that the CLT might come into play. The Fisher (Cramér-Rao) bound says that $\text{var}(\hat{\theta}) \geq \frac{\theta^2}{n}$, but I don't think this will be helpful in any way.
I'm not sure what the etiquette is on answering your own question (whether it's frowned upon), but I'll post my solution anyway...
So, we have $\displaystyle g(\theta) = \frac{1}{n}\sum\left(\frac{X_i}{X_i+\theta Y_i} - \frac{1}{2} \right) = \frac{1}{n} \sum \left( U_i - \frac{1}{2} \right)$, and we verified that $g\left(\hat{\theta}\right)=0$. Plugging this into the Taylor expansion and rearranging gives $$ \sqrt{n}\left(\hat{\theta} - \theta\right) = -\frac{\sqrt{n}}{g'(\theta)} \left( g(\theta) + o_p\left(\hat{\theta} - \theta\right)\right).$$ Since $\hat{\theta} \xrightarrow{p} \theta$, the $o_p$ term is asymptotically negligible, so it suffices to find the limit of $-\sqrt{n}\,g(\theta)/g'(\theta)$.

Next, using $\frac{Y_i}{X_i + \theta Y_i} = \frac{1-U_i}{\theta}$, $$g'(\theta) = -\frac{1}{n}\sum \frac{X_iY_i}{(X_i+\theta Y_i)^2} = -\frac{1}{n} \sum U_i \frac{Y_i}{X_i + \theta Y_i} = -\frac{1}{n\theta} \sum \left(U_i - U_i^2\right).$$

Since the $U_i$ are i.i.d. $\mathcal{U}([0,1])$ with mean $\frac{1}{2}$ and variance $\frac{1}{12}$, the Central Limit Theorem gives $$\frac{\sum \left(U_i - \frac{1}{2}\right)}{\sqrt{n}\sqrt{\frac{1}{12}}} \xrightarrow{d} \mathcal{N}(0,1),$$ and the Weak Law of Large Numbers gives $$\frac{\sum \left(U_i - U_i^2\right)}{n} \xrightarrow{p} \mathbb{E}U_i - \mathbb{E}U_i^2 = \frac{1}{2} - \frac{1}{3} = \frac{1}{6}.$$ Therefore, by Slutsky's theorem, $$-\frac{\sqrt{n}\, g(\theta)}{g'(\theta)} = \frac{\sqrt{\frac{1}{12}} \cdot \frac{\sqrt{n}\,g(\theta)}{\sqrt{\frac{1}{12}}}}{\frac{1}{\theta} \cdot \frac{\sum\left(U_i-U_i^2\right)}{n}} \xrightarrow{d} \frac{\sqrt{\frac{1}{12}}}{\frac{1}{\theta}\cdot\frac{1}{6}}\, \mathcal{N}(0,1) = \sqrt{3}\,\theta\,\mathcal{N}(0,1) = \mathcal{N}\left(0,3\theta^2\right),$$ and so $$\sqrt{n}\left(\hat{\theta} - \theta\right) \xrightarrow{d} \mathcal{N}\left(0,3\theta^2\right).$$
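(As an informal check of the result, here is a small Monte Carlo experiment of my own: solve $g(\theta)=0$ by bisection over many replications and compare the empirical variance of $\sqrt{n}(\hat{\theta}-\theta)$ to $3\theta^2$. All parameter choices are arbitrary.)

```python
import numpy as np

rng = np.random.default_rng(2)  # arbitrary seed

def g(theta, X, Y):
    """Estimating equation (1/n) * sum( X_i/(X_i + theta Y_i) - 1/2 )."""
    return np.mean(X / (X + theta * Y) - 0.5)

def theta_hat(X, Y, lo=1e-8, hi=1e8, tol=1e-10):
    """Bisection for the unique root of the (strictly decreasing) g."""
    while hi - lo > tol * max(1.0, lo):
        mid = 0.5 * (lo + hi)
        if g(mid, X, Y) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

n, theta, reps = 200, 1.5, 2000
z = np.empty(reps)
for r in range(reps):
    lam = rng.uniform(0.5, 3.0, size=n)      # arbitrary rates lambda_i
    X = rng.exponential(1.0 / lam)           # X_i ~ E(lambda_i)
    Y = rng.exponential(1.0 / (theta * lam)) # Y_i ~ E(theta * lambda_i)
    z[r] = np.sqrt(n) * (theta_hat(X, Y) - theta)

print(z.var())  # should be close to 3 * theta**2 = 6.75
```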