Migrated to MO.
Background/Motivation:
We have $Z=X/Y$ where $X$ and $Y$ are independent and $X\sim\mathcal N(\mu,\sigma^2)$. The distribution of $Y$ is not important here. We can write the distribution and density functions of $Z$ in terms of expected values w.r.t. $Y$ as $$ F_Z(z)=\mathsf E\Phi\left(\frac{z|Y|-\operatorname{sign}(Y)\mu}{\sigma}\right) $$ and $$ f_Z(z)=\mathsf E\left(\frac{|Y|}{\sigma}\phi\left(\frac{zY-\mu}{\sigma}\right)\right), $$ where $\Phi(\cdot)$ and $\phi(\cdot)$ represent the standard normal cdf and pdf, respectively.

This leads to unbiased Monte Carlo estimators of the distribution and density functions. For example, given a sample $Y_1,\dots,Y_n$ we can estimate the distribution function $F_Z$ at the point $z$ via $$ \hat F_Z(z)=\frac{1}{n}\sum_{k=1}^n\Phi\left(\frac{z|Y_k|-\operatorname{sign}(Y_k)\mu}{\sigma}\right). $$

I am interested in evaluating the variance of these estimators as a function of $z$, i.e. $\mathsf{Var}(\hat F_Z)(z)$ and $\mathsf{Var}(\hat f_Z)(z)$.
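For concreteness, here is a minimal sketch of both unbiased estimators, cross-checked against the empirical cdf of $Z=X/Y$ at the same point. The lognormal choice for $Y$ and all parameter values are my own illustrative assumptions (the post leaves the law of $Y$ open):

```python
import math

import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (my choice): Y lognormal; any Y independent of X works.
mu, sigma = 1.0, 0.1
n = 100_000
Y = rng.lognormal(mean=0.0, sigma=0.5, size=n)

# Standard normal cdf and pdf.
Phi = np.vectorize(lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0))))

def phi(x):
    return np.exp(-0.5 * x ** 2) / math.sqrt(2.0 * math.pi)

def F_hat(z):
    # Unbiased estimator of F_Z(z) = E Phi((z|Y| - sign(Y) mu) / sigma).
    return np.mean(Phi((z * np.abs(Y) - np.sign(Y) * mu) / sigma))

def f_hat(z):
    # Unbiased estimator of f_Z(z) = E[(|Y|/sigma) phi((zY - mu)/sigma)].
    return np.mean(np.abs(Y) / sigma * phi((z * Y - mu) / sigma))

# Cross-check: the empirical cdf of Z = X/Y should agree with F_hat.
z = 1.5
X = rng.normal(mu, sigma, size=n)
F_emp = np.mean(X / Y <= z)
print(F_hat(z), F_emp, f_hat(z))
```

Both $\hat F_Z(z)$ and the brute-force empirical cdf are unbiased for $F_Z(z)$, so they should agree up to Monte Carlo error.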
Approach:
It turns out that in my application $\sigma\ll\sqrt{\operatorname{Var}Y}$, so much so that $X$ looks nearly constant in comparison to $Y$. As such, taking the limit $\sigma\to 0$ in the above expressions still gives excellent approximations to the cdf/pdf of $Z$. For example, taking the limit $\sigma\to0$ in the expression for the cdf, we use the fact that the normal cdf tends to a step function, giving the approximation $$ F_Z(z)\approx\mathsf E(\mathbf 1_{z|Y|-\operatorname{sign}(Y)\mu>0}), $$ and so we have the corresponding MC estimator $$ \hat F_Z(z)\approx\frac{1}{n}\sum_{k=1}^n\mathbf 1_{z|Y_k|-\operatorname{sign}(Y_k)\mu>0}. $$

This approximation is very convenient because $\mathbf 1_{z|Y|-\operatorname{sign}(Y)\mu>0}$ is Bernoulli distributed with success probability $p=\mathsf E(\mathbf 1_{z|Y|-\operatorname{sign}(Y)\mu>0})\approx F_Z(z)$; that is, we have the distributional approximation $\mathbf 1_{z|Y|-\operatorname{sign}(Y)\mu>0}\sim\operatorname{Binomial}(1,F_Z(z))$. As such we obtain the approximation $$ \mathsf{Var}(\hat F_Z)(z)\approx\frac{F_Z(z)(1-F_Z(z))}{n}. $$ This approximation turns out to be very good for my application.

However, I am unable to see how to extend this idea to estimate $\mathsf{Var}(\hat f_Z)(z)$. I would think that taking the limit $\sigma\to 0$ of the density would yield something like $$ f_Z(z)\approx\lim_{\sigma\to 0}\mathsf E\left(|Y|\frac{1}{\sigma}\phi\left(\frac{zY-\mu}{\sigma}\right)\right)=\mathsf E\left(|Y|\delta(zY-\mu)\right), $$ where $\delta$ denotes the Dirac delta. But it's unclear to me how to proceed from here. Is this last step even correct? Thoughts?
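The Bernoulli variance approximation for $\hat F_Z$ is easy to verify by replicating the indicator estimator; since each indicator is exactly Bernoulli with success probability $p$, the empirical variance across replications should match $p(1-p)/n$ closely. A small check, again with my illustrative lognormal choice for $Y$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (my choice); in the sigma -> 0 limit, sigma drops out.
mu = 1.0
z = 1.5
n, reps = 2_000, 500

def indicator_estimate(Y):
    # sigma -> 0 limit: Phi(...) becomes the indicator 1{z|Y| - sign(Y) mu > 0}.
    return np.mean(z * np.abs(Y) - np.sign(Y) * mu > 0)

# Reference success probability p ~ F_Z(z) from one large sample.
p_ref = indicator_estimate(rng.lognormal(0.0, 0.5, size=1_000_000))

# Empirical variance of the estimator over independent replications
# versus the Bernoulli formula p(1-p)/n.
F_hats = np.array(
    [indicator_estimate(rng.lognormal(0.0, 0.5, size=n)) for _ in range(reps)]
)
var_emp = F_hats.var(ddof=1)
var_bern = p_ref * (1.0 - p_ref) / n
print(var_emp, var_bern)
```

Here the agreement is no accident: for a mean of $n$ i.i.d. Bernoulli$(p)$ variables the variance is exactly $p(1-p)/n$, so the only error is in replacing $p$ by $F_Z(z)$.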
Edit:
Given $Y$, we note that $$ |Y|\frac{1}{\sigma}\phi\left(\frac{zY-\mu}{\sigma}\right) =\frac{1}{\sqrt{2\pi}\sigma/|Y|}\exp\left(-\frac{(zY-\mu)^2}{2\sigma^2}\right) =\frac{1}{\sqrt{2\pi}\sigma/|Y|}\exp\left(-\frac{(z-\mu/Y)^2}{2(\sigma/|Y|)^2}\right), $$ which, as a function of $z$, is a normal density with mean $\mu/Y$ and variance $(\sigma/|Y|)^2$. So taking the limit $\sigma\to 0$ in the expression for $f_Z$ gives $$ f_Z(z)\approx \mathsf E(\delta(z-\mu/Y)). $$ So it would seem that $$ \hat f_Z(z)\approx\frac{1}{n}\sum_{k=1}^n\delta(z-\mu/Y_k) $$ and $$ \mathsf{Var}(\hat f_Z)(z)\approx\frac{\mathsf{Var}(\delta(z-\mu/Y))}{n}=\frac{\mathsf E\,\delta^2(z-\mu/Y)-(\mathsf E\,\delta(z-\mu/Y))^2}{n}\approx \frac{f_Z(z)-f^2_Z(z)}{n}. $$ This leaves us with $$ \mathsf{Var}(\hat f_Z)(z)\approx\frac{f_Z(z)(1-f_Z(z))}{n}. $$ Is this sensible/correct? It's unclear to me whether this quantity is even guaranteed to be positive, since a density value $f_Z(z)$ need not be bounded by $1$.
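One way to probe the proposed approximation numerically is to replicate the exact finite-$\sigma$ estimator $\hat f_Z$ and compare its empirical variance across replications against $f_Z(z)(1-f_Z(z))/n$. A minimal sketch; the lognormal $Y$ and all parameter values are my illustrative assumptions, and I make no claim here about which way the comparison comes out:

```python
import math

import numpy as np

rng = np.random.default_rng(2)

# Illustrative parameters (my choice): sigma small but nonzero, so the exact
# finite-sigma density estimator can be replicated directly.
mu, sigma = 1.0, 0.01
z = 1.5
n, reps = 2_000, 500

def phi(x):
    return np.exp(-0.5 * x ** 2) / math.sqrt(2.0 * math.pi)

def f_hat(Y):
    # Unbiased estimator of f_Z(z) = E[(|Y|/sigma) phi((zY - mu)/sigma)].
    return np.mean(np.abs(Y) / sigma * phi((z * Y - mu) / sigma))

f_hats = np.array([f_hat(rng.lognormal(0.0, 0.5, size=n)) for _ in range(reps)])

f_ref = f_hats.mean()                 # MC estimate of f_Z(z)
var_emp = f_hats.var(ddof=1)          # variance observed across replications
var_prop = f_ref * (1.0 - f_ref) / n  # proposed f(1-f)/n approximation
print(f_ref, var_emp, var_prop)
```

In this setup $Z\approx\mu/Y$, which for a lognormal $Y$ makes the limiting density available in closed form, so `f_ref` can also be sanity-checked against the lognormal pdf at $z$.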