Confidence Interval for division of means of two different data sets


I have two sets of data (unpaired). Set $A$ has a mean of $157$, $n=54, SD=54.0, SEM=7.3$. Set $B$ has a mean of $704$, $n=42,SD=142.4,SEM=22.0$. $A$ and $B$ are both approximately normally distributed.

I want to determine a 95% confidence interval for the ratio of the two means, $\bar{A}/\bar{B}$, which in this case is $157/704 \approx 0.223$.

I'm not sure how to go about this. The only idea I've come up with so far is to model the sampling distributions of the two means as normal (which seems reasonable given the sample sizes) and then plug them into the propagation of uncertainty formula:

$$ f=\frac{A}{B}, \space \sigma_f^2 \approx f^2 \left[\left(\frac{\sigma_A}{A}\right)^2 + \left(\frac{\sigma_B}{B}\right)^2 - 2\frac{\sigma_{AB}}{AB} \right] $$
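To make this concrete, here is a quick numeric sketch of that formula applied to my summary statistics, using the SEMs as the standard errors of the two means and taking the covariance term to be zero since the samples are unpaired (the 1.96 multiplier assumes approximate normality of the ratio):

```python
import math

# Summary statistics from the question
mean_a, sem_a = 157.0, 7.3   # set A: n = 54
mean_b, sem_b = 704.0, 22.0  # set B: n = 42

# Point estimate of the ratio of means
f = mean_a / mean_b

# Propagation-of-uncertainty (delta method) standard error;
# the covariance term is dropped because the samples are unpaired.
se_f = f * math.sqrt((sem_a / mean_a) ** 2 + (sem_b / mean_b) ** 2)

# Approximate 95% CI assuming the ratio is roughly normal
lo, hi = f - 1.96 * se_f, f + 1.96 * se_f
print(f"ratio = {f:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```

This gives a ratio of about $0.223$ with an interval of roughly $(0.199, 0.247)$, but I don't know whether the normal approximation to the ratio is justified here.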

Is this the right approach? Any advice would be appreciated.
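As a cross-check on whatever analytic interval is appropriate, I also tried a small Monte Carlo sketch: it draws each sample mean from a normal sampling distribution with the stated SEM (an assumption justified by the CLT at these sample sizes) and takes percentiles of the simulated ratios:

```python
import random

random.seed(0)

mean_a, sem_a = 157.0, 7.3   # set A: n = 54
mean_b, sem_b = 704.0, 22.0  # set B: n = 42

# Simulate the sampling distribution of the ratio of means by
# drawing each mean from its approximate normal sampling distribution.
ratios = sorted(
    random.gauss(mean_a, sem_a) / random.gauss(mean_b, sem_b)
    for _ in range(100_000)
)

# Percentile-based 95% interval for the ratio
lo = ratios[int(0.025 * len(ratios))]
hi = ratios[int(0.975 * len(ratios))]
print(f"95% interval ~ ({lo:.3f}, {hi:.3f})")
```

The simulated interval comes out close to the propagated one, which makes me think the delta-method answer is at least in the right ballpark, but I'd like to know if there's a more principled method (e.g. Fieller's theorem).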