Monte Carlo simulation: confidence interval of the ratio of two beta distributions


I am trying to estimate the accuracy of a set of Monte Carlo simulations whose result is

\begin{equation} C=1-\frac{P_X(1)}{(P_Y(1))^2} \end{equation}

where $X$ and $Y$ are the results of two separate experiments, so they are independent. In particular, both simulations only output zeros or ones, so what I am measuring is the probability of getting a 1 in each of the two independently. To give an example, it's as if you had an unfair coin and were trying to determine its bias by tossing it many times.
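To make the setup concrete, here is a minimal sketch of one such experiment; the true probability `p_true` and the number of trials `N` are illustrative, not from the question:

```python
import random

random.seed(1)
p_true = 0.3        # hypothetical true probability of getting a 1
N = 10_000          # hypothetical number of trials

# Each trial outputs 0 or 1; the estimate of P(1) is the sample mean.
flips = [1 if random.random() < p_true else 0 for _ in range(N)]
p_hat = sum(flips) / N
```

The standard error of `p_hat` scales like $\sqrt{p(1-p)/N}$, which is exactly the accuracy question being asked about the combined quantity $C$.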

I already know that the conjugate prior for $X$ and $Y$ is a beta distribution. I am trying to compute how accurate my result can be given a sample of size $N$, and I would be satisfied if I could find the variance $\text{Var}(C)$ analytically.

I started by simplifying a bit and assuming the beta distribution is symmetric, which yields $\text{Var}(X) = \frac{\mu (1-\mu)}{1 + N_r}$, where $\mu$ is the mean and $N_r$ is the sample size. Similarly, $\text{Var}(Y^2)$ by itself shouldn't be a problem: since the sample size is large, I could approximate $Y$ with a Gaussian and compute the variance of its square as in Mean and variance of Squared Gaussian: $Y=X^2$ where $X\sim\mathcal{N}(0,\sigma^2)$?.
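As a quick sanity check (the counts `a`, `b` below are hypothetical), the quoted formula agrees with the exact variance of a $\text{Beta}(a,b)$ distribution when one sets $\mu = a/(a+b)$ and $N_r = a+b$:

```python
# Hypothetical Beta(a, b) posterior: mean mu = a/(a+b), sample size N_r = a+b.
a, b = 30.0, 70.0
mu = a / (a + b)
N_r = a + b

var_quoted = mu * (1 - mu) / (1 + N_r)            # formula from the question
var_exact = a * b / ((a + b) ** 2 * (a + b + 1))  # exact Beta(a, b) variance
```

The two expressions are algebraically identical, so the formula in fact holds for any beta, not only the symmetric case.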

However, I already know that the mean and variance of the inverse of a normal random variable do not exist, so how does one compute the variance of this ratio? Also, if you think that computing the variance is not the right approach, feel free to suggest something better.

On BEST ANSWER

If I correctly understand the things posted in comments, you have $X,Y\sim\text{i.i.d.} \operatorname{Uniform}(0,1)$ and two sequences of Bernoulli random variables, both conditionally i.i.d. given $X,Y,$ with respective probabilities $X$ and $Y$ of being equal to $1.$

The conditional distributions of $X,Y$ given the numbers of $1\text{s}$ and $0\text{s}$ for both of those sequences are given by \begin{align} & \Pr(X\in A\mid\alpha,\beta,\gamma,\delta) = \int_A \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)} x^{\alpha-1} (1-x)^{\beta-1}\, dx \text{ for } A \subseteq[0,1] \\[8pt] & \Pr(Y\in A\mid\alpha,\beta,\gamma,\delta) = \int_A \frac{\Gamma(\gamma+\delta)}{\Gamma(\gamma)\Gamma(\delta)} x^{\gamma-1} (1-x)^{\delta-1}\, dx \text{ for } A \subseteq[0,1] \\[6pt] & \text{where } \alpha-1,\beta-1 \text{ are the respective numbers of 1s and 0s from the first} \\ & \text{sequence and } \gamma-1,\delta-1 \text{ from the second (the uniform prior adds 1 to each parameter).} \end{align} (This result is central to the famous posthumous paper of Thomas Bayes published in 1763.)
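These posteriors can be sampled directly with the standard library, which gives a credible interval for $C$ without any analytic work; the counts below are illustrative, with $\alpha,\beta,\gamma,\delta$ used as the beta parameters exactly as in the densities above:

```python
import random

random.seed(0)
# Illustrative parameters: alpha, beta_ for the first sequence's posterior,
# gamma, delta for the second's.
alpha, beta_, gamma, delta = 40, 60, 70, 30

# Draw posterior samples X ~ Beta(alpha, beta_), Y ~ Beta(gamma, delta).
n = 100_000
xs = [random.betavariate(alpha, beta_) for _ in range(n)]
ys = [random.betavariate(gamma, delta) for _ in range(n)]

# Posterior samples of C = 1 - X / Y^2; empirical quantiles give a
# central 95% credible interval.
cs = sorted(1 - x / y ** 2 for x, y in zip(xs, ys))
lo, hi = cs[n // 40], cs[n - n // 40]
```

This sidesteps the nonexistent moments of the inverse Gaussian approximation entirely, since $Y$ is bounded away from the problematic region with overwhelming posterior probability.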

As an estimate of $C = 1 - \dfrac X {Y^2}$ my first thought is \begin{align} & \operatorname E(C\mid\alpha,\beta,\gamma,\delta) \\[8pt] = {} & \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha) \Gamma(\beta)} \cdot \frac{\Gamma(\delta+\gamma)}{\Gamma(\delta)\Gamma(\gamma)} \iint\limits_{[0,1]^2} \left( 1 - \frac x{y^2} \right) x^{\alpha-1} (1-x)^{\beta-1} y^{\gamma-1} (1 - y)^{\delta -1} \, d(x,y) \\[8pt] = {} & 1 - \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha) \Gamma(\beta)} \cdot \frac{\Gamma(\delta+\gamma)}{\Gamma(\delta)\Gamma(\gamma)} \cdot \iint\limits_{[0,1]^2} x^\alpha (1-x)^{\beta-1} y^{\gamma-3} (1 - y)^{\delta -1} \, d(x,y) \\[8pt] = {} & 1 - \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha) \Gamma(\beta)} \cdot \frac{\Gamma(\delta+\gamma)}{\Gamma(\delta)\Gamma(\gamma)} \left( \frac{\Gamma(\alpha+1)\Gamma(\beta)}{\Gamma(\alpha+1+\beta)} \cdot\frac{\Gamma(\gamma-2) \Gamma(\delta)}{\Gamma(\gamma-2+\delta)} \right) \\[8pt] = {} & 1 - \frac{(\gamma+\delta-2)(\gamma+\delta-1) \alpha }{(\alpha+\beta)(\gamma-2)(\gamma-1)}. \end{align}
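The closed form in the last line is easy to evaluate and to cross-check by Monte Carlo; the counts below are hypothetical:

```python
import random

def posterior_mean_C(alpha, beta_, gamma, delta):
    """Closed-form E(C | alpha, beta, gamma, delta) from the derivation above.

    Requires gamma > 2 so that E(Y**-2) is finite.
    """
    return 1 - ((gamma + delta - 2) * (gamma + delta - 1) * alpha) / (
        (alpha + beta_) * (gamma - 2) * (gamma - 1)
    )

# Cross-check against a direct Monte Carlo average with hypothetical counts.
random.seed(0)
alpha, beta_, gamma, delta = 40, 60, 70, 30
n = 200_000
mc = sum(
    1 - random.betavariate(alpha, beta_) / random.betavariate(gamma, delta) ** 2
    for _ in range(n)
) / n
```

Note the $\gamma > 2$ requirement: with fewer than two observed 1s in the second sequence (under the counting convention above), $\operatorname{E}(Y^{-2})$ diverges, which is the posterior analogue of the nonexistent inverse-normal moments the question worried about.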