Disclaimer: I struggle quite a bit with statistics, so let me know if something is difficult to understand.
I have two populations $A$ and $B$ with binary values ($1$ represents "success", $0$ "failure") and sizes $N_A$ and $N_B$. My goal is to determine which population has the higher absolute number of successes.
The number of successes in a sample of $n_i$ observations is the random variable $X_i \sim \operatorname{Hyp}(N_i, K_i, n_i)$, where the total number of successes $K_i$ in population $i \in \{A, B\}$ is unknown. Unfortunately, $K_i$ can take any value, including $0$ and $N_i$.
From what I understand, this can be handled well with Bayesian inference. I use the beta-binomial distribution with $\alpha=\beta=0.5$ as the conjugate prior for $K_i$, and thereby obtain the two posterior distributions. Finally, I compute the distribution of the difference $Z = K_B - K_A$ by convolving the posteriors: $h_Z(z \mid x_A, x_B)$.
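For concreteness, this construction can be sketched in Python with SciPy (all numbers below are made up for illustration). It relies on the standard conjugacy result that a $\operatorname{BetaBin}(N, \alpha, \beta)$ prior on $K$, combined with observing $x$ successes in a hypergeometric sample of size $n$, yields the posterior $K = x + M$ with $M \sim \operatorname{BetaBin}(N-n,\ \alpha+x,\ \beta+n-x)$:

```python
import numpy as np
from scipy.stats import betabinom

def posterior_pmf(N, n, x, a=0.5, b=0.5):
    """Posterior pmf over K (total successes in a population of size N)
    after observing x successes in a sample of n, under a
    BetaBin(N, a, b) prior. Conjugacy gives the posterior as
    K = x + M with M ~ BetaBin(N - n, a + x, b + n - x)."""
    pmf = np.zeros(N + 1)
    support = np.arange(N - n + 1)
    pmf[x + support] = betabinom.pmf(support, N - n, a + x, b + n - x)
    return pmf

# Hypothetical numbers, purely for illustration:
N_A, n_A, x_A = 100, 30, 12
N_B, n_B, x_B = 80, 25, 13

post_A = posterior_pmf(N_A, n_A, x_A)
post_B = posterior_pmf(N_B, n_B, x_B)

# Distribution of Z = K_B - K_A via discrete convolution; reversing
# post_A handles the minus sign. Entry m corresponds to z = m - N_A.
h_Z = np.convolve(post_B, post_A[::-1])
z_values = np.arange(-N_A, N_B + 1)  # support of Z
```

The reversal trick works because $P(Z = z) = \sum_k P(K_B = k)\,P(K_A = k - z)$, which is exactly the convolution of $h_{K_B}$ with the flipped $h_{K_A}$.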
Question 1: How do I interpret the outcome? My best guess would be to compute the CDF $H_Z(z)=\sum_{k \le z} h_Z(k \mid x_A, x_B)$ (a sum, since $Z$ is discrete). Because $Z = K_B - K_A$, we have $H_Z(0) = P(K_B \le K_A)$. Then I would choose $A$ as having more succeeding members than $B$ if $H_Z(0) > 0.5$; for $H_Z(0) < 0.5$, $B$ seems to have more succeeding members. Is it correct that $H_Z(0)$ (respectively $1 - H_Z(0)$) will also give me the probability that this decision is correct?
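One way to sanity-check this interpretation is Monte Carlo: draw $K_A$ and $K_B$ from their posteriors and estimate the relevant probabilities directly. A sketch with SciPy and hypothetical numbers, using the conjugate-posterior form $K_i = x_i + M_i$, $M_i \sim \operatorname{BetaBin}(N_i - n_i,\ \alpha + x_i,\ \beta + n_i - x_i)$:

```python
import numpy as np
from scipy.stats import betabinom

rng = np.random.default_rng(0)

# Hypothetical population/sample numbers, purely for illustration.
N_A, n_A, x_A = 100, 30, 12
N_B, n_B, x_B = 80, 25, 13
a = b = 0.5     # prior parameters from the question
S = 200_000    # number of posterior draws

# Posterior draws of the total number of successes in each population.
K_A = x_A + betabinom.rvs(N_A - n_A, a + x_A, b + n_A - x_A, size=S, random_state=rng)
K_B = x_B + betabinom.rvs(N_B - n_B, a + x_B, b + n_B - x_B, size=S, random_state=rng)

H0 = np.mean(K_B - K_A <= 0)        # estimate of H_Z(0) = P(K_B <= K_A)
p_A_strict = np.mean(K_A > K_B)     # P(A truly has strictly more successes)
p_tie = np.mean(K_A == K_B)         # ties have positive probability
```

One caveat this makes visible: since $Z$ is discrete, $P(Z = 0)$ is generally not negligible, so $H_Z(0)$ and $1 - H_Z(0)$ do not cleanly partition the probability between "A has more" and "B has more"; you may want to report $P(K_A > K_B)$, $P(K_A = K_B)$, and $P(K_A < K_B)$ separately.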
Is there a better way to approach this? I have read about cost functions for decision making, but I have a hard time quantifying the cost of a wrong decision.
Question 2: I would like to include a second layer of uncertainty in the model. The members of $A$ and $B$ can only be classified as "success" or "failure" (i.e., $1$ or $0$) with error: $P(\text{observed "success"} \mid \text{true "failure"}) > 0$, while $P(\text{observed "failure"} \mid \text{true "success"}) = 0$. How can I incorporate this into the interpretation of my Bayesian inference results? Since the two stages are not independent, I am not sure how to handle this. Is simulation the right approach here?
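If the false-positive rate $\varepsilon = P(\text{observed "success"} \mid \text{true "failure"})$ can be treated as known, one alternative to full simulation is to fold it into the likelihood by summing over the unknown number of *true* successes in the sample: an observed count $x_{\text{obs}}$ arises from $x_{\text{true}}$ genuine successes plus $x_{\text{obs}} - x_{\text{true}}$ misrecorded failures. A grid-based sketch in Python/SciPy ($\varepsilon$ and all numbers are hypothetical assumptions, not from the question):

```python
import numpy as np
from scipy.stats import hypergeom, binom, betabinom

def posterior_with_misclass(N, n, x_obs, eps, a=0.5, b=0.5):
    """Posterior pmf over K (true successes in a population of size N)
    when each truly failing sampled item is misrecorded as a success
    with known probability eps, and true successes are never
    misrecorded. Computed on the full grid K = 0..N."""
    K = np.arange(N + 1)
    prior = betabinom.pmf(K, N, a, b)
    like = np.zeros(N + 1)
    for x_true in range(x_obs + 1):
        # P(x_true genuine successes in the sample | K)  times
        # P(x_obs - x_true of the n - x_true failures flip to "success")
        like += (hypergeom.pmf(x_true, N, K, n)
                 * binom.pmf(x_obs - x_true, n - x_true, eps))
    post = prior * like
    return post / post.sum()

# Hypothetical example: population of 50, sample of 10, 4 observed
# successes, assumed 10% false-positive rate.
post = posterior_with_misclass(50, 10, 4, eps=0.10)
```

The resulting posteriors for $A$ and $B$ can then be convolved exactly as before to obtain $h_Z$. With $\varepsilon = 0$ this reduces to the plain conjugate posterior; if $\varepsilon$ is itself uncertain, simulation (or putting a prior on $\varepsilon$ and averaging) would be the natural extension.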
For context: To determine "success" and "failure", I use sampling to check whether all items in (another) population have a value of 1.