If we're using Bayesian inference in two situations where everything is the same, except that the prior in one is n times the prior in the other, is there anything we can say about how the posteriors will compare? Beyond the fact that the posterior with the scaled-up prior will be bigger if n>1?
I guess this isn't really Bayes-specific... just, is there anything we can say about how $\frac{xy}{xy+z(1-y)}$ and $\frac{nxy}{nxy+z(1-ny)}$ compare, where $0<x<1$, $0<y<1$, $0<z<1$, and $0<ny<1$ (so the scaled prior is still a probability)?
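Playing with made-up numbers (my own values, nothing special about them; reading x as P(e|H), y as the prior, z as P(e|~H)), the one clean pattern I noticed is that in odds form the likelihood ratio x/z cancels, so multiplying the prior by n multiplies the posterior *odds* by exactly the ratio of the prior odds:

```python
# Quick numeric check with arbitrary values; requires 0 < n*y < 1
# so the scaled prior is still a probability.

def posterior(x, y, z):
    """P(H|e) = x*y / (x*y + z*(1 - y))."""
    return x * y / (x * y + z * (1 - y))

def odds(p):
    """Convert a probability to odds."""
    return p / (1 - p)

x, y, z, n = 0.9, 0.1, 0.2, 3.0

p1 = posterior(x, y, z)      # posterior with prior y
p2 = posterior(x, n * y, z)  # posterior with prior n*y

print(p1, p2)  # p2 > p1 since n > 1

# The likelihood ratio x/z cancels in odds form, so the posterior odds
# scale by exactly the ratio of the prior odds: odds(ny) / odds(y).
assert abs(odds(p2) / odds(p1) - odds(n * y) / odds(y)) < 1e-12
```

So the comparison seems cleanest in odds: posterior odds = (x/z) times prior odds, and scaling the prior from y to ny takes the prior odds from y/(1-y) to ny/(1-ny).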
Also... say we're going to quarantine animals if the probability they're infected is greater than .6, and the base rate of infection is higher in dogs than in cats, but our other evidence bears equally well on dog and cat infection (for all the kinds of evidence, p(e|infected dog)=p(e|infected cat) and p(e|uninfected dog)=p(e|uninfected cat)). The higher prior means there will be a higher "false positive" (quarantined but uninfected) rate for dogs, right?
eta: I guess this depends on the particular priors and on the P(E|H) and P(E|~H) of the different kinds of evidence we're considering. But assuming there's some combination of evidence that would reach the threshold for a dog but not for a cat, an uninfected dog would be more likely to be mistakenly quarantined than an uninfected cat... right?
I really appreciate any help. It's been a while since I've done math and I can't figure out what to google.