I'm working on an idea for a fault detection algorithm, and I've boiled it down (I think) to the following problem.
A box contains 10 balls. Each ball is either red or blue. We know nothing about how the colors of the balls were determined. Drawing from the box three times (with replacement) yields three red balls; drawing four times yields four red balls; and so on. How can I quantify my increasing confidence that there are few (or no) blue balls in the box?
I am familiar with using probability distributions to get confidence intervals, but since I assume the distribution that generated the ratio of colors is unknown, I can't compute the conditional probability. Even recommended reading material would be fantastic.
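To make this concrete, here is the simplest Bayesian sketch I can think of. It assumes a uniform prior over the number of blue balls, which is itself an assumption I'm not sure is justified: with `k` blue balls out of 10, the probability of drawing red `n` times in a row (with replacement) is `((10 - k) / 10) ** n`, and the posterior just renormalizes those likelihoods.

```python
def posterior_no_blue(n_red_draws, n_balls=10):
    """Posterior probability that the box holds zero blue balls,
    given n_red_draws consecutive red draws and a uniform prior
    over the number of blue balls (my assumption)."""
    # Likelihood of n consecutive red draws for each possible count k of blue balls.
    likelihoods = [((n_balls - k) / n_balls) ** n_red_draws
                   for k in range(n_balls + 1)]
    # Uniform prior cancels; posterior for k = 0 is its share of the total.
    return likelihoods[0] / sum(likelihoods)

for n in (3, 4, 10, 30):
    print(n, posterior_no_blue(n))
```

The posterior for "no blue balls" grows toward 1 as the run of red draws lengthens, which is the kind of increasing confidence I'm trying to formalize; I just don't know whether assuming a uniform prior is defensible when I claim to know nothing about how the colors were chosen.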
ONLY FOR FURTHER CONTEXT: the original problem is that I have a Markov process in which a fault may occur at any time step. The fault is non-fatal (the process continues), and the fault goes undetected unless a check is made during that time step. Making a check is guaranteed to detect a fault if it occurred in that step. Absolutely nothing is known about the probability of a fault, except that it is time-independent. If a check is made and a fault is detected, the process will stop (i.e., it's an absorbing state).
I want to tune the frequency (probability) of making checks in order to achieve an arbitrary confidence level in the integrity of the system. If I want to arrive at, say, a 95% confidence that fewer than 5% of the time steps contain a fault, how often must I make checks?
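Here is my back-of-the-envelope attempt, which I'm not sure is right: if I treat each check as an independent observation and all of them come back clean, then under the hypothesis that faults occur in at least a fraction `q` of steps, the probability of `n` consecutive clean checks is at most `(1 - q) ** n`. Requiring that to fall below `1 - confidence` gives a minimum number of checks.

```python
import math

def checks_needed(max_fault_rate=0.05, confidence=0.95):
    """Smallest n such that, if the true per-step fault rate were
    at least max_fault_rate, observing n consecutive clean checks
    would have probability at most 1 - confidence.
    Solves (1 - max_fault_rate) ** n <= 1 - confidence for n."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - max_fault_rate))

print(checks_needed())  # 59 clean checks under these assumptions
```

So by this (possibly naive) reasoning I'd need 59 clean checks before claiming 95% confidence that faults occur in fewer than 5% of steps; what I don't know is how to go from "number of checks" to a per-step checking probability, or whether this frequentist bound is even the right framing.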
My gut says this isn't that hard a problem, but I'm not sure how to approach it. If you have any pointers, even reading material, I'd appreciate it.
Thanks in advance!