Let me begin by saying that I'm not entirely sure if this is the correct forum, or if Cross Validated would be more suitable. The problem I'm about to describe is statistical in nature, but I believe that the part where I get stuck is more mathematics than data analysis.
I have a system $m_I$ that is in one of three states: 1, 0, or $-1$. To determine which state it is in, I measure a different (related) system $N$ times and count how many 'positive occurrences' $k$ I get. The probability of a positive occurrence is $p_i$, so $k$ is binomially distributed. Each state of $m_I$ corresponds to a different $p_i$, so in principle I am dealing with three binomial distributions. These three distributions have two intersections, $k_1$ and $k_2$. To decide which state of $m_I$ I'm dealing with, I simply check which distribution my measurement most likely belongs to by seeing on which side of $k_1$ and $k_2$ it falls.
I want to optimize the fidelity of correctly identifying $m_I$, which I call $F_{avg} = \frac{1}{3}(F_1+F_0+F_{-1})$ and which is defined using the cumulative distribution function of the binomial distribution
$ P(X \leq k) = \textrm{Bincdf}(k,n,p) = \sum_{i=0}^{k} \binom{n}{i} p^i(1-p)^{n-i} $
and
$ F_{1} = 1 - P(X \leq k_2) $
$ F_{0} = P(X \leq k_2) - P(X \leq k_1) $
$ F_{-1} = P(X \leq k_1) $
The fidelities are thus simply the probability that a measured $k$ falls in the interval assigned to one of the three binomial distributions (each $P$ evaluated with the corresponding $p_i$). Here the probabilities $p_i$ are fixed, but $k_1, k_2$ vary when I vary $N$.
Now, I can numerically calculate $k_1$ and $k_2$ for a specific $N$, and thus also just evaluate $F_{avg}$ for various $N$. This shows, as is to be expected, that it converges to 1 very quickly as the distributions become narrower for higher $N$. But here's where the problem comes in. What this model does not include is that every time I make a measurement (and I make $N$ in total), there is a probability $P$ that something goes wrong and that my approach is no longer valid. Basically, by measuring I have a chance of changing $m_I = 1$ into $m_I = 0$, and similarly for the other states. Now, for simplicity let's say that 1 can only go to 0, 0 to $-1$, and $-1$ to 1, all with the same probability $P$.
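The flip-free part of the calculation can be sketched numerically as follows. The probabilities $p_{-1} < p_0 < p_1$ below are assumed illustrative values (the real ones come from the experiment), and the thresholds are found as the last $k$ at which the lower-$p$ distribution is still the likelier one, which is one common way to locate the intersections:

```python
from math import comb

# Assumed success probabilities for the three states, for illustration only.
p_m1, p_0, p_1 = 0.2, 0.5, 0.8

def pmf(k, n, p):
    """Binomial probability mass function."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def cdf(k, n, p):
    """Binomial cumulative distribution function P(X <= k)."""
    return sum(pmf(i, n, p) for i in range(k + 1))

def thresholds(n):
    """k1, k2: last k at which the lower-p distribution is likelier."""
    k1 = max(k for k in range(n + 1) if pmf(k, n, p_m1) >= pmf(k, n, p_0))
    k2 = max(k for k in range(n + 1) if pmf(k, n, p_0) >= pmf(k, n, p_1))
    return k1, k2

def f_avg(n):
    """Average fidelity F_avg = (F_1 + F_0 + F_{-1}) / 3."""
    k1, k2 = thresholds(n)
    F_m1 = cdf(k1, n, p_m1)
    F_0 = cdf(k2, n, p_0) - cdf(k1, n, p_0)
    F_1 = 1 - cdf(k2, n, p_1)
    return (F_1 + F_0 + F_m1) / 3

for n in (10, 50, 100):
    print(n, f_avg(n))
```

With these assumed $p_i$ the fidelity indeed climbs toward 1 rapidly as $N$ grows.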
My problem is: how do I include this in my fidelities? My first thought was to simply multiply each $F$ by a factor of $(1-P)^N$, which is the probability that the system has not changed in $N$ measurements. But my feeling is that this is not correct, and that the proper approach is a little more intricate. Intuitively the notion of a convolution comes to mind, but I'm not sure how applicable that is.
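One way to probe this intuition is to treat the state dynamics as a Markov chain: each measurement applies the cyclic flip $1 \to 0 \to -1 \to 1$ with probability $P$. A quick sketch (with an assumed value of $P$, for illustration) shows that the probability of *ending* in the initial state after $N$ measurements exceeds $(1-P)^N$, because three flips return the state to where it started, so the simple multiplicative factor only counts the no-flip-at-all paths:

```python
import numpy as np

P = 0.01   # assumed per-measurement flip probability, for illustration
N = 100

# States ordered (1, 0, -1); cyclic flips 1->0, 0->-1, -1->1 each with prob P.
# T[i][j] is the probability of going from state i to state j in one step.
T = np.array([[1 - P, P,     0    ],
              [0,     1 - P, P    ],
              [P,     0,     1 - P]])

# Distribution over states after N measurements, starting in state 1.
dist = np.linalg.matrix_power(T, N)[0]
stay = dist[0]

print("P(in state 1 after N measurements):", stay)
print("(1 - P)^N:                         ", (1 - P) ** N)
```

This also hints at why a fidelity correction needs more than a global factor: the counts $k$ you observe are drawn from a mixture over *when* the flips happen, not just from whether the state survived.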
I apologize if my story is vague. I've tried rewriting it about 4 times now, and this is as legible as I can currently make it, but that's also because I know the situation very well. So if there's any part that is particularly unclear, please let me know and I'll try to rephrase it!
The first part, i.e. without the possibility of a measurement influencing the system, sounds like a perfect problem for Bayesian statistics: http://en.wikipedia.org/wiki/Bayesian_statistics, http://en.wikipedia.org/wiki/Bayesian_inference.
The basic idea is that you include all the possible models in your probability space, and view each measurement as a realization of a particular outcome under a particular model. In your case, each model corresponds to a possible state $m_i$, and each model is a Bernoulli distribution with parameter $p_i$. You start out with some initial probability distribution over the states. Each measurement then updates this probability distribution, and in the end your probability distribution reflects the likelihood of the system being in each of the states $m_i$.
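A minimal sketch of this update, using assumed $p_i$ values for illustration: each 0/1 outcome multiplies the posterior over the three states by the corresponding Bernoulli likelihood and renormalizes.

```python
# Assumed success probabilities per state, for illustration only.
p = {1: 0.8, 0: 0.5, -1: 0.2}
prior = {s: 1 / 3 for s in p}   # uniform prior over the states m_i

def update(posterior, outcome):
    """One Bayes update for a single binary measurement outcome."""
    unnorm = {s: posterior[s] * (p[s] if outcome else 1 - p[s])
              for s in posterior}
    z = sum(unnorm.values())
    return {s: v / z for s, v in unnorm.items()}

# Example: 8 positives out of 10 measurements.
post = prior
for outcome in [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]:
    post = update(post, outcome)
print(post)   # mass concentrates on state 1
```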
To incorporate the possibility of state changes during the measurement, you could for example look at the final distribution and see if it clearly identifies one state above all others, i.e. if its variance is low enough. If not, you'd simply continue measuring until the variance looks acceptable.
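The sequential idea above can be sketched as follows. As a stand-in for "the distribution clearly identifies one state", this uses a threshold on the leading posterior probability rather than the variance directly; the $p_i$ values, true state, and threshold are all assumed for illustration:

```python
import random

random.seed(0)
p = {1: 0.8, 0: 0.5, -1: 0.2}   # assumed success probabilities
true_state = 0                   # assumed, for the simulation
threshold = 0.99                 # assumed confidence level

post = {s: 1 / 3 for s in p}     # uniform prior
n = 0
while max(post.values()) < threshold:
    # Simulate one binary measurement from the true state.
    outcome = random.random() < p[true_state]
    # Bayes update with the Bernoulli likelihood of that outcome.
    like = {s: (p[s] if outcome else 1 - p[s]) for s in p}
    z = sum(post[s] * like[s] for s in p)
    post = {s: post[s] * like[s] / z for s in p}
    n += 1

decided = max(post, key=post.get)
print(f"decided state {decided} after {n} measurements")
```

Note that this stopping rule does not by itself model the flips; it only keeps collecting evidence until one hypothesis dominates.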