I have $M$ Bernoulli trials, each independent, with probabilities of success given by $\left[p_{1}, p_{2}, \dots, p_{M}\right].$
I am trying to find an estimate for the number of successes in M trials.
My initial idea was that $\sum_{i=1}^{M} p_{i}$ would be a "good" estimator, since it is the expected number (mean) of successes.
But now I am thinking this is just the MMSE estimator. So if I have a cost function other than mean squared error, it may not be a good estimator.
In my case, the cost function increases exponentially if the estimation error is negative (i.e., if we underestimate) and increases linearly if we overestimate. (For example, the left half of the cost function's plot could look like that of a squared-error cost and the right half like that of an absolute-error cost.)
Given this cost function, can someone help me derive the best estimate for the number of successes?
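For concreteness, here is a brute-force numerical sketch under one possible reading of the cost above (the exact cost form and function names are my assumptions, not given in the question): given the PMF over $0 \dots M$, evaluate the expected cost of every integer candidate and pick the minimizer.

```python
def asymmetric_cost(estimate, true_count):
    # One concrete reading of the cost described above (an assumption):
    # squared error when we underestimate (negative estimation error),
    # absolute error when we overestimate -- matching the
    # MSE-left / MAE-right example in the question.
    err = estimate - true_count
    return err ** 2 if err < 0 else abs(err)

def best_estimate(pmf):
    # pmf[k] = P(exactly k successes), k = 0..M.
    # Brute force: evaluate the expected cost of every integer
    # candidate estimate and return the minimizer.
    M = len(pmf) - 1
    return min(range(M + 1),
               key=lambda c: sum(pmf[k] * asymmetric_cost(c, k)
                                 for k in range(M + 1)))
```

For instance, with $p = [0.9, 0.9]$ the PMF is $[0.01, 0.18, 0.81]$ and the minimizer is 2, above the mean of 1.8: the heavier underestimation penalty pushes the estimate upward.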
A related problem is to find the probability mass function (PMF), i.e. the probability of every possible number of successes. The expected value is then simply the dot product of the PMF with the integers $0 \dots M$.
You could calculate this PMF directly in at most $O(M^2)$ time and $O(M)$ memory.
Iteratively convolve the previous PMF with $[\,(1-p_i),\; p_i\,]$ for the $i^{\text{th}}$ trial, e.g.
$$ F_0 = [1] \\ F_1 = [\,(1-p_1),\; p_1\,] \\ \vdots \\ F_i = F_{i-1} \ast [\,(1-p_i),\; p_i\,] $$
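A minimal NumPy sketch of this iteration (the function name is mine); the result is, incidentally, the PMF of the Poisson binomial distribution:

```python
import numpy as np

def poisson_binomial_pmf(p):
    # F_0 = [1]; F_i = F_{i-1} * [1 - p_i, p_i] (discrete convolution).
    # After all M trials, pmf[k] = P(exactly k successes).
    pmf = np.array([1.0])
    for pi in p:
        pmf = np.convolve(pmf, [1.0 - pi, pi])
    return pmf
```

For example, `poisson_binomial_pmf([0.5, 0.5])` gives `[0.25, 0.5, 0.25]`, and the dot product of the PMF with `0..M` recovers $\sum_i p_i$.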
For extra credit, I think you could make this $O(M \log M)$ processing by using fast convolution on longer subsequences.
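One way to sketch that extra-credit idea in plain NumPy (function names are mine): split the trials in half, compute each half's PMF recursively, and merge the two halves with one FFT-based convolution. The merges across the $O(\log M)$ recursion levels make this $O(M \log^2 M)$ rather than $O(M \log M)$, but it is still far faster than the direct method for large $M$.

```python
import numpy as np

def fft_convolve(a, b):
    # Linear convolution via FFT: pad both inputs to the full output length.
    n = len(a) + len(b) - 1
    return np.fft.irfft(np.fft.rfft(a, n) * np.fft.rfft(b, n), n)

def poisson_binomial_pmf_fast(p):
    # Divide and conquer: PMF of each half of the trials, merged with a
    # single FFT convolution per recursion node.
    if len(p) == 1:
        return np.array([1.0 - p[0], p[0]])
    mid = len(p) // 2
    out = fft_convolve(poisson_binomial_pmf_fast(p[:mid]),
                       poisson_binomial_pmf_fast(p[mid:]))
    return np.clip(out, 0.0, None)  # FFT round-off can dip slightly below 0
```

The result should match the direct $O(M^2)$ convolution up to floating-point round-off.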