I think there is a correct way of representing confidence in the table below, but I was never strong at statistics. This is based on a practical problem.
In reality, the upgrade has gone through a number of QA tests and should not cause any breakages. In practice, though, an upgrade sometimes causes failures anyway. What I want to do is minimise the impact of such a failure.
- I have 1000 computers.
- I want to upgrade these computers with an identical upgrade.
- There is an unknown chance that this upgrade will break a computer.
- I want to do it efficiently (in as few rounds as possible).
- For each round, I want to quantify my confidence that no computer will break (or, equivalently, the likelihood that one will).
A long time ago I was told the "best" way to upgrade a large number of computers was a "doubling" method (geometric progression).
My specific question is:
A) How do I measure my confidence in the upgrade (i.e. I want to know, probabilistically, how many computers will break)? In the table below I think the confidence level is far too low. Alternatively, how do I report the probability of a failure occurring (which is probably more important, but perhaps is just the complement of the confidence)?
B) Is this actually (provably) efficient?
| 2^(n−1) | Upgrades per round (2^n) | Upgrades completed | Confidence level |
|--------:|-------------------------:|-------------------:|-----------------:|
| 1       | 2                        | 2                  | 0.2%             |
| 2       | 4                        | 6                  | 0.6%             |
| 4       | 8                        | 14                 | 1.4%             |
| 8       | 16                       | 30                 | 3.0%             |
| 16      | 32                       | 62                 | 6.2%             |
| 32      | 64                       | 126                | 12.6%            |
| 64      | 128                      | 254                | 25.4%            |
| 128     | 256                      | 510                | 51.0%            |
| 256     | 512                      | 1000               | 100.0%           |
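On question A: the percentages in the table are really cumulative *coverage* (fraction of the fleet upgraded), not confidence. One hedged way to turn "k clean upgrades so far" into a reportable number is the rule of three (an approximate 95% upper bound on the failure rate) plus a Bayesian posterior predictive for the next round. This is a sketch, assuming failures are independent and equally likely per machine, with function names of my own choosing:

```python
# Two standard ways to quantify confidence after k failure-free upgrades.
# Assumes independent, identically distributed breakage per machine.

def rule_of_three_upper_bound(k):
    """Approximate 95% upper bound on the per-machine failure
    probability after k upgrades with zero observed failures."""
    return 3.0 / k

def prob_next_round_clean(k, m):
    """Posterior predictive probability (uniform Beta(1,1) prior) that
    the next m upgrades all succeed, given k clean upgrades so far.
    The telescoping product of predictive terms gives (k+1)/(k+m+1)."""
    return (k + 1) / (k + m + 1)

if __name__ == "__main__":
    # After 30 clean upgrades, chance the next round of 32 is also clean:
    print(f"{prob_next_round_clean(30, 32):.3f}")   # 31/63 ≈ 0.492
    # 95% upper bound on the failure rate after 510 clean upgrades:
    print(f"{rule_of_three_upper_bound(510):.5f}")  # 3/510 ≈ 0.00588
```

Note how little certainty the early rounds buy you: after 30 clean upgrades, the chance the next round of 32 is clean is only about 49%, which is exactly why doubling keeps the at-risk batch proportional to the evidence gathered so far.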
Further notes.
The idea is that if a breakage happens, you fix the upgrade and restart from the beginning. If I had a formula, I could recalculate based on the reduced total population (since some machines would already be upgraded).
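The restart-with-a-smaller-population idea can be sketched as a schedule generator: the same doubling rounds, with the final round capped at whatever machines remain. The function name and interface are my own invention, not an established algorithm:

```python
# Doubling rollout schedule that can be re-run on a reduced population
# after a failure is found and the upgrade is fixed.

def doubling_schedule(population, start=2):
    """Return a list of (round_number, round_size, cumulative_done)
    for a doubling rollout, capping the last round at the machines
    still remaining."""
    rounds = []
    done, size, n = 0, start, 1
    while done < population:
        take = min(size, population - done)  # cap the final round
        done += take
        rounds.append((n, take, done))
        size *= 2
        n += 1
    return rounds

# Full fleet: reproduces the table above (9 rounds, final round capped
# at the 490 machines left after 510 are done).
full = doubling_schedule(1000)

# Suppose a failure surfaces in round 5, when 62 machines are already
# upgraded: fix the upgrade, then restart on the remaining 938.
rest = doubling_schedule(1000 - 62)
```

Whether you restart the doubling from 2 or resume at a larger round size after a fix is a judgment call; restarting from the smallest batch is the conservative choice, since the fixed upgrade is effectively a new, untested one.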
Not to over-complicate, but what if 1% of the machines are VVIPs and 10% are VIPs? I assume I just leave them until the end of the process, so they see the lowest probability of failure.
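Your assumption is the natural one: the later rounds carry the most accumulated failure-free evidence, so the highest-value machines should land there. A minimal sketch of that ordering (tier names, hostnames, and the random tiebreak are all illustrative assumptions):

```python
# Order machines so VIPs and VVIPs fall in the latest (safest) rounds.
import random

def upgrade_order(machines, tier):
    """Return machines sorted so ordinary ones go first, VIPs next,
    VVIPs last. `tier` maps machine -> 0 (normal), 1 (VIP), 2 (VVIP);
    machines missing from `tier` default to normal."""
    shuffled = machines[:]
    random.shuffle(shuffled)  # avoid any accidental ordering bias
    return sorted(shuffled, key=lambda m: tier.get(m, 0))

machines = [f"host{i:04d}" for i in range(1000)]
tier = {m: 2 for m in machines[:10]}           # 1% VVIP
tier.update({m: 1 for m in machines[10:110]})  # 10% VIP
order = upgrade_order(machines, tier)
# The final 110 slots of `order` are exactly the VIP + VVIP machines,
# so they are only upgraded once 890 ordinary machines have succeeded.
```

Feeding this ordered list into the doubling schedule means the VVIPs are upgraded only after roughly 99% of the fleet has survived the upgrade.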
Thank you!