Probability of union of events graph


I'm having some issues between the intuitive understanding of the probability of the union of some events and its formula:

From this pdf, in the serial circuit example, each $A_i$ represents a circuit component, with $P(A_i)$ the probability of failure of $A_i$.

Each $A_i$ being a different component, the $A_i$ are independent events.

This means that the inclusion-exclusion formula simplifies to:

$P(\bigcup A_i) = \sum P(A_i)$

Intuitively, I would expect that the more components I add, the higher the probability of failure, and that past a certain point the probability of having at least one failure in the circuit gets arbitrarily close to 1.

However, the formula does not seem to be bounded by 1 at first sight, which has to be incorrect.
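To make the issue concrete, here is a quick check of the sum formula with a constant $P(A_i)$ (the value 0.3 is just an illustration, not from the pdf):

```python
# Naive sum of failure probabilities, assuming a constant P(A_i) = 0.3.
p = 0.3
for i in range(1, 6):
    print(i, round(i * p, 2))
# The sum 0.3 * i exceeds 1 once i >= 4, so it cannot be a probability.
```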

Would you mind correcting me?

In particular, I suppose I incorrectly think of the probability space $S$, since I am unable to define it here.

I also would greatly appreciate a graph of $P(\bigcup A_i)$ with $i$ on the horizontal axis (let's assume $P(A_i)$ is constant). I would expect something that looks like a logarithm, with 1 as asymptote, but my formula is clearly wrong here.

BEST ANSWER

Your initial intuition is correct: If you have many possible ways something can fail, and just one failure causes the whole system to fail, then every new thing added makes it more likely the whole system will fail.

Mutually exclusive: only one of the events can happen at a time.

Independent: the chance of one event happening doesn't affect the chance of another event happening.

In the example where the person will be late if the buses aren't running, and the person will be late if they sleep too much: the events are not mutually exclusive (you can oversleep on the same morning the buses don't run), but they are independent (whether the buses run doesn't depend on whether you overslept, and whether you oversleep doesn't change if the buses are running).

So for the serial circuit, summing the probabilities is not the correct way to calculate it. Instead, multiply the probabilities that each component is working. In the example, there's an 80% chance the first part works and a 90% chance the second part works; multiplying gives a 72% chance that everything works. In every other case the circuit fails, so the failure probability is the remaining 28%.
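This complement calculation can be sketched in a few lines, using the 80% and 90% figures from the example:

```python
# Failure probability of a serial system via the complement:
# P(at least one failure) = 1 - P(every component works).
p_work = [0.80, 0.90]  # per-component success probabilities from the example

p_all_work = 1.0
for p in p_work:
    p_all_work *= p  # components fail independently, so probabilities multiply

p_fail = 1 - p_all_work
print(round(p_all_work, 2), round(p_fail, 2))  # 0.72 0.28
```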

This doesn't graph well as you add more and more points of failure if the probabilities differ, but you can make a graph when all the probabilities are equal. Suppose you want the chance that the system fails after adding $i$ points of failure, each with failure probability $f$. This is the graph of $P(i) = 1 - (1 - f)^i$. It's the same idea as above: convert everything to a probability of success $(1 - f)$, multiply all the success probabilities (hence the $i$-th power), and the failure probability is everything else. This curve does look like what you expected: it increases toward the asymptote 1.
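A few sample points of $P(i) = 1 - (1 - f)^i$ make the shape visible; the per-component failure probability $f = 0.1$ below is an arbitrary illustrative value:

```python
# P(i) = 1 - (1 - f)**i: probability of at least one failure among i
# independent components, each failing with probability f.
f = 0.1  # illustrative per-component failure probability

for i in (1, 5, 10, 20, 50):
    p = 1 - (1 - f) ** i
    print(i, round(p, 3))
# The values rise monotonically toward 1, matching the asker's intuition
# of a curve with 1 as its asymptote.
```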