I'm working on a project in which I'm supposed to determine whether two objects (parts of moving machinery) will come into physical contact with one another given the uncertainty in their positions. I know the covariance of both parts, so it wasn't terribly difficult to formulate a probability density function. Integrating this over the extent of the objects yields the probability that they'll be in physical contact with one another. This is a fairly widespread approach, so I'm confident in it; I just wanted to set the scene.
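For concreteness, here's a minimal sketch of how such a contact probability might be estimated, assuming (purely for illustration) one-dimensional Gaussian position uncertainty for each part and contact whenever the gap falls below some clearance. The function name and parameters are hypothetical, not from the original setup:

```python
import random

def contact_probability(mu1, sigma1, mu2, sigma2, clearance,
                        n_samples=100_000, seed=42):
    """Monte Carlo estimate of the probability that two parts touch.

    Assumes 1-D Gaussian position uncertainty for each part; contact is
    declared when the sampled separation falls below `clearance`.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        x1 = rng.gauss(mu1, sigma1)  # sampled position of part 1
        x2 = rng.gauss(mu2, sigma2)  # sampled position of part 2
        if abs(x1 - x2) < clearance:
            hits += 1
    return hits / n_samples

# Nominal separation of 3 units, unit sigmas, clearance of 1 unit
p = contact_probability(0.0, 1.0, 3.0, 1.0, 1.0)
```

In practice one would integrate the joint density analytically or with a multivariate-normal CDF rather than by sampling, but the sketch shows the quantity being computed.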
The net result is that every time the parts get close (this is a machine with moving parts that are only aligned to within a certain tolerance), I can compute the probability that they'll touch each other. The nominal position is slightly different every time (but I know it based on the operation the machine is performing), so these probabilities will generally differ.
There are a couple of extra points about these individual probabilities and contact events that are relevant:
- They all take values $P_i \in \left[0, 1\right]$, which makes me quite confident they are valid probabilities in their respective sample spaces.
- Furthermore, once contact occurs the machine is stopped, so all the events are disjoint (two contacts cannot physically occur, i.e. $\forall_{i \neq j}\,P\left(C_i\cap C_j\right)=0$).
- I'm simulating the machine, so I do not know whether contact was or was not made; I only know the probability that contact would occur for a particular close approach (computed by integrating the probability density function described earlier).
The problem is that I'd like to know the combined probability that the parts will be in contact over an arbitrarily long time interval. Simply adding the probabilities of $N$ events (physical contacts $C_i$), as one would do if they were all in the same sample space (noting that the events are disjoint):
$$P\left(\bigcup_{i=1}^N C_i\right)=\sum_{i=1}^N P(C_i)$$
doesn't work: it is not uncommon for the set of events to have individual probabilities on the order of tens of percent, so the overall sum can very easily exceed $1.0$. This is quite clearly dodgy. I'd expect the combined probability to get closer and closer to $1.0$ the more times the two parts get close to one another (just as tossing a coin more times makes at least one heads more likely).
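To see the failure concretely, here's a quick sketch with made-up per-approach probabilities of the size mentioned above:

```python
# Hypothetical per-approach contact probabilities (illustrative only).
probs = [0.4, 0.35, 0.5]

# Treating the events as disjoint in one sample space and summing:
naive_sum = sum(probs)  # 1.25 -- exceeds 1.0, so it cannot be a probability
```

The sum keeps growing without bound as more close approaches are added, which is exactly the problem described.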
But I fail to see how I could combine the probabilities, in the form I know them in, to produce this overall probability figure. I feel there is some normalisation that should take place here; some of my colleagues and people on this forum share this opinion. It was even suggested in my previous question on this topic.
I'd really appreciate any help. Not even a solution but just a reference or a hint what to look for/books or articles to read would still be amazing. Thank you.
EDIT
There seems to be a lot of confusion as to what I mean. Based on what I always find most explanatory, I've come up with a diagram of the situation.
Every event (a time when the two parts get close) has an index. I do not know a priori how many events there will be, and I would like to keep all this general and extensible, so $i$ can be any number. The probability of event $i$ occurring, $P_i$, is computed externally by integrating a probability density function, as mentioned before.
What I want to know is: what is the probability of any event occurring? Having drawn this diagram, I feel that the probability of no contact occurring over an arbitrary number of close approaches $N$ is:
$$P\left(\bigcap_{i=1}^N \lnot C_i\right)=\prod _{i=1}^N \left(1-P(C_i)\right)$$
So probability of any of the contacts taking place would be:
$$1-\prod _{i=1}^N \left(1-P(C_i)\right)$$
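The complement-of-products formula above can be sketched in a few lines (the probabilities are hypothetical, and independence of the close-approach outcomes is assumed, as in the derivation):

```python
import math

def prob_any_contact(probs):
    """P(at least one contact) = 1 - prod(1 - P_i),
    assuming the close-approach outcomes are independent."""
    p_none = math.prod(1.0 - p for p in probs)
    return 1.0 - p_none

probs = [0.4, 0.35, 0.5]  # hypothetical per-approach contact probabilities
combined = prob_any_contact(probs)  # 1 - 0.6 * 0.65 * 0.5 = 0.805
```

Note that the result always stays in $[0, 1]$ and only increases as more close approaches are appended, matching the intuition about repeated coin tosses.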
Am I correct? Does it matter that the probabilities $P_i$ are computed in their respective sample spaces?
EXTRA INVESTIGATION RESULTS
I've set up a simple numerical example in MS Excel just to see what would happen. For each event I seeded a random number in the interval $\left[0,1\right)$ (the probability $P_i$ of the parts coming into contact) and computed its complement $1-P_i$ (the probability of that contact not occurring).
I then computed the product of all the probabilities of an event not happening, as a function of the number of events included (denoted Accumulated probability of no collision). For every examined number of close approaches I also computed the complement of the Accumulated probability of no collision, i.e. the Accumulated probability of collision, as presented in the discussion above. The results are as follows and make perfect sense to me.
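The same experiment can be replicated outside Excel; here's one possible sketch (seed and event count chosen arbitrarily):

```python
import random

rng = random.Random(0)
n_events = 20
probs = [rng.random() for _ in range(n_events)]  # random P_i in [0, 1)

# Running product of the "no contact" complements, one entry per event.
acc_no_collision = []
running = 1.0
for p in probs:
    running *= (1.0 - p)
    acc_no_collision.append(running)

# Accumulated probability of collision is the complement at each step.
acc_collision = [1.0 - q for q in acc_no_collision]
```

As in the spreadsheet, the no-collision curve decreases monotonically towards $0$ and the collision curve climbs monotonically towards $1$.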
The only question is, however: is this approach correct? Please tell me what you think. If you could provide a reference of some sort so I could actually read up on this and understand it, that'd be even better. Thanks.

OK, it looks as though it was a bit too much to ask. So I'll just go with what I've figured out myself. Most of the reasoning is in the question itself, so I'll limit this answer to a simple analogy that made me think that what I'm doing is OK.
If you imagine 2 consecutive fair coin tosses, the tree diagram will look as follows:
If you change the probabilities so they aren't distributed 50-50 every time, and terminate the tree every time you get heads (H in the picture), you'll recover my situation.
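The coin-toss analogy can be checked numerically. The sketch below walks the tree by stopping at the first heads (just as the machine is stopped at the first contact), and the fraction of runs that ever see heads should approach $1 - 0.5^2 = 0.75$ for two fair tosses:

```python
import random

rng = random.Random(1)

def at_least_one_heads(n_tosses, p_heads, rng):
    """Walk the tree: toss until the first heads, then stop
    (analogous to halting the machine on contact)."""
    for _ in range(n_tosses):
        if rng.random() < p_heads:
            return True
    return False

trials = 100_000
hits = sum(at_least_one_heads(2, 0.5, rng) for _ in range(trials))
estimate = hits / trials  # should be close to 1 - 0.5**2 = 0.75
```

Replacing the constant `p_heads` with the per-approach $P_i$ values reproduces the $1-\prod(1-P_i)$ result from the question.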