I've been working on some basic probability problems. Two results that can be proved, for finite or 'nice' (i.e. all the sums converge nicely) event spaces, by summing over one or more random variables are:
(1) Suppose P is a probability measure and X, Y, Z are random variables whose joint distribution factorises as P(X,Y,Z) = phi_1(X,Z)*phi_2(Y,Z) for some non-negative functions phi_1, phi_2. Then X is independent of Y given Z under P. (Similar to the proof of Lemma 4.2.7 at http://www.math.ntu.edu.tw/~hchen/teaching/StatInference/notes/lecture23.pdf)
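As a quick sanity check of (1) in the finite case, here is a small numerical sketch: it builds a joint distribution from two arbitrary non-negative factors (the arrays `phi1`, `phi2` and all dimensions are hypothetical choices, not from the lemma itself) and verifies P(x,y|z) = P(x|z)P(y|z) for every z.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical non-negative factors phi_1(x, z) and phi_2(y, z) on small finite ranges.
nx, ny, nz = 3, 4, 2
phi1 = rng.random((nx, nz))   # phi_1(x, z)
phi2 = rng.random((ny, nz))   # phi_2(y, z)

# Joint P(x, y, z) proportional to phi_1(x, z) * phi_2(y, z), normalised.
joint = phi1[:, None, :] * phi2[None, :, :]
joint /= joint.sum()

# Marginals needed for the conditional-independence check.
p_z = joint.sum(axis=(0, 1))   # P(z)
p_xz = joint.sum(axis=1)       # P(x, z)
p_yz = joint.sum(axis=0)       # P(y, z)

# Check P(x, y | z) == P(x | z) * P(y | z) for every value of z.
for z in range(nz):
    lhs = joint[:, :, z] / p_z[z]
    rhs = np.outer(p_xz[:, z] / p_z[z], p_yz[:, z] / p_z[z])
    assert np.allclose(lhs, rhs)
```

Of course, a numeric check on random factors is not a proof; it only illustrates that the factorisation forces the conditional independence in the finite case.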
(2) If X is independent of (Y, W) given Z (i.e. P(X|W,Y,Z) = P(X|Z)), then X is independent of Y given Z. (Here, start from the hypothesis and sum over w; the conclusion drops out after a few lines.)
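Result (2) can be illustrated the same way. The sketch below constructs a joint in which X is independent of (Y, W) given Z by construction (the factor shapes and names are hypothetical), then marginalises out w and checks that X is independent of Y given Z on what remains:

```python
import numpy as np

rng = np.random.default_rng(1)

nx, ny, nw, nz = 2, 3, 2, 2
# Build a joint satisfying X indep (Y, W) | Z by construction:
# P(x, y, w, z) = P(z) * P(x | z) * P(y, w | z).
p_z = rng.random(nz); p_z /= p_z.sum()
p_x_given_z = rng.random((nx, nz)); p_x_given_z /= p_x_given_z.sum(axis=0)
p_yw_given_z = rng.random((ny, nw, nz)); p_yw_given_z /= p_yw_given_z.sum(axis=(0, 1))

joint = (p_x_given_z[:, None, None, :]
         * p_yw_given_z[None, :, :, :]
         * p_z[None, None, None, :])   # axes: x, y, w, z

# Sum over w, then check X indep Y | Z on the (x, y, z) marginal.
p_xyz = joint.sum(axis=2)
p_xz = p_xyz.sum(axis=1)   # P(x, z)
p_yz = p_xyz.sum(axis=0)   # P(y, z)
for z in range(nz):
    lhs = p_xyz[:, :, z] / p_z[z]
    rhs = np.outer(p_xz[:, z] / p_z[z], p_yz[:, z] / p_z[z])
    assert np.allclose(lhs, rhs)
```

The summation over w in the code is exactly the "sum over w" step in the pencil-and-paper proof, which is what makes me wonder whether the step generalises.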
Now, there might be elegant particular ways to prove either of these, and those are welcome. But my main question is whether the approach common to both (and probably to lots of other results) of summing/integrating over the possible values of the random variables can be converted into a more powerful proof method that works without having to worry about whether the sums converge nicely.
For example, instead of proving directly by summing, maybe there is a strictly more general method involving proof by contradiction: show that if the conclusion fails for some tuple of values of X, (W,) Y, Z, then something breaks?
Of course, maybe the results simply don't hold for non-nice event spaces, or maybe the generalisations require measure theory to state rigorously (perhaps something like: all but countably many events have zero measure, and we may ignore them, since otherwise the marginal sum would exceed unity). But even a negative result, or a pointer to the relevant machinery in the deeper theory, would be welcome. Cheers.