Rationale behind Bayes rule for continuous random variables


I was working on a probability problem involving two independent random variables $X, Y$, each exponentially distributed with parameter $\lambda$, and a random variable $Z$ defined by $Z = X + Y$.

I solved the problem and found $f_{X|Z}(x|z)$: $$f_{X|Z}(x|z) = \frac{\lambda^2 e^{-\lambda z}}{f_Z(z)}, \qquad 0 \le x \le z,$$ and since $f_Z(z) = \lambda^2 z e^{-\lambda z}$, this simplifies to $f_{X|Z}(x|z) = 1/z$ on $[0, z]$. I got the correct answer, but I could not understand the meaning of the result: how does knowing the value of $Z$ make the conditional distribution of $X$ uniform, i.e. all outcomes equally likely? (Indeed, $f_{X|Z}(x|z)$ does not depend on $x$.) Could anyone give a good reason why this happens?
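This can also be checked numerically. The sketch below (with arbitrarily chosen values $\lambda = 1$ and $z = 2$, and a small acceptance window standing in for the exact conditioning on $Z = z$) rejection-samples pairs $(X, Y)$, keeps those with $X + Y \approx z$, and checks that the retained $X$ values spread evenly over $[0, z]$:

```python
import random

random.seed(0)
lam = 1.0        # rate parameter lambda (arbitrary choice)
z_target = 2.0   # condition on Z = X + Y near this value
tol = 0.05       # acceptance window around z_target

# Rejection sampling: keep X whenever X + Y lands close to z_target.
accepted = []
while len(accepted) < 10000:
    x = random.expovariate(lam)
    y = random.expovariate(lam)
    if abs(x + y - z_target) < tol:
        accepted.append(x)

# If X | Z = z is uniform on [0, z], each third of [0, z] should
# receive about one third of the accepted samples.
thirds = [sum(1 for x in accepted
              if i * z_target / 3 <= x < (i + 1) * z_target / 3)
          for i in range(3)]
print([round(c / len(accepted), 3) for c in thirds])
```

Each third of $[0, z]$ receives roughly a third of the conditioned samples, consistent with the constant density $1/z$.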

1 Answer

I don't have a good argument for the continuous case, but here is one for the discrete analogue; hopefully it gives an idea of why this happens.

A discrete counterpart of the exponential distribution is the geometric one, which models the trial number of the first success (heads) when flipping a (possibly unfair) coin repeatedly and independently. So let $X$ and $Y$ be independent and geometrically distributed with the same success probability $p$. Then $X$ and $Z = X + Y$ can be understood as the trial numbers of the first and second success in a sequence of coin flips. The question becomes: given that the second success occurs on the $n$th flip, what is the distribution of the first? By the complete symmetry among the trials, the first success is equally likely to occur on the 1st, 2nd, ..., $(n-1)$th flip, so it is uniformly distributed on $\{1, 2, \dots, n-1\}$.
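This discrete argument can also be verified exactly from the geometric pmf $P(X = k) = (1-p)^{k-1}p$: the joint probability $P(X = k,\ Z = n) = p^2(1-p)^{n-2}$ does not depend on $k$, so conditioning on $Z = n$ gives the uniform value $1/(n-1)$. A small sketch (with arbitrarily chosen $p = 1/3$ and $n = 7$) makes this concrete, using exact rational arithmetic:

```python
from fractions import Fraction

p = Fraction(1, 3)   # any success probability works; 1/3 is an arbitrary pick
n = 7                # condition on the second success landing on flip n

def geom_pmf(k, p):
    """P(first success on flip k) for a geometric random variable."""
    return (1 - p) ** (k - 1) * p

# Joint probability P(X = k, X + Y = n) = P(X = k) * P(Y = n - k),
# by independence of X and Y; k can range over 1, ..., n-1.
joint = {k: geom_pmf(k, p) * geom_pmf(n - k, p) for k in range(1, n)}
total = sum(joint.values())

# Conditional pmf P(X = k | Z = n): the same value for every k.
cond = {k: v / total for k, v in joint.items()}
print(cond)
```

Every conditional probability comes out as the same fraction $1/(n-1)$, and this holds regardless of the choice of $p$.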