Computing a conditional distribution when there is no joint density


Using the measure-theoretic definition of conditional expectation, one can show that if the random vectors $X\in R^m$ and $Y\in R^n$ have a joint density with respect to Lebesgue measure on $R^{m+n}$, then the conditional distribution $P_Y(\cdot\,|X=x)$ has a density with respect to Lebesgue measure on $R^n$, namely $f_{Y|X=x}(y)=\frac{f_{X,Y}(x,y)}{f_X(x)}$ whenever the denominator is positive (and, say, $0$ otherwise).

However, how do we go about computing an explicit form of the conditional density, or at least the conditional distribution, when $X$ and $Y$ do not have a joint density? For example:

(1). Suppose $X_1, ..., X_n$ are iid $N(\mu, \sigma^2)$ random variables and $T(X):=\sum_{i=1}^n X_i^2$. When $\mu=0$, rotational invariance makes it intuitive that $P((X_1, ..., X_n)\in A|T(X)=t)$ is the uniform distribution on the sphere of radius $\sqrt{t}$ (for $\mu\neq 0$ the conditional law still concentrates on that sphere but is no longer uniform). How do we prove this rigorously?

(I know one can show that a regular conditional probability exists in this case and that the distribution concentrates on the level set $\{T(X)=t\}$, but I do not know how one could deduce it explicitly. Do a change of variables and the Jacobian help? If so, how?)
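Not a proof, but a Monte Carlo sanity check of the uniform-on-the-sphere claim is easy: approximate the event $\{T(X)=t\}$ by a thin slab $\{|T(X)-t|<\varepsilon\}$ and test whether the normalized accepted samples look uniform on the unit sphere. A minimal sketch in Python/NumPy, assuming $\mu=0$; the values of `n`, `t`, `eps` and the sample size are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n, t, eps = 3, 3.0, 0.05                    # dimension, level t, slab half-width
X = rng.standard_normal((1_000_000, n))     # mu = 0: the rotationally symmetric case
T = (X**2).sum(axis=1)
cond = X[np.abs(T - t) < eps]               # approximate conditioning on {T(X) = t}
dirs = cond / np.linalg.norm(cond, axis=1, keepdims=True)

# Uniformity on the sphere would give E[dir] = 0 and E[dir dir^T] = I/n.
mean_dir = dirs.mean(axis=0)
cov_dir = dirs.T @ dirs / len(dirs)
print(np.abs(mean_dir).max(), np.abs(cov_dir - np.eye(n) / n).max())
```

Both printed deviations should be small (for $\mu=0$ the direction $X/\|X\|$ is in fact exactly independent of $\|X\|$, so the slab introduces no bias here); rerunning with a nonzero mean in `standard_normal` replaced by `rng.normal(1.0, 1.0, ...)` makes the first deviation visibly large, consistent with the caveat above.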

(2). Suppose $U_1, ..., U_n$ are iid $Unif(a,b)$ random variables and $U_{(1)}$ is the smallest one and $U_{(n)}$ the largest. What is $P((U_1, ..., U_n)\setminus (U_{(1)}, U_{(n)}) \in A | U_{(1)}=l, U_{(n)}=u)$?

(where $(U_1, ..., U_n)\setminus (U_{(1)}, U_{(n)})$ denotes the $(n-2)$-dimensional vector obtained from $(U_1, ..., U_n)$ by removing the smallest and largest entries; this is well defined almost surely since ties occur with probability zero)
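It is natural to conjecture that, conditionally on $U_{(1)}=l$ and $U_{(n)}=u$, the remaining $n-2$ values are iid $Unif(l,u)$. Again not a derivation, but a slab-conditioned Monte Carlo check of this conjecture for $n=3$, where the single remaining value should then be $Unif(l,u)$; the constants `l`, `u`, `eps` are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
n, a, b = 3, 0.0, 1.0
l, u, eps = 0.2, 0.8, 0.02                  # conditioning values (arbitrary)
U = rng.uniform(a, b, size=(2_000_000, n))
S = np.sort(U, axis=1)
keep = (np.abs(S[:, 0] - l) < eps) & (np.abs(S[:, -1] - u) < eps)
mid = S[keep, 1]                            # the one remaining value when n = 3

# If the conjecture holds, mid ~ Unif(l, u): mean (l+u)/2, half the mass below the midpoint.
print(mid.mean(), (mid < (l + u) / 2).mean())
```

Both printed numbers should be close to $0.5$ for these parameter choices; a finer check would compare the whole empirical CDF of `mid` against the $Unif(l,u)$ CDF.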

(3). Suppose $Y_1, ..., Y_n$ are iid $N(\mu, \sigma^2)$ random variables and $T(Y):=\sum_{i=1}^n Y_i$. What is $P((Y_1, ..., Y_n)\in A|T(Y)=t)$?
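For (3), the answer one expects is that, conditionally on $T(Y)=t$, the vector $(Y_1, ..., Y_n)$ is Gaussian supported on the hyperplane $\{\sum_i y_i=t\}$, with each coordinate having mean $t/n$, variance $\sigma^2(1-1/n)$, and pairwise covariance $-\sigma^2/n$; notably this does not depend on $\mu$, reflecting the sufficiency of $\sum_i Y_i$. A slab-conditioned Monte Carlo sketch consistent with that claim (all parameter values arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n, mu, sigma, t, eps = 3, 1.0, 1.0, 0.0, 0.05   # arbitrary choices
Y = rng.normal(mu, sigma, size=(2_000_000, n))
cond = Y[np.abs(Y.sum(axis=1) - t) < eps]       # approximate conditioning on {T(Y) = t}

# Predicted conditional law: coordinate mean t/n, Var = sigma^2 (1 - 1/n),
# Cov(Y_i, Y_j) = -sigma^2 / n for i != j, with no dependence on mu.
print(cond[:, 0].mean(), cond[:, 0].var(), np.cov(cond[:, 0], cond[:, 1])[0, 1])
```

With these parameters the three printed values should be near $0$, $2/3$, and $-1/3$ respectively, even though the unconditional mean $\mu=1$ is far from $t/n=0$.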