Any idea why I keep getting $3/4$?
Let $X_{1}, X_{2}, X_{3}$ be independent, $U(0, 1)$-distributed random variables and let $X_{(1)}, X_{(2)}, X_{(3)}$ be the corresponding order statistics.
Compute (a): $P(X_{(1)} + X_{(3)} \le 1)$. I can't seem to get the book answer, which is $0.5$.
I'm proceeding as follows:
I set $$U=X_{(1)} + X_{(3)}, \qquad V=X_{(1)},$$ so that $f_{U,V}(u,v)=f_{X_{(1)},X_{(3)}}(x_1,x_{3})\cdot|J|=\left(\int_0^{x_3}6\,dx_2\right)\cdot|J|$ with $|J|=1$.
I get that $f_U(u)=\int_0^{u/2}6(u-v)dv$
After the calculation I get $f_{U}(u)=\frac{9}{4}u^2$ and $F_U(1)=3/4$, which differs from the $1/2$ in the book.
Update: I think I made an error when finding the range of $x_2$. We know that $0<x_1<x_2<x_3<1$. When integrating with respect to $x_2$, I get confused about why in this case we take $x_1<x_2<x_3$, as opposed to cases where we skip over some of the variables.
For example, with the same setting as above but one more variable and order statistic, the range is $0<x_1<x_2<x_3<x_4<1$, and when looking for $f_{X_{(3)},X_{(4)}}(x_3,x_4)$ we use $$f_{X_{(3)},X_{(4)}}(x_3,x_4)=\int_0^{x_3} \int_0^{x_2} f_{X_{(1)},X_{(2)},X_{(3)},X_{(4)}}(x_1,x_2,x_3,x_4) \,dx_1\,dx_2$$ instead of $$f_{X_{(3)},X_{(4)}}(x_3,x_4)= \int_{x_2}^{x_3} \int_0^{x_2} f_{X_{(1)},X_{(2)},X_{(3)},X_{(4)}}(x_1,x_2,x_3,x_4) \,dx_1\,dx_2$$
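For what it's worth, a quick Monte Carlo sanity check (a Python sketch of my own, not from the book) reproduces the book's $0.5$ rather than my $3/4$:

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

N = 200_000
count = 0
for _ in range(N):
    xs = sorted(random.random() for _ in range(3))
    # xs[0] is X_(1) (the minimum), xs[2] is X_(3) (the maximum)
    if xs[0] + xs[2] <= 1:
        count += 1

print(count / N)  # ≈ 0.5, in line with the book's answer
```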
The order statistics are not at all independent; for example, $P(X_{(1)} > X_{(2)})=0$, and similar relations hold between the other order statistics.
To get you started, let's find the joint distribution of $X_{(1)},X_{(3)}$, that is,
$$p(x_1,x_3) = P(X_{(1)} < x_1 \land X_{(3)} < x_3),$$
where we restrict $x_1,x_3$ to the interval $[0,1]$ with $x_1 \le x_3$.
The trick is to express this as $p(x_1,x_3) = P(X_{(3)} < x_3)-P(X_{(1)} \ge x_1 \land X_{(3)} < x_3) $ and both terms on the right hand side are relatively easy to evaluate!
$X_{(3)} < x_3$ is equivalent to $(X_1 < x_3 \land X_2 < x_3 \land X_3 < x_3)$, just remember that $X_{(3)}$ is the maximum of $X_1,X_2$ and $X_3$.
Since the 3 conditions concern different and independent variables, we get
$$P(X_{(3)} < x_3) = P(X_1 < x_3 \land X_2 < x_3 \land X_3 < x_3)=P(X_1 < x_3)P(X_2 < x_3)P(X_3 < x_3) = x_3^3.$$
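If you want to convince yourself numerically, here is a small simulation sketch (the test point $t=0.7$ is an arbitrary choice of mine) comparing the empirical CDF of the maximum against $x_3^3$:

```python
import random

random.seed(2)  # reproducible run

N = 200_000
t = 0.7  # arbitrary test point in (0, 1)
# empirical P(X_(3) < t): fraction of samples whose maximum is below t
hits = sum(max(random.random() for _ in range(3)) < t for _ in range(N))

print(hits / N, t ** 3)  # the two values should agree to ~2 decimals
```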
By the same argument, applied to the minimum, $X_{(1)} \ge x_1$ is equivalent to $(X_1 \ge x_1 \land X_2 \ge x_1 \land X_3 \ge x_1)$, so we get
$$\begin{array}{rcl} (X_{(1)} \ge x_1 \land X_{(3)} < x_3) & = & (X_1 \ge x_1 \land X_2 \ge x_1 \land X_3 \ge x_1) \land (X_1 < x_3 \land X_2 < x_3 \land X_3 < x_3) \\ & = & (X_1 \ge x_1 \land X_1 < x_3) \land \\ & & (X_2 \ge x_1 \land X_2 < x_3) \land \\ & & (X_3 \ge x_1 \land X_3 < x_3), \\ \end{array} $$ by using the fact that $\land$ is commutative and associative. Each line in the last equation deals with one variable, and the variable in each line is independent of those in the other lines, so we can again use the product rule to find its probability:
$$P(X_{(1)} \ge x_1 \land X_{(3)} < x_3) = \\P(X_1 \ge x_1 \land X_1 < x_3)P(X_2 \ge x_1 \land X_2 < x_3)P(X_3 \ge x_1 \land X_3 < x_3)=(x_3-x_1)^3.$$
Combining the 2 results, we finally get the joint probability:
$$p(x_1,x_3)=x_3^3 - (x_3-x_1)^3=x_1^3-3x_1^2x_3+3x_1x_3^2$$
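This joint CDF can also be spot-checked by simulation; the sketch below (the point $(0.3, 0.8)$ is an arbitrary choice) compares the empirical frequency against the formula:

```python
import random

random.seed(3)  # reproducible run

N = 200_000
x1, x3 = 0.3, 0.8  # arbitrary test point with x1 <= x3
hits = 0
for _ in range(N):
    sample = [random.random() for _ in range(3)]
    # the event {X_(1) < x1 and X_(3) < x3} = {min < x1 and max < x3}
    if min(sample) < x1 and max(sample) < x3:
        hits += 1

exact = x3 ** 3 - (x3 - x1) ** 3  # the formula derived above
print(hits / N, exact)  # the two values should agree to ~2 decimals
```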
The remainder is standard technique: The density of the joint distribution of $X_{(1)},X_{(3)}$ is
$$\frac{\partial^2 p(x_1,x_3)}{\partial x_1\,\partial x_3} = 6(x_3-x_1)$$
and then you have to calculate the double integral
$$\int_0^{1\over 2}\int_{x_1}^{1-x_1} 6(x_3-x_1)dx_3dx_1.$$
Note that the lower boundary of the $x_3$ integral is $x_1$, as there is no mass in the distribution at a point where the maximum of 3 values would be smaller than the minimum of those 3 values. That's also why the $x_1$ integral stops at $1 \over 2$: if the minimum exceeds $1 \over 2$, the sum of minimum and maximum must be greater than $1$.
Doing the integrals is not complicated and the final result is actually $1 \over 2$.
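If you don't trust the hand calculation, the double integral can be checked numerically. The sketch below uses the closed form $3(1-2x_1)^2$ of the inner integral (evaluate $3(x_3-x_1)^2$ from $x_3=x_1$ to $x_3=1-x_1$) and a midpoint rule for the outer one:

```python
# Midpoint rule for the outer integral over x1 in (0, 1/2); the inner
# integral of 6 * (x3 - x1) over x3 in (x1, 1 - x1) equals 3 * (1 - 2 * x1)**2.
n = 400
h = 0.5 / n
total = 0.0
for i in range(n):
    x1 = (i + 0.5) * h  # midpoint of the i-th subinterval
    total += 3 * (1 - 2 * x1) ** 2 * h

print(total)  # ≈ 0.5
```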
I assume there is some symmetry argument that gives the result more easily, but I couldn't find a convincing one.