Let $X$ and $Y$ be independent random variables having cumulative distribution functions $F(t)$ and $G(t)$ respectively. Suppose that $(1-G(t))=(1-F(t))^\alpha, \forall t>0$ and $\alpha>0$. Prove that $Z=\min(X,Y)$ and $\delta = \begin{cases}1, &X\le Y \\ 0, & X>Y\end{cases}$ are independent random variables.
I have no clue; it seems to use a lot of concepts at once. Please help.
$Z$ and $\delta$ are independent iff
$$P(Z>z \;\cap\;\delta=d) = P(Z>z)P(\delta=d)\quad\forall z >0,d \in \{0,1\}$$
First let's calculate the individual event probabilities: $$P(Z>z) = P(X>z \; \cap\; Y>z) = (1-F(z))(1-G(z)) = (1-F(z))(1-F(z))^{\alpha}=(1-F(z))^{\alpha+1}$$
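As a quick sanity check on this formula (the exponential choice here is an assumption for illustration, not part of the problem): taking $F(t)=1-e^{-t}$ gives $1-G(t)=(1-F(t))^{\alpha}=e^{-\alpha t}$, so $Y$ is exponential with rate $\alpha$, and
$$P(Z>z)=(1-F(z))^{\alpha+1}=e^{-(\alpha+1)z},$$
i.e. $Z$ is itself exponential with rate $\alpha+1$.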
That one was pretty straightforward. Here's where the peculiar distributional form given in the problem comes in handy: $$P(\delta =1) =P(X\leq Y) =\int_0^{\infty}\int_x^\infty p(x,y)\;dy\,dx$$ where $p$ is the joint density of $(X,Y)$.
By independence of $X$ and $Y$ we can split the joint density:
$$\int_0^{\infty}\int_x^\infty p(x,y)\;dy\,dx = \int_0^{\infty}\int_x^\infty f(x)g(y)\;dy\,dx$$
where $f$ and $g$ are the densities of $X$ and $Y$.
The integrand nicely separates by variable, which allows us to recast it in terms of the CDFs provided in the problem:
$$\int_0^{\infty}f(x)\left(\int_x^\infty g(y)\;dy\right)dx=\int_0^{\infty}f(x)\left[1-G(x)\right]\;dx$$
The last step uses the survival function of $Y$: $\int_x^\infty g(y)\,dy = 1-G(x)$. We can go one step further and use the given relationship between $1-G(t)$ and $1-F(t)$:
$$\int_0^{\infty}f(x)\left[1-F(x)\right]^{\alpha}\;dx$$
Even though we don't know the functional form of $F(x)$, the integrand has the form $f(x)\left(1-F(x)\right)^{\alpha}$, which invites the $u$-substitution:
$$u=1-F(x) \implies du=-f(x)dx$$
This turns the integral into something we can solve explicitly. Since the problem only concerns $t>0$, we have $F(0)=0$, so $u$ runs from $1$ (at $x=0$) down to $0$ (as $x\to\infty$), and the minus sign in $du$ flips the limits back:
$$\int_0^{\infty}f(x)\left[1-F(x)\right]^{\alpha}dx=\int_0^1 u^{\alpha} \;du = \left[\frac{u^{\alpha+1}}{\alpha+1}\right]_0^1 = \frac{1}{1+\alpha} $$
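A quick Monte Carlo sketch of the result $P(X\leq Y)=\frac{1}{1+\alpha}$, using the exponential case as a concrete assumption (the problem itself never fixes $F$): with $F(t)=1-e^{-t}$ we get $1-G(t)=e^{-\alpha t}$, i.e. $Y$ is exponential with rate $\alpha$.

```python
import random

# Assumption for illustration: F(t) = 1 - exp(-t), so X ~ Exp(rate 1)
# and 1 - G(t) = (1 - F(t))^alpha = exp(-alpha*t), i.e. Y ~ Exp(rate alpha).
random.seed(0)
alpha = 2.5
n = 200_000

count = 0
for _ in range(n):
    x = random.expovariate(1.0)    # draw X
    y = random.expovariate(alpha)  # draw Y
    if x <= y:
        count += 1

p_hat = count / n
print(p_hat, 1 / (1 + alpha))  # empirical vs. theoretical P(X <= Y)
```

With $\alpha=2.5$ both numbers should land near $1/3.5\approx 0.286$; the estimate is independent of which $F$ you pick, which is exactly what the $u$-substitution shows.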
Finally, we need to show $P(Z>z \;\cap\;\delta=d) = P(Z>z)P(\delta=d)$. Since $\delta$ takes only two values, it suffices to check that $P(Z>z \;\cap\;X \leq Y) = P(Z>z)P(X \leq Y)$ and then rely on the following fact from probability theory:
$$P(A\;\cap\;B) = P(A)P(B) \implies P(A\;\cap\;B^c) = P(A)P(B^c)$$
Simple proof:
$$P(A) = P(A\;\cap\;B) + P(A\;\cap\;B^c) \implies P(A\;\cap\;B^c) = P(A) - P(A\;\cap\;B)=P(A) - P(A)P(B) = P(A)(1-P(B))=P(A)P(B^c) \qquad \square$$
Calculating $P(Z>z\;\cap\;X\leq Y)$:
The argument goes very much like the general calculation for $P(X\leq Y)$, except the lower limit of integration for $X$ changes: on the event $\{X\leq Y\}$ we have $Z=X$, so
$$P(Z>z\;\cap\;X\leq Y) = P(X>z\;\cap\;X\leq Y)= \int_z^{\infty}\int_x^\infty f(x)g(y)\;dy\,dx$$
The same $u$-substitution applies; now $x=z$ gives $u=1-F(z)$ and $x\to\infty$ gives $u=0$, so the integral evaluates to
$$\int_0^{1-F(z)}u^{\alpha}\;du=\left[\frac{u^{\alpha+1}}{\alpha+1}\right]_0^{1-F(z)}$$
Carrying out the calculation gets us:
$$\frac{(1-F(z))^{\alpha+1}}{\alpha+1} = P(Z>z)P(X\leq Y)\;\quad \square$$
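The whole independence claim can also be checked empirically. This is a hedged sketch under the same illustrative assumption as before ($F(t)=1-e^{-t}$, so $Y$ is exponential with rate $\alpha$), comparing $P(Z>z,\;\delta=1)$ against $P(Z>z)\,P(\delta=1)$:

```python
import random

# Assumption for illustration: X ~ Exp(rate 1), Y ~ Exp(rate alpha),
# which satisfies (1 - G(t)) = (1 - F(t))^alpha.
random.seed(1)
alpha = 2.5
z = 0.3
n = 200_000

both = z_gt = d1 = 0
for _ in range(n):
    x = random.expovariate(1.0)
    y = random.expovariate(alpha)
    zmin = min(x, y)       # Z = min(X, Y)
    delta = x <= y         # delta = 1 iff X <= Y
    z_gt += zmin > z
    d1 += delta
    both += (zmin > z) and delta

lhs = both / n             # P(Z > z, delta = 1)
rhs = (z_gt / n) * (d1 / n)  # P(Z > z) * P(delta = 1)
print(lhs, rhs)
```

Both estimates should agree (here near $e^{-(\alpha+1)z}/(\alpha+1)\approx 0.100$), matching the closed form derived above.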