I am trying to understand an example from my textbook. I know that it involves conditional probability and joint densities, but I don't know which results to use to solve it.
Let $Z$ be a random variable with distribution $\Gamma \left( z \right) = (2,1)$ and $X$ another random variable whose conditional distribution given $Z=z$ is $U[0,z]$. Calculate: $$ P(Z \geq 2\mid X\leq 1). $$
Another question: how do I tell whether these variables are continuous or discrete? I appreciate your help.
Why do you write the gamma distribution as $\Gamma \left( z \right) = (2,1)$? Simply write $Z\sim \Gamma(2,1)$.
Bayes' rule tells you that $$P(Z\geq 2 | X\leq 1)=\frac{P(Z\geq 2,X\leq 1)}{P(X\leq 1)}.$$
The joint density of $(X,Z)$ is given by $$f_{(X,Z)}(x,z) = f_{X|Z}(x|z)f_Z(z),$$ where $f_{X|Z}(x|z) = \frac{1}{z} {\bf 1}_{[0,z]}(x)$ and $f_Z(z)=ze^{-z}{\bf 1}_{[0,\infty)}(z)$. Hence, $$P(Z\geq 2, X\leq 1) = \int_{2}^\infty \int_{0}^1 \frac{1}{z} {\bf 1}_{[0,z]}(x)ze^{-z}{\bf 1}_{[0,\infty)}(z)\, dxdz=\int_{2}^\infty \int_{0}^1 e^{-z}\,dxdz =e^{-2}.$$ In the last step we used that $$\int_0^1 {\bf 1}_{[0,z]}(x)\, dx = 1 \quad \mbox{since } z\geq 2.$$
On the other hand, $$P(X\leq 1)= \int_0^\infty \int_0^1 \frac{1}{z} {\bf 1}_{[0,z]}(x)ze^{-z}{\bf 1}_{[0,\infty)}(z)\, dxdz = \int_0^\infty \int_0^1 {\bf 1}_{[0,z]}(x)e^{-z}\, dxdz =$$ $$=\int_0^1 \int_0^1 {\bf 1}_{[0,z]}(x)e^{-z}\, dxdz + \int_1^\infty \int_0^1 {\bf 1}_{[0,z]}(x)e^{-z}\, dxdz =$$ $$=\int_0^1 ze^{-z}\, dz + \int_1^\infty e^{-z}\, dz =1-2e^{-1}+e^{-1} = 1-e^{-1},$$ where this time the inner integral gives $\int_0^1 {\bf 1}_{[0,z]}(x)\, dx = z$ when $0\leq z\leq 1$, and $1$ when $z\geq 1$.
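If you want to double-check both integrals numerically, here is a short Python sketch (assuming NumPy/SciPy are available; the variable names are mine). After carrying out the inner $x$-integral, the $z$-integrand in both cases is $\min(1,z)\,e^{-z}$, and only the lower limit differs:

```python
import math
from scipy import integrate

# z-integrand after the inner x-integral:
# ∫_0^1 (1/z) 1_{[0,z]}(x) dx = min(1, z)/z, so the integrand is min(1, z) e^{-z}.
integrand = lambda z: min(1.0, z) * math.exp(-z)

p_joint, _ = integrate.quad(integrand, 2.0, math.inf)  # P(Z >= 2, X <= 1)
p_x, _ = integrate.quad(integrand, 0.0, math.inf)      # P(X <= 1)

print(p_joint, math.exp(-2))        # both ≈ 0.1353
print(p_x, 1.0 - math.exp(-1))      # both ≈ 0.6321
```

This agrees with the closed forms $e^{-2}$ and $1-e^{-1}$ above.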
Altogether,
$$P(Z\geq 2 \mid X\leq 1)=\frac{P(Z\geq 2, X\leq 1)}{P(X\leq 1)}=\frac{e^{-2}}{1-e^{-1}} = \frac{1}{e(e-1)}.$$
The idea and procedure are clear: use Bayes' rule, find the joint and marginal densities, and then integrate. The only tricky part is being careful with the ranges of $X$ and $Z$ and the integration regions. As for your second question: both variables here are continuous, since each has a density ($Z$ has the Gamma density, and given $Z=z$, $X$ has the uniform density on $[0,z]$). I hope this helps, good luck!
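As a final sanity check, you can simulate the hierarchical model directly and estimate the conditional probability by Monte Carlo (a sketch assuming NumPy; sample size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2_000_000

# Simulate Z ~ Gamma(shape=2, scale=1), then X | Z=z ~ U[0, z].
z = rng.gamma(2.0, 1.0, size=n)
x = rng.uniform(0.0, z)

# Empirical estimate of P(Z >= 2 | X <= 1): restrict to samples with X <= 1.
estimate = (z[x <= 1.0] >= 2.0).mean()
exact = 1.0 / (np.e * (np.e - 1.0))
print(estimate, exact)  # both close to 0.214
```

With this many samples the estimate should match $1/(e(e-1)) \approx 0.214$ to about three decimal places.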