I have seen the following expression in many research papers:
$$P_o = \Pr\left(\gamma_r<\gamma_s(1+\gamma_e)-1\right)\tag 1$$
where $\gamma_r,\gamma_e$ are random variables and all other quantities are constants. $F_{\gamma_r}(\cdot)$ is the CDF of $\gamma_r$ and $f_{\gamma_e}(\cdot)$ is the PDF of $\gamma_e$.
$$P_o = \int_0^{\infty}F_{\gamma_r}(\gamma_s(1+\gamma_e)-1)\cdot f_{\gamma_e}(\gamma_e)\,\mathrm d\gamma_e\tag 2$$
My question is how (2) follows from (1). I would appreciate any help.
Reusing $\gamma_e$ for both the random variable and the variable of integration is likely the source of confusion; you should use a fresh symbol for the bound variable:
$$\begin{align}P_o &=\mathbb P(\gamma_r\leqslant \gamma_s(1+\gamma_e)-1)\\[1ex] &=~ \int_0^{\infty}\mathbb P({\gamma_r}\leqslant\gamma_s(1+\mathfrak z)-1)\cdot f_{\gamma_e}(\mathfrak z)\,\textrm{d}\mathfrak z \\[1ex] &=~ \int_0^{\infty}F_{\gamma_r}(\gamma_s(1+\mathfrak z)-1)\cdot f_{\gamma_e}(\mathfrak z)\,\textrm{d}\mathfrak z\tag 2\end{align}$$
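In more detail, the middle equality conditions on $\gamma_e$ and applies the tower property:
$$\begin{align}\mathbb P(\gamma_r\leqslant \gamma_s(1+\gamma_e)-1)~&=~\mathbb E\big[\,\mathbb P(\gamma_r\leqslant \gamma_s(1+\gamma_e)-1\mid \gamma_e)\,\big]\\[1ex]&=~\int_0^{\infty}\mathbb P(\gamma_r\leqslant \gamma_s(1+\mathfrak z)-1\mid \gamma_e=\mathfrak z)\cdot f_{\gamma_e}(\mathfrak z)\,\mathrm d\mathfrak z,\end{align}$$
and the independence of $\gamma_r$ and $\gamma_e$ lets the conditioning be dropped, leaving the unconditional CDF $F_{\gamma_r}$ inside the integral.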
This is just the law of total probability for independent, continuously distributed random variables.
Let $X,Y$ be any such random variables, and $a,b$ be constants, then: $$\begin{align}\mathbb P(Y\leqslant aX+b)~&=~\int_\Bbb R \mathbb P(Y\leqslant ax+b)\cdot f_X(x)\,\mathrm dx\\[1ex]&=\int_\Bbb R F_Y(ax+b)\cdot f_X(x)\,\mathrm d x\end{align}$$
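As a sanity check, the identity can be verified numerically. The sketch below (in Python, using only the standard library) picks $X,Y\sim\text{Exp}(1)$ independent and arbitrary constants $a=2$, $b=1$ — none of these choices come from the question — and compares a Monte Carlo estimate of $\mathbb P(Y\leqslant aX+b)$ with a numerical evaluation of $\int_0^\infty F_Y(ax+b)\,f_X(x)\,\mathrm dx$:

```python
import math
import random

random.seed(0)

a, b = 2.0, 1.0  # arbitrary constants for the check

# Monte Carlo estimate of P(Y <= a*X + b), with X, Y ~ Exp(1) independent
n = 200_000
hits = 0
for _ in range(n):
    x = random.expovariate(1.0)
    y = random.expovariate(1.0)
    hits += y <= a * x + b
mc = hits / n

# Right-hand side: integrate F_Y(a*x + b) * f_X(x) over x >= 0.
# For Exp(1): F_Y(t) = 1 - exp(-t) for t >= 0, and f_X(x) = exp(-x).
def integrand(x):
    return (1.0 - math.exp(-(a * x + b))) * math.exp(-x)

# Simple trapezoidal rule on [0, 40]; the tail beyond is negligible.
steps = 100_000
h = 40.0 / steps
xs = [i * h for i in range(steps + 1)]
integral = h * (sum(integrand(x) for x in xs)
                - 0.5 * (integrand(xs[0]) + integrand(xs[-1])))

# Both should be close to the closed form 1 - exp(-b)/(1 + a) ~ 0.8774
print(mc, integral)
```

Here the closed-form value $1-\frac{e^{-b}}{1+a}$ follows from evaluating the integral directly, so all three quantities should agree to within Monte Carlo error.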