How can I motivate the derivation of $p$ below?
From: https://en.wikipedia.org/wiki/Logistic_regression#Logistic_model
$l = \log_b\frac{p}{1-p} = \beta_0 + \beta_1x_1 + \beta_2x_2$
$p = \frac{b^{\beta_0 + \beta_1x_1 + \beta_2x_2}}{b^{\beta_0 + \beta_1x_1 + \beta_2x_2} + 1} = \frac{1}{1 + b^{-(\beta_0 + \beta_1x_1 + \beta_2x_2)}}$
Given that:
$p$ = probability of the event $Y = 1$
$l$ = logit, or log-odds, of $p$
$x_i$ = predictors or independent variables
I don't understand how the expression $p = \frac{b^{\beta_0 + \beta_1x_1 + \beta_2x_2}}{b^{\beta_0 + \beta_1x_1 + \beta_2x_2} + 1} = \frac{1}{1 + b^{-(\beta_0 + \beta_1x_1 + \beta_2x_2)}}$ is derived from the logit equation.
You start by assuming a linear form for the log-odds: $$ \ln \left( \frac{p}{1-p} \right) = \beta^Tx $$ Exponentiating both sides gives $$ \frac{p}{1-p} = e^{\beta^Tx} $$ Multiplying through by $1-p$ and rearranging, $$ p = (1-p)e^{\beta^Tx} \to e^{\beta^Tx} - pe^{\beta^Tx} = p \to p(1+e^{\beta^Tx}) = e^{\beta^Tx} $$ Finally, $$ p = \frac{e^{\beta^Tx}}{1+e^{\beta^Tx}} $$ Dividing the numerator and denominator by $e^{\beta^Tx}$ gives the equivalent form $p = \frac{1}{1+e^{-\beta^Tx}}$. The same algebra works for any base $b$, since $\log_b$ is inverted by $b^{(\cdot)}$; the $b$ in Wikipedia's formula plays the role of $e$ here.
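The algebra above can be checked numerically: both closed forms of $p$ should agree, and feeding $p$ back through the log-odds should recover the linear predictor. Here is a minimal sketch with made-up coefficients (the values of `beta` and `x` are arbitrary, chosen only for illustration):

```python
import math

# Hypothetical coefficients and predictors, for illustration only.
beta = [0.5, -1.2, 0.8]   # beta_0, beta_1, beta_2
x = [1.0, 2.0, -0.5]      # x_0 = 1 for the intercept, then x_1, x_2

# Linear predictor: beta_0 + beta_1*x_1 + beta_2*x_2
z = sum(b * xi for b, xi in zip(beta, x))

# Form 1: e^z / (1 + e^z)
p1 = math.exp(z) / (1 + math.exp(z))

# Form 2: 1 / (1 + e^{-z})
p2 = 1 / (1 + math.exp(-z))

# Recover the log-odds from p; it should equal z
logit = math.log(p1 / (1 - p1))

print(p1, p2, logit, z)
```

Running this shows `p1` and `p2` agree to floating-point precision, and `logit` matches `z`, confirming that $p = e^z/(1+e^z) = 1/(1+e^{-z})$ inverts the log-odds.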