Suppose I have $n$ independent random variables $X_1, X_2, X_3, \dots, X_n$ such that $X_i =\begin{cases} 1 & \text{with probability } p_i \\[5pt] 0 & \text{with probability } 1-p_i \end{cases}$.
I would like to find $$P(X_j = 1 \ \vert \ \text{exactly one of the } X_i=1).$$
This is an abstraction of a likelihood function$^1$ I am trying to derive, but you can also think of it a bit like a lottery with a single prize, where everyone has a different chance of winning.
My source tells me it's $$\frac{p_j}{\sum_{i=1}^n p_i} \tag{1}$$
but I can't quite understand how they arrived at this.
I feel like it should be more like $$\frac{\frac{p_j}{1-p_j}}{\sum_{i=1}^n \frac{p_i}{1-p_i}} \tag{2}$$
however, the result is quite a famous one (Cox's partial likelihood function), so I'm pretty sure the answer should be $(1)$.
Is there some interpretation of the fractions $\frac{p_{j}}{1-p_j}$? Are they some sort of rescaled probabilities?
The actual context uses conditional densities $f_{X_i \vert X_i \ge x} (x)$ rather than probabilities. Might that resolve the discrepancy?
$^1$ Cox regression, partial likelihood function, Equation $(12)$
$p/(1-p)$ is called the *odds* (an *odds ratio* is a ratio of two such odds). Your intuition is correct; working with densities changes things a little bit. The "equivalent" of the odds with a continuum of outcomes is the hazard rate, and Eq $(12)$ of Cox's paper is just your $(2)$ adapted to a continuum of states. That $\lambda$ is not a probability itself but rather the hazard rate $\lambda(t) = f(t)/S(t) = -\frac{d}{dt}\log S(t)$, the negative derivative of the log of the survival function.
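To see why $(2)$ holds in the discrete case (this assumes the $X_i$ are independent), just write out the conditional probability and divide numerator and denominator by $\prod_{i=1}^n (1-p_i)$:

$$P(X_j = 1 \ \vert \ \text{exactly one } X_i = 1) = \frac{p_j \prod_{i \ne j} (1-p_i)}{\sum_{k=1}^{n} p_k \prod_{i \ne k} (1-p_i)} = \frac{\dfrac{p_j}{1-p_j}}{\sum_{k=1}^{n} \dfrac{p_k}{1-p_k}}.$$

Each term in the sum is the probability that $X_k$ is the unique success, and the common factor $\prod_i (1-p_i)$ cancels, leaving the normalised odds.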
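As a quick sanity check, here is a hypothetical Monte Carlo sketch (not from the original post; `conditional_prob_sim` and the chosen $p_i$ are my own) comparing the simulated conditional probability against both candidate formulas:

```python
import random

def conditional_prob_sim(p, j, trials=200_000, seed=0):
    """Estimate P(X_j = 1 | exactly one X_i = 1) for independent
    Bernoulli(p_i) variables by rejection sampling."""
    rng = random.Random(seed)
    hits = total = 0
    for _ in range(trials):
        x = [1 if rng.random() < pi else 0 for pi in p]
        if sum(x) == 1:          # keep only draws with exactly one success
            total += 1
            hits += x[j]
    return hits / total

p = [0.1, 0.2, 0.4]
j = 1

# Formula (2): normalised odds p_i / (1 - p_i)
odds = [pi / (1 - pi) for pi in p]
formula2 = odds[j] / sum(odds)

# Formula (1): normalised probabilities
formula1 = p[j] / sum(p)

est = conditional_prob_sim(p, j)
print(f"simulation: {est:.4f}  formula (2): {formula2:.4f}  formula (1): {formula1:.4f}")
```

With unequal $p_i$ the simulated value agrees with $(2)$, not $(1)$; the two formulas coincide only as all $p_i \to 0$, which is the continuum limit Cox's partial likelihood lives in.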