Evaluating an expected value in the Jeffreys prior for the binomial distribution


The material I'm reading derives the Jeffreys prior (or rather, the Fisher information it is built from) for the single-parameter binomial distribution, in a manner quite similar to this Wikipedia article.
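For context, here is my sketch of where the displayed expectation comes from, following the Wikipedia steps (the log-likelihood of $A$ successes and $B$ failures is written up to an additive constant):

$$ \log p(A \mid \theta) = \mathrm{const} + A \log \theta + B \log (1-\theta), \qquad -\frac{\partial^2}{\partial \theta^2} \log p(A \mid \theta) = \frac{A}{\theta^2} + \frac{B}{(1-\theta)^2}, $$

so the Fisher information is $I(\theta) = E\left[\frac{A}{\theta^2} + \frac{B}{(1-\theta)^2}\right]$.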

I could work out the steps up to the following (in Wikipedia's notation, $A$ is the number of successes, $B$ the number of failures, and $A+B$ the total number of trials):

$$E\left[\frac{A}{\theta^2} + \frac{B}{(1-\theta)^2}\right] = \frac{E[A]}{\theta^2} + \frac{E[B]}{(1-\theta)^2}$$

Maybe my background in probability theory is just lacking, but I'm not sure how to justify this step. Writing it as $E\left[\frac{A}{\theta^2}\right] + E\left[\frac{B}{(1-\theta)^2}\right]$ follows from the linearity of expectation, but what about the next step? Are we treating $\frac{1}{\theta^2}$ and $\frac{1}{(1-\theta)^2}$ as constants?
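One way I tried to convince myself the identity at least holds numerically is a quick simulation (a sketch only; the particular `n`, `theta`, and sample size are arbitrary choices of mine, and $E[A] = n\theta$, $E[B] = n(1-\theta)$ gives the closed form $n/(\theta(1-\theta))$):

```python
import numpy as np

# Simulate A ~ Binomial(n, theta) with B = n - A, and estimate
# E[A/theta^2 + B/(1-theta)^2] by a sample mean. Compare against the
# closed form n / (theta * (1 - theta)) obtained by treating theta as
# a fixed parameter inside the expectation.
rng = np.random.default_rng(0)
n, theta = 20, 0.3

A = rng.binomial(n, theta, size=200_000)
B = n - A

estimate = np.mean(A / theta**2 + B / (1 - theta)**2)
exact = n / (theta * (1 - theta))
print(estimate, exact)  # the two values should agree closely
```

The two printed values match to within simulation noise, which at least suggests the step is arithmetically sound, even if I don't yet see the formal justification.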