I have the following estimator :
$\hat{O}=\dfrac{b_{sp}^{2}\left(\mathcal{D}_{DM}+B^{C}\right)+B_{sp}}{b_{ph}^{2}\left(\mathcal{D}_{DM}+B^{C}\right)+B_{ph}}$
I would like to demonstrate that the mean of this estimator is :
$<\hat{O}>=<\left(\dfrac{b_{sp}^{2}\left[\mathcal{D}_{DM}+B^{C}\right]+B_{sp}}{b_{ph}^{2}\left[\mathcal{D}_{DM}+B^{C}\right]+B_{ph}}\right)>=\left(\dfrac{<b_{sp}^{2}\left[\mathcal{D}_{DM}+B^{C}\right]>+<B_{sp}>}{<b_{ph}^{2}\left[\mathcal{D}_{DM}+B^{C}\right]>+<B_{ph}>}\right)$
and conclude, knowing that $<B_{sp}>=<B_{ph}>=0$ : $<\hat{O}> = \dfrac{b_{sp}^{2}}{b_{ph}^{2}}$
But I don't know how to justify the equality between the mean of ratio and the ratio of mean with my estimator $\hat{O}$.
UPDATE 1:
If I reformulate the definition of my estimator $\hat{O}=A/B$ like this :
$$A = \text{constant}_1 + A'$$ $$B = \text{constant}_2 + B'$$
where $A'$ and $B'$ are random variables with means $<A'> = <B'> = 0$ and standard deviations $\sigma_{A'}$ and $\sigma_{B'}$ both different from $0$.
So, from a first-order Taylor series expansion, I could write:
$E(A/B) \approx \dfrac{E(A)}{E(B)} \left[ 1 - \dfrac{Cov(A,B)}{E(A)E(B)} + \dfrac{Var(B)}{E(B)^2} \right]$
$E(A/B) = \dfrac{\text{constant}_1}{\text{constant}_2} \bigg( 1 - 0 + \dfrac{Var(B')}{\text{constant}_{2}^{2}}\bigg) = \dfrac{\text{constant}_1}{\text{constant}_2} \bigg(1 + \dfrac{Var(B')}{\text{constant}_2^2}\bigg)$
with $Var(B')$ known and different from $0$, and $\text{constant}_2$ also known.
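To check this first-order formula numerically, here is a small Monte Carlo sketch. All concrete values (`c1`, `c2`, the noise scales) are made up for illustration; uniform noise keeps the denominator bounded away from $0$ so that $E(A/B)$ exists.

```python
import numpy as np

# Illustrative sketch: c1, c2 and the noise scales are made-up values,
# standing in for constant_1, constant_2, A' and B' above.
rng = np.random.default_rng(0)
c1, c2 = 2.0, 5.0
n = 10_000_000

A = c1 + rng.uniform(-0.5, 0.5, n)   # A = constant_1 + A',  <A'> = 0
B = c2 + rng.uniform(-0.5, 0.5, n)   # B = constant_2 + B',  <B'> = 0
var_B = 0.5**2 / 3                   # variance of Uniform(-0.5, 0.5)

mc = np.mean(A / B)                        # Monte Carlo estimate of E(A/B)
taylor = (c1 / c2) * (1 + var_B / c2**2)   # first-order formula above

print(mc, taylor, c1 / c2)  # mc and taylor both ~ 0.4013, naive ratio 0.4
```

The simulated mean exceeds $\text{constant}_1/\text{constant}_2$ by almost exactly the $Var(B')/\text{constant}_2^2$ correction, consistent with the first-order bias formula.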
- From the expansion above, can I conclude that my estimator $\hat{O}=A/B$ is a biased estimator, since $E(A/B) - \dfrac{\text{constant}_1}{\text{constant}_2}$ is different from zero?
Indeed, I would expect the expectation of my estimator $\hat{O}=A/B$ to be equal to $E(A/B) = \dfrac{\text{constant}_1}{\text{constant}_2}$.
But we must not forget that the first relation comes from a first-order Taylor expansion (do all higher-order terms converge to zero?).
Is this reasoning correct ?
UPDATE 2: Concerning the last comment of @angryavian below, I would like to clarify some points:
When I talk about a "true value", I mean it in the sense of the definition of an unbiased estimator, that is to say, with $a$ the "true value" estimated by the random variable $X$ (I don't know exactly whether this should be phrased in terms of the random variable or of its expectation, sorry):
$$E[X]-a = 0$$
How can I justify that the true value of my estimator $\hat{O}$ is equal to $O=\dfrac{b_{sp}^{2}}{b_{ph}^{2}}$? Is it something that I know a priori?
Or rather, has my tutor built the estimator $\hat{O}$ from this quantity $O$, whose true value is equal to $\dfrac{b_{sp}^{2}}{b_{ph}^{2}}$?
The relationship between the choice of definition of our estimator $\hat{O}$ and the quantity $O$ itself is pretty confusing for me: there too, I don't know whether it can be guessed easily.
- Is the approach of taking an estimator and adding noise to it (noise in the general sense, i.e. a noise disturbing the evaluation of the estimator) a common method in astrophysics, or in physics in general?
As you and others have noted, $E[A/B] \ne E[A]/E[B]$ in general, so your estimator is biased.
Under certain conditions, it may not be a bad approximation. Your Taylor series expansion is a good approach to understanding this. Restating your expansion: $$ E\left[\frac{A}{B}\right] \approx \frac{E[A]}{E[B]} + \frac{E[A] \text{Var}(B)}{E[B]^3} - \frac{\text{Cov}(A,B)}{E[B]^2} = \frac{E[A]}{E[B]} \left( 1 - \frac{\text{Cov}(A,B)}{E[A]E[B]} + \frac{\text{Var}(B)}{E[B]^2}\right) \tag{$*$} $$
You seem to be assuming independence, so $\text{Cov}(A,B)=0$. Thus, if the variance of $B$ is small, then the bias of $A/B$ as an estimator of $E[A]/E[B]$ is not too large.
The missing higher-order terms involve the higher-order moments $E[(A-E[A])^k]$ and $E[(B-E[B])^k]$, which should also be small if $\text{Var}(B)$ is small (e.g., by Cauchy-Schwarz), but this is a hand-wavy argument.
In short, your estimator is biased. But if the denominator $B$ has small variance, the bias may be small. For a more precise, quantitative statement, you would have to pin down what "small variance" really means by reading off the Taylor series expansion, handling the missing higher-order terms with care, and/or expanding the Taylor series further, etc.
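To make the "small variance implies small bias" statement concrete, a quick simulation (all constants chosen arbitrarily for illustration) shows the relative bias of $A/B$ tracking the $\text{Var}(B)/E[B]^2$ term of $(*)$ as the denominator noise shrinks:

```python
import numpy as np

# Illustrative sketch: c1, c2 and the noise scales below are made up.
rng = np.random.default_rng(1)
c1, c2, n = 2.0, 5.0, 2_000_000

biases = []
for scale in (1.0, 0.5, 0.1):
    A = c1 + rng.uniform(-scale, scale, n)     # E[A] = c1
    B = c2 + rng.uniform(-scale, scale, n)     # E[B] = c2, Var(B) = scale**2 / 3
    rel_bias = np.mean(A / B) / (c1 / c2) - 1  # relative bias vs E[A]/E[B]
    predicted = (scale**2 / 3) / c2**2         # Var(B)/E[B]^2 term from (*)
    biases.append(rel_bias)
    print(f"scale={scale}: measured {rel_bias:.5f}, predicted {predicted:.5f}")
```

The measured relative bias stays close to the predicted first-order term and shrinks with the denominator variance, as the argument above suggests.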
For my own sake, the derivation of the expansion ($*$):
In general, $$f(x,y) \approx f(x_0, y_0) + (x-x_0) \frac{\partial f}{\partial x} (x_0, y_0) + (y-y_0) \frac{\partial f}{\partial y} (x_0, y_0) + \frac{1}{2} (x-x_0)^2 \frac{\partial^2 f}{\partial x^2} (x_0, y_0) + \frac{1}{2} (y-y_0)^2 \frac{\partial^2 f}{\partial y^2} (x_0, y_0) + (x-x_0)(y-y_0) \frac{\partial^2 f}{\partial x \partial y} (x_0, y_0).$$ With $f(x,y) = x/y$, we have $$\frac{x}{y} \approx \frac{x_0}{y_0} + \frac{x-x_0}{y_0} - \frac{x_0(y-y_0)}{y_0^2} + \frac{x_0(y-y_0)^2}{y_0^3} - \frac{(x-x_0)(y-y_0)}{y_0^2}.$$ Applying this to random variables $A$ and $B$, and setting $x_0=E[A]$ and $y_0=E[B]$, we arrive at ($*$).
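For completeness, the second-order expansion of $x/y$ above can be checked symbolically; the following sketch (variable names are mine) builds the Taylor polynomial from the partial derivatives and compares it with the claimed form:

```python
import sympy as sp

x, y, x0, y0 = sp.symbols('x y x0 y0')
f = x / y
dx, dy = x - x0, y - y0
at = {x: x0, y: y0}

# Second-order Taylor polynomial of f = x/y around (x0, y0)
taylor2 = (f.subs(at)
           + sp.diff(f, x).subs(at) * dx
           + sp.diff(f, y).subs(at) * dy
           + sp.Rational(1, 2) * sp.diff(f, x, 2).subs(at) * dx**2
           + sp.Rational(1, 2) * sp.diff(f, y, 2).subs(at) * dy**2
           + sp.diff(f, x, y).subs(at) * dx * dy)

# The expansion claimed in the text
claimed = x0/y0 + dx/y0 - x0*dy/y0**2 + x0*dy**2/y0**3 - dx*dy/y0**2

print(sp.simplify(taylor2 - claimed))  # → 0
```

The difference simplifies to $0$, confirming that the displayed expansion is exactly the second-order Taylor polynomial of $x/y$ (note the $(x-x_0)^2$ term vanishes since $\partial^2 f/\partial x^2 = 0$).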