Bias and variance of estimator


I have the following estimator, $E = 1/\bar{X}$, of $1/\lambda$, where $X$ is exponentially distributed with parameter $\lambda$. I'm trying to find the bias and variance of this estimator. For the bias I have calculated $E(1/\lambda - 1/\bar{X})$, so the estimator should be asymptotically unbiased. For the variance of the estimator I get $E\big((1/\bar{X})^2\big) - \big(E(1/\bar{X})\big)^2$, but I'm not confident in this result. Could someone verify this or provide a derivation of the correct results?

Best Answer

Although your notation is nonstandard, I believe you are checking properties of the estimator $\hat \lambda = 1/\bar X$ of $\lambda$ for a random sample of size $n$ from $Exp(rate = \lambda).$ An unbiased version of the estimator is $\tilde \lambda = (n-1)/\sum X_i$.

For comparison with your analytical results, here are some simulated results (using R) for the case $n=10$ and $\lambda = 5.$ Based on a million samples of size ten, results should be reliable to 2 or 3 significant digits.

 m = 10^6;  lam = 5;  n = 10
 x = rexp(m*n, lam)
 DTA = matrix(x, nrow=m)      # m x n matrix, each row a sample
 a = rowMeans(DTA)            # vector of m sample means
 lam.hat = 1/a                # vector of m estimates
 mean(lam.hat);  sd(lam.hat)
 ## 5.556989                  # aprx E(est)
 ## 1.966194                  # aprx SD(est)
 sqrt(mean((lam.hat - lam)^2))
 ## 2.043564                  # aprx root mean sq error of est
 lam.unb = ((n-1)/n)*lam.hat  # unbiased est
 mean(lam.unb);  sd(lam.unb)
 ## 5.00129
 ## 1.769575
 sqrt(mean((lam.unb - lam)^2))
 ## 1.769574
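As a cross-check (a Python sketch, not part of the original answer), the standard gamma-moment formulas $E(1/T) = \lambda/(n-1)$ and $E(1/T^2) = \lambda^2/\big((n-1)(n-2)\big)$ for $T = \sum X_i \sim Gamma(n, \lambda)$ reproduce the simulated values above:

```python
from math import sqrt

n, lam = 10, 5.0

# Exact moments of lam.hat = n/T, where T ~ Gamma(n, lam):
#   E(1/T)   = lam / (n - 1)
#   E(1/T^2) = lam^2 / ((n - 1) * (n - 2))
mean_hat = n * lam / (n - 1)                    # E(lam.hat)
second   = n**2 * lam**2 / ((n - 1) * (n - 2))  # E(lam.hat^2)
sd_hat   = sqrt(second - mean_hat**2)           # SD(lam.hat)

print(mean_hat)  # ~ 5.5556; simulation gave 5.5570
print(sd_hat)    # ~ 1.9642; simulation gave 1.9662

# Unbiased version lam.unb = (n-1)/T is lam.hat scaled by (n-1)/n
mean_unb = (n - 1) * lam / (n - 1)              # = lam exactly
sd_unb   = ((n - 1) / n) * sd_hat
print(mean_unb)  # 5.0;      simulation gave 5.0013
print(sd_unb)    # ~ 1.7678; simulation gave 1.7696
```

The exact values agree with the million-sample simulation to about three significant digits, as expected.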

I seem to recall that the usual method of finding moments of $1/\bar X$ is to recognize that $\bar X$ has a gamma distribution, and then to recognize that the integrand of a power of $1/\bar X$ times the PDF of $\bar X$ is similar to the PDF of a related distribution.


Addendum (prompted by Comment): If $X_1, X_2, \dots, X_n$ are a random sample from $Exp(rate = \lambda),$ then one can use moment generating functions to show that $T = \sum_{i=1}^n X_i \sim Gamma(n, \lambda),$ which has density function $f_T(t) = \frac{\lambda^n}{(n-1)!} t^{n-1} e^{-\lambda t},$ for $t > 0.$
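Filling in the MGF step just cited: each $X_i$ has MGF $M_X(s) = E(e^{sX}) = \frac{\lambda}{\lambda - s}$ for $s < \lambda,$ so by independence $$M_T(s) = \prod_{i=1}^n M_{X_i}(s) = \left(\frac{\lambda}{\lambda - s}\right)^n,$$ which is the MGF of $Gamma(n, \lambda);$ uniqueness of moment generating functions then gives $T \sim Gamma(n, \lambda).$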

Then $$E(1/T) = \int_0^\infty \frac{1}{t} f_T(t)\,dt = \int_0^\infty \frac{\lambda^n}{(n-1)!} t^{n-2} e^{-\lambda t}\,dt = \frac{\lambda}{n-1} \int_0^\infty g(t)\,dt,$$ where $g(t)$ is the density of $Gamma(n-1, \lambda)$, so that the last integral is unity and $E(1/T) = \lambda/(n-1).$ Hence $E\left(\frac{n-1}{T}\right) = \lambda,$ so $\tilde \lambda = (n-1)/\sum X_i$ is unbiased, while $E(\hat \lambda) = E(n/T) = \frac{n\lambda}{n-1} \ne \lambda,$ so $\hat \lambda = 1/\bar X$ is biased. This is the method to which I referred in the last paragraph of my original Answer. Something similar works for evaluating $E(1/T^2).$
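Sketching that similar computation (it requires $n > 2$ for the integral to converge): $$E(1/T^2) = \int_0^\infty \frac{\lambda^n}{(n-1)!}\, t^{n-3} e^{-\lambda t}\,dt = \frac{\lambda^2}{(n-1)(n-2)} \int_0^\infty h(t)\,dt = \frac{\lambda^2}{(n-1)(n-2)},$$ where $h(t)$ is the density of $Gamma(n-2, \lambda).$ Consequently $$Var(\hat \lambda) = Var(n/T) = n^2\left[\frac{\lambda^2}{(n-1)(n-2)} - \frac{\lambda^2}{(n-1)^2}\right] = \frac{n^2 \lambda^2}{(n-1)^2(n-2)}.$$ For $n = 10$ and $\lambda = 5$ this gives $Var(\hat \lambda) = 2500/648 \approx 3.858,$ i.e. $SD(\hat \lambda) \approx 1.964,$ in agreement with the simulated value $1.966$ above.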

The simulation below illustrates that $T \sim Gamma(n, \lambda),$ again with $n = 10$ and $\lambda = 5.$

 m = 10^6;  lam = 5;  n = 10
 x = rexp(m*n, lam)
 DTA = matrix(x, nrow=m)      # m x n matrix, each row a sample
 t = rowSums(DTA)
 hist(t, prob=T,  col="wheat")
 curve(dgamma(x,n,lam), lwd=2, col="blue", add=T)

[Figure: histogram of the simulated values of $T$ with the $Gamma(10, 5)$ density curve superimposed.]