Does convergence in probability imply deterministic convergence for non-random sequences?


Suppose we know that an estimator $\hat{\theta}_n$, which is a function of a random sample $X = (X_1, \dots, X_n)$, converges in probability to some constant $\theta$, i.e., $$\forall \varepsilon > 0: \lim_{n \rightarrow \infty} P( | \hat{\theta}_n - \theta | > \varepsilon ) = 0$$

Now, we perform bootstrapping, i.e., resampling with replacement from $X$. In doing so, we treat the data as given, so conditional on $X$, $\hat{\theta}_n$ becomes a non-random sequence.

Can we show that $\hat{\theta}_n$ given $X$ converges to $\theta$ in a deterministic sense? That is, $$\lim_{n \rightarrow \infty} \hat{\theta}_n = \theta$$

The reason for asking this question is that I want to use some properties of $\theta$ in a proof about a bootstrapped test statistic. Any help is appreciated.
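As an aside on the title question: convergence in probability does not pin down the behaviour of any single realized sequence. A minimal simulation sketch of the classical Bernoulli$(1/n)$ counterexample (the setup and variable names are my own illustration, not from the post):

```python
import numpy as np

# Standard counterexample (illustration only): take independent
# Y_n ~ Bernoulli(1/n). Then P(|Y_n - 0| > eps) = 1/n -> 0, so
# Y_n -> 0 in probability; but since sum(1/n) diverges, the second
# Borel-Cantelli lemma gives Y_n = 1 infinitely often almost surely,
# so a single realized sequence (Y_1, Y_2, ...) does not converge to 0.
rng = np.random.default_rng(1)
N = 100_000
n = np.arange(1, N + 1)
Y = (rng.random(N) < 1.0 / n).astype(int)

print("fraction of ones:", Y.mean())                  # tiny: matches convergence in probability
print("latest index with Y_n = 1:", n[Y == 1].max())  # typically still large along one path
```

So even for a plain (non-bootstrap) sequence, the limit statement $\lim_n \hat{\theta}_n = \theta$ for a fixed realization is a strictly stronger (almost-sure) requirement than convergence in probability.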

1 Answer

I don't think you can show this in general. Bootstrapping simply replaces the true distribution $F$ with the empirical distribution $F_n$, which is a discrete distribution.

In this case, $q = \hat \theta_n(X)$ plays the role of the true underlying value of $\theta$ within the bootstrap. If we take larger and larger bootstrap samples $B_i$ from $F_n$, we would expect the estimator $\hat \theta(B_i)$ to converge in probability to $q$.

What you are looking for is that the sequence $\hat \theta(B_i)$ converges to $q$ almost surely. I think you can prove this in some cases, but it is not implied by the convergence in probability of the estimator itself: convergence in probability does not, in general, imply almost-sure convergence.
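To make this concrete, here is a small simulation sketch (the uniform/sample-mean setup and all names are illustrative assumptions, not from the post): conditional on one fixed sample $X$, estimates computed from ever larger bootstrap resamples concentrate around $q = \hat\theta_n(X)$, not around the true $\theta$.

```python
import numpy as np

rng = np.random.default_rng(0)

# True parameter: theta = E[X] = 0.5 for Uniform(0, 1); estimator = sample mean.
theta = 0.5

# One fixed observed sample X of size n; conditional on X, the data are given.
n = 200
X = rng.uniform(0.0, 1.0, size=n)
q = X.mean()  # q = hat(theta)_n(X): the "true" value from the bootstrap's viewpoint

# Draw larger and larger bootstrap samples B_i (with replacement) from X,
# i.e. from the empirical distribution F_n, and re-estimate each time.
for m in (10**2, 10**4, 10**6):
    B = rng.choice(X, size=m, replace=True)
    est = B.mean()  # hat(theta)(B_i)
    print(f"m={m:>7}  |est - q|={abs(est - q):.5f}  |est - theta|={abs(est - theta):.5f}")

# As m grows, |est - q| shrinks toward 0, while |est - theta| settles
# near the fixed offset |q - theta| left over from the original sample.
```

This illustrates the answer's point: the bootstrap targets $q$, and any gap $|q - \theta|$ in the realized sample persists no matter how large the resamples get.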