Law of large numbers for functions of perturbed random variables


Let $\Omega$ be a random vector in $\mathbb{R}^k$. Suppose $f:\mathbb{R}^k \to \mathbb{R}$ is a continuous function and $\mathbb{E}\left[ \left\lvert f(\Omega) \right\rvert \right] < +\infty$. Let $\{\omega_i\}$ be an i.i.d. sample of $\Omega$, and suppose $\{\delta_i\}$ is a fixed sequence in $\mathbb{R}^k$ converging to zero. Under what conditions do we have (either almost surely, or in probability) $$\frac{1}{n} \sum_{i=1}^{n} f(\omega_i + \delta_i) \to \mathbb{E}\left[ f(\Omega) \right]?$$

I know that $\frac{1}{n} \sum_{i=1}^{n} f(\omega_i) \to \mathbb{E}\left[ f(\Omega) \right]$ in probability/almost surely by the weak/strong law of large numbers. I'm hoping that the result carries over without any additional assumptions on $\Omega$ and the sequence of perturbations $\{\delta_i\}$.

More generally, I'm looking for techniques that can be used to approach these kinds of questions. In particular, I'm interested in analyzing extensions where the sequence $\{\delta_i\}$ may itself be random (not necessarily i.i.d.).

Edit: I don't want to assume that the support of $\Omega$ is compact.
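As a quick numerical sanity check of the conjectured convergence (illustrative only, not a proof), here is a simulation sketch. The choices $\Omega \sim N(0,1)$, $f(x) = |x|$, and $\delta_i = 1/i$ are my own illustrative assumptions, picked so that $f$ is continuous with $\mathbb{E}[|f(\Omega)|] < \infty$ and the target $\mathbb{E}|\Omega| = \sqrt{2/\pi}$ is known in closed form:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 200_000
omega = rng.standard_normal(n)          # i.i.d. sample of Omega ~ N(0, 1)
delta = 1.0 / np.arange(1, n + 1)       # fixed perturbations with delta_i -> 0

f = np.abs                              # continuous, E|f(Omega)| < infinity
perturbed_mean = np.cumsum(f(omega + delta)) / np.arange(1, n + 1)

target = np.sqrt(2 / np.pi)             # E|Omega| for a standard normal
print(perturbed_mean[-1], target)
```

For these well-behaved (in fact Lipschitz) choices the perturbed running mean lands close to $\sqrt{2/\pi} \approx 0.798$; the interesting cases are the ones where no such regularity is available.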

Best answer

Some observations on passing i.i.d. vectors $\{X_i\}_{i=1}^{\infty}$ through a function $f:\mathbb{R}^k\rightarrow\mathbb{R}$.

1) As in my above comments:

  • Lipschitz-like property: If there is a constant $L>0$ such that $$ |f(X_i+\delta_i)-f(X_i)| \leq L\|\delta_i\|\quad \forall i \in \{1, 2, 3, ...\}$$ then the desired probability-1 convergence holds: $\left|\frac{1}{n}\sum_{i=1}^n f(X_i+\delta_i)-\frac{1}{n}\sum_{i=1}^n f(X_i)\right| \leq \frac{L}{n}\sum_{i=1}^n \|\delta_i\| \rightarrow 0$ (a Cesàro average of a sequence converging to zero), and the strong law handles the unperturbed average.

  • It fails for the non-Lipschitz function $f(x,y)=xy$. Consider i.i.d. $\{(Z_i,0)\}_{i=1}^{\infty}$ and $\delta_i=(0,1/i)$ for $i\in\{1, 2, 3, ...\}$. Then $f(Z_i,0) = 0$ for all $i$ (so $E[|f(\Omega)|]<\infty$ trivially), while $f((Z_i,0)+\delta_i)=Z_i/i$. If we assume $P[Z_1 \geq n^2]=1/n$ for every $n$, then $\sum_n P[Z_n \geq n^2] = \infty$, and the second Borel–Cantelli lemma gives $Z_n/n\geq n$ infinitely often, so $\frac{1}{n}\sum_{i=1}^n Z_i/i$ does not converge.
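The counterexample can be simulated. A sketch under my own sampling assumption: taking $U \sim \mathrm{Uniform}(0,1)$ and $Z = \lfloor 1/U \rfloor^2$ gives exactly $P[Z \geq n^2] = P[\lfloor 1/U \rfloor \geq n] = 1/n$, so spikes with $Z_n/n \geq n$ keep appearing at a $\sum 1/n$ rate:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_Z(u):
    # Inverse transform with P[Z >= n^2] = 1/n:
    # N = floor(1/U) satisfies P[N >= n] = P[U <= 1/n] = 1/n; set Z = N^2.
    return np.floor(1.0 / u) ** 2

n = 100_000
idx = np.arange(1, n + 1)
u = np.clip(rng.uniform(size=n), 1e-12, None)  # guard against u == 0
Z = sample_Z(u)

# f(x, y) = x*y with X_i = (Z_i, 0) and delta_i = (0, 1/i):
# f(X_i) = 0 exactly, but f(X_i + delta_i) = Z_i / i.
terms = Z / idx
running_mean = np.cumsum(terms) / idx

# Borel-Cantelli: sum_n P[Z_n >= n^2] = sum_n 1/n diverges, so the event
# Z_n/n >= n recurs forever and the running mean cannot settle down.
spikes = int(np.sum(terms >= idx))
print(spikes, running_mean[-1])
```

On any finite horizon one sees roughly $\ln n$ such spikes, each of size at least $n$ at index $n$, which is what destroys the averaging.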

2) You can use a generalized LLN result such as this:

Theorem: If $\{Y_i\}_{i=1}^{\infty}$ are mutually independent with $E[Y_i]=0$ for all $i$ and: $$ \sum_{i=1}^{\infty} \frac{E[Y_i^2]}{i^2} < \infty \quad (Eq. 1)$$ then $$\lim_{n\rightarrow\infty} \frac{1}{n}\sum_{i=1}^n Y_i = 0 \quad \mbox{(with prob 1)} $$

Proof: [Y. S. Chow, “On a strong law of large numbers for martingales,” Annals of Mathematical Statistics, vol. 38, no. 2, p. 610, 1967.]

So if $\{X_i+\delta_i\}_{i=1}^{\infty}$ are mutually independent you can define $Y_i=f(X_i+\delta_i)-E[f(X_i+\delta_i)]$, assume (Eq. 1) holds, and also assume $$ \lim_{n\rightarrow\infty} \frac{1}{n}\sum_{i=1}^nE[f(X_i+\delta_i)] = E[f(X_1)].$$ The theorem then gives $\frac{1}{n}\sum_{i=1}^n Y_i \rightarrow 0$ with probability 1, and adding the assumed limit of the means yields $\frac{1}{n}\sum_{i=1}^n f(X_i+\delta_i) \rightarrow E[f(X_1)]$ with probability 1.
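Both hypotheses can be checked concretely in an example. A sketch under my own illustrative assumptions $X_i \sim N(0,1)$, $f(x)=|x|$, $\delta_i = 1/i$, where $E|X+d| = d\,(2\Phi(d)-1) + 2\varphi(d)$ in closed form: the variances are uniformly bounded (so Eq. 1 holds), and the average of the means $E[f(X_i+\delta_i)]$ tends to $E[f(X_1)] = \sqrt{2/\pi}$:

```python
import numpy as np
from math import erf, exp, pi, sqrt

def mean_abs_normal(d):
    # E|X + d| for X ~ N(0, 1): d*(2*Phi(d) - 1) + 2*phi(d).
    Phi = 0.5 * (1.0 + erf(d / sqrt(2.0)))
    phi = exp(-0.5 * d * d) / sqrt(2.0 * pi)
    return d * (2.0 * Phi - 1.0) + 2.0 * phi

n = 100_000
delta = 1.0 / np.arange(1, n + 1)
means = np.array([mean_abs_normal(d) for d in delta])

# Eq. 1 holds here: Var(|X_i + d_i|) <= E[(X_i + d_i)^2] = 1 + d_i^2 <= 2,
# and sum_i 2/i^2 < infinity.
bound = np.sum(2.0 / np.arange(1, n + 1) ** 2)

avg_of_means = means.mean()
target = sqrt(2.0 / pi)                 # E[f(X_1)] = E|X_1|
print(avg_of_means, target, bound)
```

Here the bias from each $\delta_i$ is of order $1/i^2$, so the averaged means converge to the target quickly; the point of the theorem is that only this averaged-mean condition and the variance-sum condition are needed, not Lipschitz continuity.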