A Convergence Result Conjecture


While doing my own research, I was tempted to prove the following statement.

Suppose $\{a_n\}$ and $\{b_n\}$ are sequences of real numbers such that $\frac{1}{n}\sum_{i = 1}^n a_ib_i \to G \neq 0$. Let $f$ be a continuous and bounded function on $\mathbb{R}^1$, and fix $x_0 \in \mathbb{R}^1$. Then $$\frac{1}{n}\sum_{i = 1}^n a_ib_i\int_0^1\left[f(x_0) - f\left(x_0 + sn^{-1/2}b_i\right)\right]ds \to 0 \tag{1}$$ as $n \to \infty$. PS: If necessary, one may also assume $\max_{1 \leq i \leq n}|b_i| = O(n^{1/4})$.
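For intuition, here is a quick numerical sketch (my own illustration, not from the question) of a benign case where $(1)$ does hold: $a_i = b_i = 1$, $f = \sin$, $x_0 = 0$. Every summand is identical, and the left side reduces to $-\sqrt{n}\left(1 - \cos(n^{-1/2})\right) = O(n^{-1/2}) \to 0$.

```python
import math

def lhs_benign(n):
    """Left side of (1) with a_i = b_i = 1, f = sin, x_0 = 0.

    Every summand is identical, so the average equals the single term
    -integral_0^1 sin(s / sqrt(n)) ds = -sqrt(n) * (1 - cos(1 / sqrt(n))).
    """
    c = 1.0 / math.sqrt(n)
    return -(1.0 - math.cos(c)) / c

for n in (100, 10_000, 1_000_000):
    print(n, lhs_benign(n))  # shrinks roughly like -1/(2*sqrt(n))
```

The difficulty, of course, is entirely about what happens when the summands $a_ib_i$ are large and of mixed signs.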

I tend to believe this is true and have spent some time trying to prove it. However, a difficulty arises when I try to bound the left side of $(1)$ by the triangle inequality: although the integral is controlled by an arbitrarily small positive number, the absolute value is also imposed on $a_ib_i$, which makes it difficult to apply the non-absolute-value condition $\frac{1}{n}\sum_{i = 1}^n a_ib_i \to G$ (note that we have no information about whether the summands are positive or negative). Can someone give me a clear proof of $(1)$ if it is true, or construct a counterexample to overthrow it?


Edit: I am happy to see this question get so much attention. In fact, the background of this problem comes from theoretical proofs in quantile regression settings; the above conjecture is my own abstraction. What I find confusing are the proofs in some publications: the missing details seem hard to fix. Below I list the original statements from some papers:

For example, in the proof of Lemma $1$ of Gutenbrunner and Jurečková (1992), the authors claim directly (I simplified to the homoscedastic case so that $\sigma_{ni} \equiv 1$):

\begin{align*} \sup_{\|t\| \leq K,\ \varepsilon \leq \alpha \leq 1 - \varepsilon} & \left\|\frac{1}{n}\sum_{i = 1}^n x_{ni}x_{ni}'t\int_0^1\left[f\left(F^{-1}(\alpha) + sn^{-1/2}x_{ni}'t\right) - f(F^{-1}(\alpha))\right]ds\right\| = o(1). \tag{2} \end{align*}

Under the assumptions:

  • $f$ is the continuous density of some distribution function $F$, which is positive and finite on $\{t: 0 < F(t) <1\}$.

  • $x_{ni}$ are rows of an $n \times p$ design matrix $X_n$, where $p$ is fixed and $n \to \infty$. The first column of $X_n$ consists of ones and the other columns are orthogonal to the first one.

  • $\|X_n\|_\infty = o(n^{1/2})$.

  • $Q_n = \frac{1}{n}X_n^TX_n \to Q$ where $Q$ is a positive definite $p \times p$ matrix.

I think $(2)$ and $(1)$ bear some resemblance, so if $(1)$ were not true, could $(2)$ be true? Problem $(2)$ may even be a little more challenging, since there we are dealing with the convergence of a sequence of matrices.

Another even more ambitious claim is Lemma A.2 of Koenker and Zhao (1996), which states (both the statement and the proof have many confusing typos; here I present the version I corrected):

Suppose $\{g_t\}$ and $\{H_t\}$ are sequences of random $p$-vectors such that $E\|g_t\|^{2 + \delta} \leq S < \infty$ and $E\|H_t\|^{2 + \delta} \leq S < \infty$ for some $\delta > 0$, $\{u_t\}$ is a sequence of i.i.d. random variables with continuous and bounded density $f$, $g_t$ and $H_t$ are independent of $(u_t, u_{t - 1}, \ldots)$, and $$n^{-1}\sum_{t = 1}^n g_tH_t' \to_P G$$ for a nonrandom, nonsingular matrix $G$. Then $$V(\Delta) = n^{-1/2}\sum_{t = 1}^n g_t\psi_\tau(u_t - F^{-1}(\tau) - n^{-1/2}H_t'\Delta)$$ satisfies $$\sup_{\|\Delta\| \leq M} \|V(\Delta) - V(0) + f(F^{-1}(\tau))G\Delta\| = o_P(1)$$ for fixed $M$, $0 < M < \infty$. Here $\psi_\tau(x) = \tau - I(x < 0)$, $\tau \in (0, 1)$, and $I$ is the indicator function.

The last step needed to complete the proof of this lemma turns out to be a claim similar to $(1)$ and $(2)$, namely

$$\sup_{\|\Delta\| \leq M}\left\|n^{-1/2}\sum_{1}^n g_t(F(F^{-1}(\tau)) - F(F^{-1}(\tau) + n^{-1/2}H_t'\Delta)) + f(F^{-1}(\tau))G\Delta\right\| = o_P(1). \tag{3}$$

$(3)$ holds if the following analogue of $(1)$, $$\sup_{\|\Delta\| \leq M}\left\|n^{-1}\sum_{t=1}^n g_tH_t'\Delta\int_0^1\left[f(F^{-1}(\tau)) - f(F^{-1}(\tau) + sn^{-1/2}H_t'\Delta)\right]ds\right\| = o_P(1) \tag{4}$$ holds. But in proving $(4)$ we encounter the same difficulty as in proving $(1)$; so if $(1)$ were wrong, would $(4)$ still be true?
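For completeness, here is the algebra linking $(3)$ and $(4)$ as I reconstruct it. Write $q = F^{-1}(\tau)$ and $h_t = n^{-1/2}H_t'\Delta$. The fundamental theorem of calculus gives $$F(q) - F(q + h_t) = -h_t\int_0^1 f(q + sh_t)\,ds,$$ so $$n^{-1/2}\sum_{t=1}^n g_t\left(F(q) - F(q + h_t)\right) = -n^{-1}\sum_{t=1}^n g_tH_t'\Delta\int_0^1 f\left(q + sn^{-1/2}H_t'\Delta\right)ds.$$ Since $n^{-1}\sum_{t=1}^n g_tH_t' \to_P G$, we may replace $f(q)G\Delta$ by $f(q)\,n^{-1}\sum_{t=1}^n g_tH_t'\Delta$ at the cost of an $o_P(1)$ term (uniformly over $\|\Delta\| \leq M$), and the left side of $(3)$ then becomes exactly the expression in $(4)$.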

Of course, a direct proof of $(2)$ and $(3)$ (without linking them to $(1)$) would also be very welcome.


There are 2 best solutions below

Answer 1:

The statement is false, even under the stricter restriction $|b_n|=O(n^{\alpha})$, where $\alpha$ is an arbitrary real number (possibly negative). The key idea is to:

Construct a sequence $\{a_i b_i\}$ with large terms and small sums (by alternating signs).

I had been stuck on this for a while, but the construction by Olivier Oloa in the comments reminded me of the idea. We may just take $a_n b_n=n^{1-\epsilon}$ when $n$ is odd and $a_n b_n=1-(n-1)^{1-\epsilon}$ when $n$ is even. It is easy to check that $\frac{1}{n}\sum_{i=1}^n a_i b_i=\frac{1}{2}+O(n^{-\epsilon})$.
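A quick numerical check of this alternating construction (a sketch; $\epsilon = 0.1$ is an arbitrary choice of mine) confirms that the Cesàro mean stays near $1/2$ even though individual terms grow like $n^{1-\epsilon}$:

```python
eps = 0.1  # arbitrary small epsilon for the construction

def c(i):
    """a_i * b_i from the construction above."""
    return i ** (1 - eps) if i % 2 == 1 else 1 - (i - 1) ** (1 - eps)

n = 10_000
terms = [c(i) for i in range(1, n + 1)]
mean = sum(terms) / n
print(mean)                         # stays near 1/2 (consecutive pairs sum to 1) ...
print(max(abs(t) for t in terms))   # ... while single terms grow like n^{1 - eps}
```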
Set $f(x)=(x-x_0)^{\beta}$, where $\beta$ is chosen so that $f$ is well-defined on $\mathbb{R}$. Then $$\begin{align} & \frac{1}{n}\sum_{i=1}^n a_i b_i\int_0^1\left[f(x_0)-f(x_0+sn^{-1/2}b_i)\right]ds\\ = & -\frac{1}{n}\sum_{i=1}^n\frac{1}{1+\beta} a_i b_i^{1+\beta}n^{-\beta/2} \end{align}$$ Take $b_n=(-1)^n \cdot n^\alpha$; then $a_n$ is determined, since $a_nb_n$ was defined above. Now take $\beta=1/(2k+1)$, $k\in\mathbb{N}_+$. We see that $$a_n b_n^{1+\beta}=a_n b_n\cdot b_n^\beta=-n^{1-\epsilon}\cdot n^{\alpha\beta}+O(n^{\alpha\beta})$$ Note that the alternating sign is cancelled by the multiplication by $b_n^\beta$.
We conclude that $$\begin{align} & \frac{1}{n}\sum_{i=1}^n a_i b_i\int_0^1\left[f(x_0)-f(x_0+sn^{-1/2}b_i)\right]ds\\ = & \left(\frac{1}{n(1+\beta)}\sum_{i=1}^n i^{1-\epsilon}\cdot i^{\alpha\beta}n^{-\beta/2}\right)+O(n^{(\alpha-1/2)\beta})\\ \geq & Cn^{1-\epsilon}\cdot n^{(\alpha-1/2)\beta}+O(n^{(\alpha-1/2)\beta}) \end{align}$$ Let $k\to\infty$, so that $\beta\to 0$: for $\epsilon$ sufficiently small, the above expression tends to infinity.
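This divergence can also be checked numerically (a sketch with the arbitrary choices $\alpha = 0$, $\beta = 1/3$, $\epsilon = 0.1$, $x_0 = 0$); the inner integral has the closed form $-\frac{1}{1+\beta}\,(n^{-1/2}b_i)^{\beta}$ used below:

```python
import math

eps, beta = 0.1, 1.0 / 3.0   # arbitrary choices; b_i = (-1)^i, i.e. alpha = 0

def cbrt(x):
    """Real cube root, so f(x) = x^(1/3) is well-defined on all of R."""
    return math.copysign(abs(x) ** beta, x)

def lhs(n):
    """Left side of (1) for the alternating construction; the inner
    integral is evaluated in closed form: -cbrt(b_i / sqrt(n)) / (1 + beta)."""
    total = 0.0
    for i in range(1, n + 1):
        ab = i ** (1 - eps) if i % 2 == 1 else 1 - (i - 1) ** (1 - eps)
        b = -1.0 if i % 2 == 1 else 1.0
        total += ab * (-cbrt(b / math.sqrt(n)) / (1 + beta))
    return total / n

print(lhs(1_000), lhs(8_000))  # grows without bound instead of vanishing
```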

Answer 2:

(Edit: I devised most of this last night, before OP added the restriction $\max_{1 \leq i \leq n}|b_i| = O(n^{1/4})$. I leave it here because it is too big for a comment and because it might (?) help someone with OP's good question.)

Here is a counterexample.

Take $f(x)=-x\sin(1/x)$ if $x\ne 0$, $f(0)=0$. Take $x_0=0$, $a_k=1/\sqrt{k}$, $b_k=\sqrt{k}$. Then $G=1$. Notice that the quantity in $(1)$ depends upon (letting $u=\sqrt{\frac{n}{k}}\frac{1}{s}$) $$-\frac{1}{n}\sum_{k=1}^n{\sqrt{\frac{n}{k}}}\int_{\sqrt{n/k}}^{\infty}u^{-3}\sin(u)\,du=-\sum_{k=1}^n{\sqrt{\frac{1}{nk}}}\int_{\sqrt{n/k}}^{\infty}u^{-3}\sin(u)\,du$$ Integrating by parts twice, it is clear that $$\frac{\sqrt{k/n} \sin \left(\sqrt{\frac{n}{k}}\right)}{2 n}+\frac{\cos \left(\sqrt{\frac{n}{k}}\right)}{2 n}\le-\sqrt{{\frac{1}{nk}}}\int_{\sqrt{n/k}}^{\infty}u^{-3}\sin(u)\,du$$

Then $$\frac{1}{2}\lim_{n\to\infty}\frac{1}{n}\sum_{k=1}^{n}\left({\sqrt{\frac{k}{n}} \sin \left(\sqrt{\frac{n}{k}}\right)}+\cos \left(\sqrt{\frac{n}{k}}\right)\right)$$

is a Riemann sum with value $$\frac{1}{2}\int_{0}^{1}\left[\sqrt{x}\sin\left(1/\sqrt{x}\right)+\cos\left(1/\sqrt{x}\right)\right]dx \neq 0.$$
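As a sanity check, the Riemann sum in the last display can be evaluated numerically (my own sketch; the answer does not compute the exact limiting value). It appears to stabilize near a nonzero value:

```python
import math

def riemann(n):
    """(1/2) * (1/n) * sum_{k=1}^n [sqrt(k/n) sin(sqrt(n/k)) + cos(sqrt(n/k))],
    the Riemann sum from the last display (note sqrt(k/n) = 1/sqrt(n/k))."""
    s = 0.0
    for k in range(1, n + 1):
        r = math.sqrt(n / k)
        s += math.sin(r) / r + math.cos(r)
    return s / (2 * n)

print(riemann(50_000), riemann(200_000))  # both near the same nonzero value
```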