If $\phi_\sigma$ is the pdf of $\mathcal N(0,\sigma^2)$ and $\psi$ is another pdf, does $(\phi_\sigma*\psi)/\phi_\sigma\to1$ for $\sigma\to\infty$?


Let $\phi_\sigma$ denote the probability density function of $\mathcal N(0,\sigma^2\cdot id)$, where $id$ is the identity matrix in $\Bbb R^{n\times n}$. If $X_\sigma,Y$ are independent $\Bbb R^n$-valued random variables with $X_\sigma\sim\mathcal N(0,\sigma^2\cdot id)$, then $X_\sigma+Y$ has the density

$$ \Phi_\sigma(x)=\int_{\Bbb R^n} \phi_\sigma(x-y) P(Y\in dy), \quad x\in\Bbb R^n. $$

For the sake of simplicity, I'll suppose that $Y$ also has a density $\psi$, so $\Phi_\sigma$ is simply the convolution $\phi_\sigma*\psi$.

What I am interested in: Find conditions on the law of $Y$ under which we have

$$ \frac{\Phi_\sigma(x)}{\phi_\sigma(x)} \xrightarrow{\sigma\to\infty} 1, \quad x\in\Bbb R^n. \tag{1} $$

Any reference that treats this or similar questions is just as helpful as a hint or a solution. I'd expect that the law of $Y$ needs to have sufficiently nice tails (or just moments) for this to work...
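As a quick numerical sanity check of (1), here is a sketch assuming scipy is available; the choices $n=1$, $Y\sim\mathcal N(0,1)$, and $x=2$ are illustrative, not part of the question. In this case $\phi_\sigma*\psi$ is just the $\mathcal N(0,\sigma^2+1)$ density, so the ratio has a closed form:

```python
import numpy as np
from scipy.stats import norm

# Sanity check of (1) in dimension n = 1 with Y ~ N(0, 1), so that
# phi_sigma * psi is the N(0, sigma^2 + 1) density (sum of independent normals).
x = 2.0  # an arbitrary fixed point
for sigma in [1, 10, 100, 1000]:
    ratio = norm.pdf(x, scale=np.sqrt(sigma**2 + 1)) / norm.pdf(x, scale=sigma)
    print(sigma, ratio)  # the ratio approaches 1 as sigma grows
```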

What I have done so far: For any fixed $y\in\Bbb R^n$ and compact set $K\subset\Bbb R^n$, we have

\begin{align*} P(X_\sigma+y\in K)&=\int_{K-y}\phi_\sigma(x)dx=\int_K \phi_\sigma(x-y)dx\\ &=\int_K \phi_\sigma(x) \exp((1/2\sigma^2)(2x^\top y-|y|^2)) dx. \end{align*}

For $\sigma \to \infty$, the term $\exp((1/2\sigma^2)(2x^\top y-|y|^2))$ goes to 1 locally uniformly in $x$ and $y$, so we can infer that

$$ \frac{P(X_\sigma+y\in K)}{P(X_\sigma\in K)} \xrightarrow{\sigma\to\infty} 1 $$

locally uniformly in $y$. Here's where my argument becomes sloppy: For sufficiently large $N\in\Bbb N$ and $\sigma>0$ we should then have

\begin{align*} P(X_\sigma+Y\in K)&\approx \int_{[-N,N]^n} P(X_\sigma+y\in K)\psi(y)dy \\ &\approx \int_{[-N,N]^n} P(X_\sigma\in K)\psi(y)dy \\ &\approx P(X_\sigma\in K), \end{align*}

but here I just naively change the order of limits with respect to $N$ and $\sigma$. Even if we assume that this can be fixed and

$$ \frac{P(X_\sigma+Y\in K)}{P(X_\sigma\in K)} \xrightarrow{\sigma\to\infty} 1 $$

actually holds, is this enough to conclude that (1) is true?
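The ratio of hitting probabilities can at least be checked numerically; the following sketch assumes scipy and uses the illustrative choices $n=1$, $K=[-1,1]$, $Y\sim\mathcal N(0,1)$, so that $X_\sigma+Y\sim\mathcal N(0,\sigma^2+1)$:

```python
import numpy as np
from scipy.stats import norm

# Numerical check of P(X_sigma + Y in K) / P(X_sigma in K) -> 1
# with n = 1, K = [-1, 1], and Y ~ N(0, 1), so X_sigma + Y ~ N(0, sigma^2 + 1).
K = (-1.0, 1.0)
for sigma in [1, 10, 100, 1000]:
    s = np.sqrt(sigma**2 + 1)
    p_shifted = norm.cdf(K[1], scale=s) - norm.cdf(K[0], scale=s)
    p_plain = norm.cdf(K[1], scale=sigma) - norm.cdf(K[0], scale=sigma)
    print(sigma, p_shifted / p_plain)  # tends to 1
```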

Accepted answer:

For any Borel probability measure $\mu$ on $\mathbb{R}^n$,

$$\phi_\sigma*\mu(x)=\frac{1}{(2\pi\sigma^2)^{n/2}}\int_{\mathbb{R}^n}e^{-\tfrac{|x-y|^2}{2\sigma^2}}\mu(dy)=\frac{1}{(2\pi\sigma^2)^{n/2}}\int_{\mathbb{R}^n}e^{-\tfrac{|x|^2}{2\sigma^2}}e^{\frac{x\cdot y}{\sigma^2}}e^{-\tfrac{|y|^2}{2\sigma^2}}\mu(dy).$$ Hence $$\frac{\phi_\sigma *\mu(x)}{\phi_\sigma(x)}=\int_{\mathbb{R}^n}e^{\frac{x\cdot y}{\sigma^2}}e^{-\tfrac{|y|^2}{2\sigma^2}}\mu(dy)=\mu(\{0\})+\int_{\mathbb{R}^n\setminus\{0\}}e^{\frac{x\cdot y}{\sigma^2}}e^{-\tfrac{|y|^2}{2\sigma^2}}\mu(dy)$$

Using dominated convergence (for fixed $x$ and $\sigma\ge1$ the integrand is bounded by the $\mu$-integrable constant $e^{|x|^2/2}$, since $\tfrac{x\cdot y}{\sigma^2}-\tfrac{|y|^2}{2\sigma^2}\le\tfrac{|x|^2}{2\sigma^2}$), one gets that

$$\frac{\phi_\sigma *\mu(x)}{\phi_\sigma(x)}\xrightarrow{\sigma\rightarrow\infty}\mu(\{0\})+\mu(\mathbb{R}^n\setminus\{0\})=1 $$ for every $x\in\mathbb{R}^n$. This holds in particular when $\mu(dx)=\psi(x)\,dx$, in which case $\phi_\sigma*\mu=\phi_\sigma*\psi$.
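As a quick check that no density is needed, one can evaluate the integral for a purely atomic $\mu$, where it reduces to a finite sum; the two atoms and their weights below are arbitrary illustrative choices, and numpy is assumed:

```python
import numpy as np

# Evaluate the ratio integral for mu = 0.3*delta_{-2} + 0.7*delta_{5},
# a purely atomic measure with no density, in dimension n = 1.
atoms = np.array([-2.0, 5.0])
weights = np.array([0.3, 0.7])
x = 1.5  # an arbitrary fixed point
for sigma in [1, 10, 100, 1000]:
    ratio = np.sum(weights * np.exp(x * atoms / sigma**2 - atoms**2 / (2 * sigma**2)))
    print(sigma, ratio)  # tends to 1 even though mu has atoms
```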

Comments:

  1. The argument the OP outlined in the posting may be used to show the following result:

Let $X$ and $Y$ be independent random variables defined on a probability space $(\Omega,\mathscr{F},\mathbb{P})$. Suppose $X\sim\mathcal N(0,\mathrm{id})$ and $Y$ has law $\nu$. Then, for any Borel set $A$ of finite positive Lebesgue measure, the conditional distribution of $\sigma X+Y$ given $\sigma X\in A$ converges weakly, as $\sigma\rightarrow\infty$, to the convolution of the uniform distribution on $A$ with $\nu$.

Proof: Suppose $f\in\mathcal{C}_b(\mathbb{R}^n)$ (i.e. $f$ is a bounded continuous function). Then $$\begin{align} E[f(\sigma X+ Y)|\sigma X\in A]&=\frac{\int \mathbb{1}_A(\sigma x) \Big(\int f(\sigma x+y)\,\nu(dy)\Big)\phi_1(x)\,dx}{\int\mathbb{1}_A(\sigma x)\phi_1(x)\,dx}\\ &=\frac{\int \mathbb{1}_A(x) \Big(\int f( x+y)\,\nu(dy)\Big)\phi_1\big(\tfrac{x}{\sigma}\big)\,dx}{\int\mathbb{1}_A(x)\phi_1\big(\tfrac{x}{\sigma}\big)\,dx}\\ &\xrightarrow{\sigma\rightarrow\infty}\frac{\int\mathbb{1}_A(x)\Big(\int f(x+y)\,\nu(dy)\Big)\,dx}{\int\mathbb{1}_A(x)\,dx} \end{align}$$ The last step uses dominated convergence in numerator and denominator: $\phi_1(x/\sigma)\le\phi_1(0)$, $\lambda_n(A)<\infty$, and $\phi_1(x/\sigma)\to\phi_1(0)$ pointwise, with the common factor $\phi_1(0)$ cancelling in the ratio. This means that the conditional distribution of $\sigma X+Y$ given $\sigma X\in A$ converges weakly to $\big(\frac{1}{\lambda_n(A)}\mathbb{1}_A\,d\lambda_n\big)*\nu$, that is, the convolution of the uniform distribution on $A$ with $\nu$.
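A numerical illustration of this conditional limit, sketched with the illustrative choices $n=1$, $A=[0,1]$, $\nu=\delta_0$ (i.e. $Y=0$), and $f(t)=t$, so the limiting mean should be that of $\mathrm{Uniform}([0,1])$, namely $1/2$; scipy is assumed:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

# E[sigma*X | sigma*X in [0, 1]] computed by numerical integration:
# numerator and denominator both integrate against the N(0, sigma^2) density.
for sigma in [1, 10, 100, 1000]:
    num, _ = quad(lambda t: t * norm.pdf(t, scale=sigma), 0, 1)
    den, _ = quad(lambda t: norm.pdf(t, scale=sigma), 0, 1)
    print(sigma, num / den)  # tends to 1/2, the mean of Uniform([0, 1])
```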

  2. The result outlined in comment (1) is not particular to the normal distribution. If the law of $X$ has a density $g$ with respect to Lebesgue measure, and $g$ is continuous at $0$ with $g(0)>0$, then the same argument shows that the conclusion of comment (1) still holds.

Second answer:

$Y$ having a density is sufficient for pointwise convergence to $1$. I don't know if this is necessary, or if stronger convergence notions also follow.


Let $\rho_\sigma(x) := \frac{\Phi_\sigma(x)}{\phi_\sigma(x)} = \int \psi(y) e^{(\|x\|^2 -\|y-x\|^2)/2\sigma^2} \mathrm{d}y.$ First observe that

$$ \sup_{y} e^{(\|x\|^2 - \|y-x\|^2)/2\sigma^2} = e^{\|x\|^2/2\sigma^2},$$

and thus $\rho_\sigma(x) \le e^{\|x\|^2/2\sigma^2}.$

Next, fix an $\varepsilon \in (0,1),$ and consider the set

$$ \mathcal{Y}^x_{\varepsilon,\sigma} := \{y : e^{(\|x\|^2 -\|y-x\|^2)/2\sigma^2} \ge 1-\varepsilon \}. $$

The key point is that this set contains the ball $B(x, \sigma u_\varepsilon),$ centred at $x$ and of radius $\sigma u_\varepsilon,$ for $u_\varepsilon = \sqrt{2\log(1/(1-\varepsilon))}.$ As a result,

$$ \rho_\sigma(x) \ge \int_{\mathcal{Y}_{\varepsilon, \sigma}^x} (1-\varepsilon)\,\psi(y)\,\mathrm{d}y \ge (1-\varepsilon) P(Y \in B(x, \sigma u_\varepsilon)). $$

But the ball in question grows monotonically to the whole space as $\sigma\to\infty$, so by continuity of measure the probability in this lower bound tends to $1$. We thus have that for any $x$ and any $\varepsilon \in (0,1),$

$$ 1 \ge \limsup_{\sigma \to \infty} \rho_\sigma(x) \ge \liminf_{\sigma \to \infty} \rho_\sigma(x) \ge 1-\varepsilon,$$

and the conclusion follows upon sending $\varepsilon \to 0.$
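For a concrete check of both bounds (the upper bound $e^{\|x\|^2/2\sigma^2}$ and the lower bound above), take $n=1$ and $\psi$ the standard normal density, so $\rho_\sigma(x)$ is the ratio of the $\mathcal N(0,\sigma^2+1)$ and $\mathcal N(0,\sigma^2)$ densities in closed form; the values $x=2$ and $\varepsilon=0.1$ are illustrative, and scipy is assumed:

```python
import numpy as np
from scipy.stats import norm

# Verify rho_sigma(x) <= exp(x^2 / 2 sigma^2) and
# rho_sigma(x) >= (1 - eps) * P(Y in B(x, sigma * u_eps)) for Y ~ N(0, 1).
x, eps = 2.0, 0.1
u_eps = np.sqrt(2 * np.log(1 / (1 - eps)))
for sigma in [1, 10, 100]:
    rho = norm.pdf(x, scale=np.sqrt(sigma**2 + 1)) / norm.pdf(x, scale=sigma)
    lower = (1 - eps) * (norm.cdf(x + sigma * u_eps) - norm.cdf(x - sigma * u_eps))
    assert lower <= rho <= np.exp(x**2 / (2 * sigma**2))
    print(sigma, lower, rho)  # both bounds squeeze toward 1 as sigma grows
```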