Let $\phi_\sigma$ denote the probability density function of $\mathcal N(0,\sigma^2\cdot id)$, where $id$ is the identity matrix in $\Bbb R^{n\times n}$. If $X_\sigma$ and $Y$ are independent $\Bbb R^n$-valued random variables with $X_\sigma\sim\mathcal N(0,\sigma^2\cdot id)$, then $X_\sigma+Y$ has the density
$$ \Phi_\sigma(x)=\int_{\Bbb R^n} \phi_\sigma(x-y) P(Y\in dy), \quad x\in\Bbb R^n. $$
For the sake of simplicity, I'll suppose that $Y$ also has a density $\psi$, so $\Phi_\sigma$ is simply the convolution $\phi_\sigma*\psi$.
What I am interested in: Find conditions on the law of $Y$ under which we have
$$ \frac{\Phi_\sigma(x)}{\phi_\sigma(x)} \xrightarrow{\sigma\to\infty} 1, \quad x\in\Bbb R^n. \tag{1} $$
Any reference that treats this or similar questions is just as helpful as a hint or a solution. I'd expect that the law of $Y$ needs to have sufficiently light tails (or at least finite moments of some order) for this to work...
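As a quick numerical sanity check (not a proof, and only for the illustrative special case $n=1$, $Y\sim\mathcal N(0,1)$), the convolution is explicit: $X_\sigma+Y\sim\mathcal N(0,\sigma^2+1)$, so the ratio in (1) can be evaluated in closed form:

```python
import math

def phi(x, sigma):
    """Density of N(0, sigma^2) at x (one-dimensional case)."""
    return math.exp(-x * x / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

def ratio(x, sigma):
    """Phi_sigma(x) / phi_sigma(x) when Y ~ N(0, 1), using that
    X_sigma + Y ~ N(0, sigma^2 + 1), so the convolution is explicit."""
    s = math.sqrt(sigma * sigma + 1.0)
    return phi(x, s) / phi(x, sigma)

x = 2.0
for sigma in (1.0, 10.0, 100.0, 1000.0):
    print(sigma, ratio(x, sigma))
```

The printed ratios approach 1 as $\sigma$ grows, consistent with (1) holding at least for Gaussian $Y$.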
What I have done so far: For any fixed $y\in\Bbb R^n$ and compact set $K\subset\Bbb R^n$, we have
\begin{align*} P(X_\sigma+y\in K)&=\int_{K-y}\phi_\sigma(x)dx=\int_K \phi_\sigma(x-y)dx\\ &=\int_K \phi_\sigma(x) \exp((1/2\sigma^2)(2x^\top y-|y|^2)) dx. \end{align*}
For $\sigma \to \infty$, the term $\exp((1/2\sigma^2)(2x^\top y-|y|^2))$ goes to 1 locally uniformly in $x$ and $y$, so we can infer that
$$ \frac{P(X_\sigma+y\in K)}{P(X_\sigma\in K)} \xrightarrow{\sigma\to\infty} 1 $$
locally uniformly in $y$. Here's where my argument becomes sloppy: For sufficiently large $N\in\Bbb N$ and $\sigma>0$ we should then have
\begin{align*} P(X_\sigma+Y\in K)&\approx \int_{[-N,N]^n} P(X_\sigma+y\in K)\psi(y)\,dy \\ &\approx \int_{[-N,N]^n} P(X_\sigma\in K)\psi(y)\,dy \\ &\approx P(X_\sigma\in K), \end{align*}
but here I just naively change the order of limits with respect to $N$ and $\sigma$. Even if we assume that this can be fixed and
$$ \frac{P(X_\sigma+Y\in K)}{P(X_\sigma\in K)} \xrightarrow{\sigma\to\infty} 1 $$
actually holds, is this enough to conclude that (1) is true?
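(As a side note, the locally uniform convergence of $P(X_\sigma+y\in K)/P(X_\sigma\in K)$ to 1 is easy to check numerically; here is a minimal one-dimensional sketch with the hypothetical choices $K=[-1,1]$ and $y=3$, using the error function for the Gaussian CDF:

```python
import math

def norm_cdf(x, sigma):
    """CDF of N(0, sigma^2) at x."""
    return 0.5 * (1.0 + math.erf(x / (sigma * math.sqrt(2.0))))

def prob_in_interval(y, a, b, sigma):
    """P(X_sigma + y in [a, b]) = P(X_sigma in [a - y, b - y])."""
    return norm_cdf(b - y, sigma) - norm_cdf(a - y, sigma)

y, a, b = 3.0, -1.0, 1.0   # shifted vs. unshifted probability of landing in K = [a, b]
for sigma in (1.0, 10.0, 100.0):
    r = prob_in_interval(y, a, b, sigma) / prob_in_interval(0.0, a, b, sigma)
    print(sigma, r)
```

For small $\sigma$ the ratio is far from 1, while for large $\sigma$ it is close to 1, matching the claim above.)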
For any Borel probability measure $\mu$ on $\mathbb{R}^n$,
$$\phi_\sigma*\mu(x)=\frac{1}{(2\pi\sigma^2)^{n/2}}\int_{\mathbb{R}^n}e^{-\tfrac{|x-y|^2}{2\sigma^2}}\mu(dy)=\frac{1}{(2\pi\sigma^2)^{n/2}}\int_{\mathbb{R}^n}e^{-\tfrac{|x|^2}{2\sigma^2}}e^{\frac{x\cdot y}{\sigma^2}}e^{-\tfrac{|y|^2}{2\sigma^2}}\mu(dy).$$ Hence $$\frac{\phi_\sigma *\mu(x)}{\phi_\sigma(x)}=\int_{\mathbb{R}^n}e^{\frac{x\cdot y}{\sigma^2}}e^{-\tfrac{|y|^2}{2\sigma^2}}\mu(dy)=\mu(\{0\})+\int_{\mathbb{R}^n\setminus\{0\}}e^{\frac{x\cdot y}{\sigma^2}}e^{-\tfrac{|y|^2}{2\sigma^2}}\mu(dy).$$
For fixed $x$, completing the square gives $x\cdot y-\tfrac{|y|^2}{2}\le\tfrac{|x|^2}{2}$, so the integrand is bounded by $e^{|x|^2/(2\sigma^2)}\le e^{|x|^2/2}$ for all $\sigma\ge 1$; since $\mu$ is finite, dominated convergence yields
$$\frac{\phi_\sigma *\mu(x)}{\phi_\sigma(x)}\xrightarrow{\sigma\rightarrow\infty}\mu(\{0\})+\mu(\mathbb{R}^n\setminus\{0\})=1 $$ for every $x\in\mathbb{R}^n$, with no condition on the tails of $\mu$. This holds in particular when $\mu(dx)=\psi(x)\,dx$, in which case $\phi_\sigma*\mu=\phi_\sigma*\psi$.
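To illustrate the formula numerically (a sketch, with an arbitrarily chosen discrete $\mu$ consisting of three atoms in one dimension), the ratio $\int e^{x\cdot y/\sigma^2}e^{-|y|^2/(2\sigma^2)}\,\mu(dy)$ reduces to a finite sum:

```python
import math

# A discrete probability measure mu = sum_k w_k * delta_{y_k} (one-dimensional,
# atoms chosen arbitrarily for illustration).
atoms = [(-5.0, 0.2), (0.0, 0.3), (2.0, 0.5)]   # (location y_k, weight w_k)

def ratio(x, sigma):
    """Evaluates int exp(x*y/sigma^2 - y^2/(2 sigma^2)) mu(dy),
    which equals (phi_sigma * mu)(x) / phi_sigma(x)."""
    s2 = sigma * sigma
    return sum(w * math.exp(x * y / s2 - y * y / (2.0 * s2)) for y, w in atoms)

x = 1.5
for sigma in (1.0, 10.0, 100.0):
    print(sigma, ratio(x, sigma))
```

The sum tends to the total mass $\mu(\mathbb{R})=1$ as $\sigma\to\infty$, exactly as the dominated-convergence argument predicts.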
Comment:
Proof: Let $\nu$ denote the law of $Y$, let $X\sim\mathcal N(0,id)$, and let $A\subset\Bbb R^n$ be a Borel set with $0<\lambda_n(A)<\infty$. Suppose $f\in\mathcal{C}_b(\mathbb{R}^n)$ (i.e. $f$ is a bounded continuous function). Then $$\begin{align} E[f(\sigma X+ Y)|\sigma X\in A]&=\frac{\int \mathbb{1}_A(\sigma x) \Big(\int f(\sigma x+y)\,\nu(dy)\Big)\phi_1(x)\,dx}{\int\mathbb{1}_A(\sigma x)\phi_1(x)\,dx}\\ &=\frac{\int \mathbb{1}_A(x) \Big(\int f( x+y)\,\nu(dy)\Big)\phi_1\big(\tfrac{x}{\sigma}\big)\,dx}{\int\mathbb{1}_A(x)\phi_1\big(\tfrac{x}{\sigma}\big)\,dx}\\ &\xrightarrow{\sigma\rightarrow\infty}\frac{\int\mathbb{1}_A(x)\Big(\int f(x+y)\,\nu(dy)\Big)\,dx}{\int\mathbb{1}_A(x)\,dx}, \end{align}$$ where the last step uses dominated convergence: $\phi_1(x/\sigma)\to\phi_1(0)=(2\pi)^{-n/2}$ pointwise with $0\le\phi_1\le(2\pi)^{-n/2}$, and the constant $(2\pi)^{-n/2}$ cancels in the ratio. This means that the conditional distribution of $\sigma X+Y$ given $\sigma X\in A$ converges weakly to $\big(\frac{1}{\lambda_n(A)}\mathbb{1}_A\,d\lambda_n\big)*\nu$, that is, the convolution of the uniform distribution on $A$ and $\nu$.
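This limit can also be checked numerically in a toy case (a sketch, with the hypothetical choices $n=1$, $A=[0,1]$, $\nu=\delta_c$, i.e. $Y=c$ almost surely, and a midpoint-rule quadrature for the integrals):

```python
import math

def phi1(x):
    """Standard normal density in one dimension."""
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def cond_expectation(f, c, sigma, m=20000):
    """E[f(sigma*X + Y) | sigma*X in A] for A = [0, 1] and Y = c a.s.,
    via the weighted-integral representation (midpoint rule with m nodes)."""
    num = den = 0.0
    for k in range(m):
        x = (k + 0.5) / m
        w = phi1(x / sigma)
        num += f(x + c) * w
        den += w
    return num / den

def target(f, c, m=20000):
    """Limit value: the average of f(x + c) over the uniform distribution on [0, 1]."""
    return sum(f((k + 0.5) / m + c) for k in range(m)) / m

f, c = math.sin, 2.0   # a bounded continuous test function and the atom of nu
for sigma in (1.0, 10.0, 100.0):
    print(sigma, cond_expectation(f, c, sigma), target(f, c))
```

As $\sigma$ grows, the conditional expectation approaches the average of $f$ against the uniform distribution on $A$ shifted by $c$, matching the weak limit $\big(\frac{1}{\lambda_1(A)}\mathbb{1}_A\,d\lambda_1\big)*\delta_c$.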