Let $d\in\mathbb N$, $\Omega\subseteq\mathbb R^d$ and $\rho:\Omega\to[0,\infty)$. I'm trying to understand the approximation of $\rho$ described on p. 3 of this paper.
The author is doing the following: Let $$\varphi(x):=e^{-\frac12x^2}\;\;\;\text{for }x\in\mathbb R$$ (or maybe $\varphi(x):=\frac1{\sqrt{2\pi}}e^{-\frac12x^2}$; it's not totally clear from the paper), $$\sigma(x):=\rho(x)^{-\frac1d}\;\;\;\text{for }x\in\Omega,$$ $$\psi(x,y):=\frac1{\sigma(x)^d}\varphi\left(\frac{\|x-y\|}{\sigma(x)}\right)\;\;\;\text{for }x,y\in\Omega$$ and $$A(x,y):=\sum_{i=1}^n\psi(x_i,y)\;\;\;\text{for }x\in\Omega^n\text{ and }y\in\Omega.$$
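For concreteness, here is a minimal sketch of this construction (the function names are mine, not the paper's, and I assume the normalized convention $\varphi(x)=\frac1{\sqrt{2\pi}}e^{-\frac12x^2}$; drop the factor for the other one):

```python
import numpy as np

SQRT_2PI = np.sqrt(2.0 * np.pi)

def phi(t):
    # Normalized Gaussian; remove the 1/sqrt(2*pi) for the unnormalized convention.
    return np.exp(-0.5 * t**2) / SQRT_2PI

def make_A(x, rho, d=1):
    """Return the function A(x, .) for sample points x (n points in R^d) and density rho."""
    x = np.asarray(x, dtype=float).reshape(-1, d)
    sigma = rho(x) ** (-1.0 / d)  # sigma(x_i) = rho(x_i)^(-1/d)
    def A(y):
        # A(x, y) = sum_i psi(x_i, y), psi(x_i, y) = phi(||x_i - y|| / sigma(x_i)) / sigma(x_i)^d
        dist = np.linalg.norm(x - np.asarray(y, dtype=float), axis=1)
        return np.sum(phi(dist / sigma) / sigma**d)
    return A
```

This is just a direct transcription of the formulas above, not the author's code.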
The author is claiming that, given $x\in\Omega^n$, we can approximate $\rho$ by $A(x,\;\cdot\;)$.
He even shows an illustrative example for $d=1$:

Now simply assume that $\rho=1$. Then the Gaussian kernels clearly have overlapping (effective) supports. In the example above, by contrast, for every $i\in\{1,\ldots,n\}$ only a single kernel is non-vanishing at $x_i$. So shouldn't we somehow need to incorporate the distances between the $x_i$ in order to ensure that the supports are suitably separated?
Without that, $A(x,\;\cdot\;)$ is obviously not necessarily an approximation of $\rho=1$. Simply take $n=100$ and $x_i=\frac in$. Then, for example,
```
y = 0:    A(x, y) = 33.2128
y = 0.01: A(x, y) = 33.3673
y = 0.02: A(x, y) = 33.5193
y = 0.03: A(x, y) = 33.6689
y = 0.04: A(x, y) = 33.8159
y = 0.05: A(x, y) = 33.9602
y = 0.06: A(x, y) = 34.102
y = 0.07: A(x, y) = 34.2411
y = 0.08: A(x, y) = 34.3775
y = 0.09: A(x, y) = 34.5111
```