The article *“Neural-Gas” Network for Vector Quantization and its Application to Time-Series Prediction*$^1$ presents the function
\begin{equation} E_\lambda = \frac{1}{2 C(\lambda)} \sum_{i=1}^N \int\limits_{x \in X} p\left(x\right) h_\lambda\left(k_i\right) \left(x - w_i\right)^2 \, \mathrm{d}^D x \end{equation}
where $X \subseteq \mathbb{R}^D$ is a manifold, $p(x)$ is a probability density function, the values of $k_i$ are unimportant to the question, and
$$ h_\lambda\left(k\right) = e^{-k/\lambda}, \qquad C\left(\lambda\right) = \sum_{k=0}^{N-1} h_\lambda\left(k\right).$$
On page 3 of the article it is stated that
> For $\lambda \rightarrow \infty$, the cost function $E_\lambda$ becomes parabolic [...].
By the context, I assume that "with respect to the vectors $w_i$" is implied. Going to the limit I get
$$E_\lambda \rightarrow \frac{1}{2N} \sum_{i=1}^N \int\limits_{x \in X} p(x) \left(x - w_i\right)^2 \, \mathrm{d}^D x, \hspace{20pt}\lambda \rightarrow \infty.$$
How can you tell that this function is parabolic?
$^1$ Martinetz et al., *IEEE Transactions on Neural Networks*, vol. 4, no. 4, 1993.
In this case, it's not actually the integral form $E_\lambda$ that is said to be parabolic, but its discrete counterpart $\hat{E}_\lambda$, obtained when the manifold $X$ is replaced by a finite set $V$ of samples drawn from the distribution with density $p(\mathbf{x})$:
$$ \hat{E}_\lambda = \frac{1}{2C\left(\lambda\right)} \sum\limits_{i=1}^N \sum\limits_{\mathbf{v} \in V} h_\lambda\left(k_i\right) \left(\mathbf{v} - \mathbf{w}_i\right)^2. $$
Letting $\lambda \rightarrow \infty$ results in
$$ \hat{E}_\lambda \rightarrow \frac{1}{2N} \sum\limits_{i=1}^N \sum\limits_{\mathbf{v} \in V} \left(\mathbf{v} - \mathbf{w}_i\right)^2, \hspace{20pt} \lambda \rightarrow \infty$$
which is parabolic with respect to the vectors $\mathbf{w}_i$: each $\mathbf{w}_i$ enters only through the quadratic term $\frac{1}{2N}\sum_{\mathbf{v}\in V}\left(\mathbf{v}-\mathbf{w}_i\right)^2$, whose Hessian $\frac{|V|}{N}\,\mathbf{I}$ is positive definite, so the limit is a paraboloid in each $\mathbf{w}_i$, minimized when every $\mathbf{w}_i$ equals the sample mean of $V$.
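As a sanity check, here is a small numerical sketch (NumPy; the variable names and random data are mine, not the paper's) that evaluates $\hat{E}_\lambda$ for a large $\lambda$ and compares it against the quadratic limit $\frac{1}{2N}\sum_i\sum_{\mathbf{v}\in V}\left(\mathbf{v}-\mathbf{w}_i\right)^2$:

```python
import numpy as np

# Numerical sketch: for large lambda, the discrete neural-gas cost
# E_hat(lambda) should approach the limit 1/(2N) * sum_i sum_v ||v - w_i||^2.
rng = np.random.default_rng(0)
N, D = 5, 2                       # number of reference vectors, dimension
V = rng.normal(size=(100, D))     # finite sample set standing in for X
W = rng.normal(size=(N, D))       # reference vectors w_i

# squared distances ||v - w_i||^2, shape (N, |V|)
d2 = ((W[:, None, :] - V[None, :, :]) ** 2).sum(axis=-1)

def E_hat(lam):
    # k_i(v): rank of w_i by closeness to v (0 = closest), per sample
    k = d2.argsort(axis=0).argsort(axis=0)
    h = np.exp(-k / lam)
    C = np.exp(-np.arange(N) / lam).sum()  # C(lambda) = sum_{k=0}^{N-1} e^{-k/lambda}
    return (h * d2).sum() / (2 * C)

limit = d2.sum() / (2 * N)
print(E_hat(1e6), limit)  # the two values agree for large lambda
```

For small $\lambda$ the ranking weights $h_\lambda(k_i)$ concentrate on the closest reference vector, so $\hat{E}_\lambda$ falls below the limit; only as $\lambda \rightarrow \infty$ do all ranks contribute equally.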