It is known that:
$$\int_{-\infty}^\infty f(x) \, \delta(g(x)) \, dx = \sum_{i}\frac{f(x_i)}{|g'(x_i)|}$$
where $x_i$ are the roots of $g(x)$.
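As a numerical sanity check of this rule (a sketch of my own, not part of the question): replace $\delta$ with a narrow Gaussian and integrate. The choices $f(x) = e^x$ and $g(x) = x^2 - 1$ are illustrative; $g$ has simple roots at $\pm 1$ with $|g'(\pm 1)| = 2$.

```python
import numpy as np

SIGMA = 1e-4  # width of the nascent delta; any sufficiently small value works

def delta(y, sigma=SIGMA):
    """Narrow Gaussian approximation to the Dirac delta."""
    return np.exp(-y**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

f = np.exp                 # f(x) = e^x
g = lambda x: x**2 - 1     # simple roots at x = -1, +1 with |g'(x)| = 2

x = np.linspace(-5, 5, 2_000_001)
numeric = np.sum(f(x) * delta(g(x))) * (x[1] - x[0])  # Riemann sum

# The rule predicts f(1)/|g'(1)| + f(-1)/|g'(-1)| = (e + 1/e)/2 ≈ 1.5431.
predicted = (np.e + 1 / np.e) / 2
print(numeric, predicted)
```

The two printed numbers agree to within the accuracy of the finite width and grid.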
My question is: what happens when $g'(x_i)$ is zero but $f(x_i)$ is not? It seems that the integral on the left should exist irrespective of the value of $g'(x)$, so:
Is there a different formula for the integral one should use in this case, or conversely, is this indeed an indicator that the integral diverges?
Bounty Edit
As user88595 has explained this may be a consequence of $g(x)$ not having a simple root. I'm looking for a proof or a counterexample that for any $g(x)$ which does not have a simple root at $x_i$, the integral diverges.
Edit:
I thought I'd give a concrete example. Let's first look at: $$\int_{-\pi/2}^{\pi/2}e^x \delta\left(\sin(x)\right)dx$$ Since $\sin x$ has only one zero in the interval, and since its derivative there is one, the integral equals $e^0=1$. Now look at: $$\int_{-\pi/2}^{\pi/2}e^x \delta\left(\sin^3(x)\right)dx$$ which the above rule sends to infinity (since $g'(0) = 0$), and which indeed diverges. Is this the case in general?
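A quick numerical illustration of both integrals (again a sketch, with a narrow Gaussian standing in for $\delta$): the simple-root integral settles at $1$, while the cubed-root integral keeps growing as the width shrinks.

```python
import numpy as np

def delta(y, sigma):
    """Narrow Gaussian approximation to the Dirac delta."""
    return np.exp(-y**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

x = np.linspace(-np.pi / 2, np.pi / 2, 4_000_001)
dx = x[1] - x[0]

for sigma in (1e-2, 1e-3, 1e-4):
    simple = np.sum(np.exp(x) * delta(np.sin(x), sigma)) * dx
    cubed = np.sum(np.exp(x) * delta(np.sin(x)**3, sigma)) * dx
    print(f"sigma={sigma:g}  simple root: {simple:.4f}  cubed root: {cubed:.1f}")
```

The first column stays pinned at $e^0 = 1$; the second grows roughly like $\sigma^{-2/3}$, consistent with divergence.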
The statement is true.
Let $g(x)$ be everywhere differentiable and $f(x)$ continuous, both real-valued functions defined for all real $x$. If $f(x_i)\neq 0$ and $g'(x_i)=0$ for some $x_i$ satisfying $g(x_i)=0$, then $\int_{-\infty}^{\infty} f(x)\, \delta(g(x))\, dx$ diverges. The converse holds in a restricted sense: if $f(x_i)\neq 0$ and $g'(x_i)\neq 0$ for all $x_i$ satisfying $g(x_i)=0$, and there are only finitely many such points, then $\int_{-\infty}^{\infty} f(x)\, \delta(g(x))\, dx$ is finite.
The support of the delta function is an infinitesimal neighborhood of $0$, so we can restrict the integral to small open neighborhoods of the roots $x_i$; label them $U_i$.
Consider first the case where $g$ is strictly monotonic on each $U_i$. Then $g$ has a local inverse $G_i$ with $G_i(g(x)) = x$ for $x\in U_i$. Furthermore, a continuous strictly monotonic function maps open intervals to open intervals, so $V_i = g[U_i]$ is open.
Then you can write $$\int_{-\infty}^{\infty} f(x) \delta(g(x)) dx =\sum_i \int_{U_i} f(x) \delta(g(x)) dx =\sum_i \int_{V_i} f(G_i(g)) \delta(g) \frac{dG_i}{dg} dg\,.$$ (If $g$ is decreasing on $U_i$, the limits of integration flip, which cancels the sign of $\frac{dG_i}{dg}$; this is the origin of the absolute value in the formula quoted in the question.) Writing $G'_i = \frac{dG_i}{dg}$, the definition of the delta function gives $$\sum_i \int_{V_i} f(G_i(g)) G'_i(g) \delta(g) dg = \sum_i f(G_i(0)) G'_i(0)\,.$$
Thus if $f(x_i) = f(G_i(0)) \neq 0$, the result is finite iff $G'_i(0)$ is finite. By the inverse function theorem, $G'_i(0) = 1/g'(x_i)$, so the expression is finite exactly when $g'(x_i)\neq 0$.
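The relation $G_i'(0) = 1/g'(x_i)$ is easy to check numerically. A minimal sketch, using my own example $g(x) = 2x + \sin x$ (strictly increasing, root at $x_0 = 0$, $g'(0) = 3$), inverting $g$ by bisection:

```python
import math

def g(x):
    # Illustrative choice: strictly increasing, g(0) = 0, g'(0) = 3.
    return 2 * x + math.sin(x)

def g_inverse(y, lo=-10.0, hi=10.0):
    """Invert the increasing function g on [lo, hi] by bisection."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if g(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Central finite-difference derivative of the inverse at y = 0.
h = 1e-6
G_prime = (g_inverse(h) - g_inverse(-h)) / (2 * h)
print(G_prime, 1 / 3)   # G'(0) should equal 1/g'(0) = 1/3
```

The finite-difference value matches $1/g'(0) = 1/3$ to high accuracy.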
The case where $g$ has a local maximum or minimum at some $x_i$ is a bit more subtle: there is no inverse there, so the previous argument does not apply. Suppose $g$ is sufficiently differentiable that near a minimum $g(x) \approx a\, (x-x_i)^{2n}$ on an infinitesimal neighborhood $U_i$, with $a> 0$ real and $n\geq 1$ an integer (a maximum reduces to this case by replacing $g$ with $-g$, since $\delta(-g)=\delta(g)$). The delta function can be represented formally as the limit of the Gaussian family $\phi_\sigma(x) = (2\pi \sigma^2)^{-1/2}\exp(-x^2/2\sigma^2)$ as $\sigma\rightarrow 0$. Then, taking $a = 1$ for simplicity,
$$\int_{U_i} f(x) \delta(g(x)) dx = \lim_{\sigma\rightarrow 0}\int_{U_i-x_i}\frac{1}{\sqrt{2\pi \sigma^2}} f(x_i + z)\exp(-z^{4n}/2\sigma^2) dz\,. $$
Since $f$ is continuous, on a sufficiently small neighborhood we have $f(x_i)-c < f(x) < f(x_i)+c$; because $f(x_i)\neq 0$, we may take $c$ small enough that both bounds have the same sign (say $f(x_i) > 0$, so $f(x_i)-c > 0$). Replacing $f(x_i+z)$ by the constants $f(x_i)\pm c$ gives upper and lower bounds that factor out of the integral. The change of variables $x = \sigma^{1/m} y$ now shows that $$\frac{1}{\sqrt{2\pi \sigma^2}} \int_{-\infty}^{\infty} \exp(-x^{2m}/2\sigma^{2})\, dx =\sigma^{(1-m)/m}\times\frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \exp(-y^{2m}/2)\, dy\,. $$ The second factor is a finite constant, so the left-hand side is asymptotically proportional to $\sigma^{(1-m)/m}$. In our integral the exponent is $z^{4n}$, i.e. $m = 2n \geq 2$, so the bound scales like $\sigma^{(1-2n)/(2n)}$, which goes to infinity as $\sigma\rightarrow 0$. Since the lower bound of the integral goes to infinity, the integral is divergent.
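This scaling can also be verified numerically (my own check, not part of the argument): evaluate the normalized integral of $\exp(-x^{2m}/2\sigma^2)$ by a Riemann sum for shrinking $\sigma$.

```python
import numpy as np

def peak_integral(m, sigma):
    """Riemann sum for (2*pi*sigma^2)**(-1/2) * integral of exp(-x**(2m)/(2 sigma^2))."""
    x = np.linspace(-3, 3, 2_000_001)
    weights = np.exp(-x**(2 * m) / (2 * sigma**2))
    return np.sum(weights) * (x[1] - x[0]) / np.sqrt(2 * np.pi * sigma**2)

# m = 1 is an ordinary Gaussian: the value is 1 for every sigma.
# m = 2 is the quartic produced by a double root: it scales like sigma**(-1/2).
for sigma in (1e-2, 1e-3, 1e-4):
    print(f"sigma={sigma:g}  m=1: {peak_integral(1, sigma):.4f}  m=2: {peak_integral(2, sigma):.1f}")
```

The $m=1$ column stays at $1$, while the $m=2$ column grows by a factor of $\sqrt{10}$ each time $\sigma$ drops by a factor of $10$, as $\sigma^{(1-m)/m} = \sigma^{-1/2}$ predicts.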