Laplace's method: multivariate case with error term


The $d$-dimensional case of Laplace's method is usually given as something like

$$ \int_D h(\mathbf{x})e^{-Mf(\mathbf{x})}d\mathbf{x} \sim \Big(\frac{2\pi}{M}\Big)^{d/2} \frac{h(\mathbf{x_0})e^{-Mf(\mathbf{x_0})}}{|Hess_f(\mathbf{x_0})|^{1/2}} \quad \text{ as }M\to\infty. $$ Here $f$ and $h$ must be sufficiently smooth, and $f$ must attain the unique minimum $\mathbf{x_0}$ in the interior of $D$.

Ronald W. Butler's "Saddlepoint Approximations with Applications" gives a similar result (formula 3.28), which also includes the relative error term $O(M^{-1})$:

$$ \int_D e^{-Mf(\mathbf{x})}\,d\mathbf{x} = \Big(\frac{2\pi}{M}\Big)^{d/2} \frac{e^{-Mf(\mathbf{x_0})}}{|\operatorname{Hess} f(\mathbf{x_0})|^{1/2}}\big(1+O(M^{-1}) \big) \quad \text{ as }M\to\infty. $$ However, this version does not include the additional factor $h(\mathbf{x})>0$.

My question is: is there a book or paper that proves, or at least quotes, the multivariate Laplace approximation with both the remainder term and the additional factor $h(\mathbf{x})$? That is, is the following true, under some assumptions: $$ \int_D h(\mathbf{x})e^{-Mf(\mathbf{x})}\,d\mathbf{x} = \Big(\frac{2\pi}{M}\Big)^{d/2} \frac{h(\mathbf{x_0})e^{-Mf(\mathbf{x_0})}}{|\operatorname{Hess} f(\mathbf{x_0})|^{1/2}}\big(1+O(M^{-1}) \big) \quad \text{ as }M\to\infty. $$

My motivation is that this paper seems to use the above (very strong) form of Laplace's method, but I'm having trouble justifying it.
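For what it's worth, the claimed $O(M^{-1})$ relative error can at least be checked numerically. The sketch below (my own illustrative choice of $f$, $h$, and domain, assuming SciPy is available) compares the leading-order approximation against quadrature and tracks $M \cdot {}$(relative error), which should stay roughly constant if the error is indeed $O(M^{-1})$:

```python
import numpy as np
from scipy import integrate

# Illustrative test functions (my own choice, not from the question):
# f has a unique interior minimum at the origin with Hess f(0,0) = diag(2, 1),
# and h is smooth and strictly positive on the domain.
f = lambda x, y: x**2 + 0.5 * y**2 + 0.1 * x**2 * y**2
h = lambda x, y: 1.0 + 0.3 * x + 0.2 * y**2

d = 2
hess_det = 2.0 * 1.0  # det Hess f(0,0) = det diag(2, 1)

def laplace(M):
    """Leading-order Laplace approximation at x0 = (0, 0), where f(x0) = 0."""
    return (2.0 * np.pi / M) ** (d / 2) * h(0.0, 0.0) / np.sqrt(hess_det)

def rel_error(M):
    """Relative error of the approximation against numerical quadrature."""
    exact, _ = integrate.dblquad(
        lambda y, x: h(x, y) * np.exp(-M * f(x, y)), -2.0, 2.0, -2.0, 2.0)
    return abs(exact / laplace(M) - 1.0)

for M in (50, 100, 200):
    print(M, rel_error(M), M * rel_error(M))
```

In this example the relative error shrinks roughly in proportion to $1/M$, consistent with the claim; of course this is numerical evidence, not a proof.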


The following reference uses the Morse lemma method and gives the type of formula you seek:

Wong, Roderick, *Asymptotic Approximation of Integrals*, Academic Press, 05/10/2014, ISBN 0-12-762535-6, 978-0-12-762535-5; Chapter 9, Theorem 2, p. 483.


In *Advanced Complex Analysis, Part B* (AMS, 2015, pp. 174–175), Simon states and proves the following version of Laplace's method:

Theorem: Let $f,g$ be real-valued functions on $\mathbb{R}^d$ such that

  1. $g_-:=\inf_{x\in\mathbb{R}^d} g(x)>-\infty$,
  2. $f\geq0$,
  3. there is a unique $x_0\in\mathbb{R}^d$ such that $g(x_0)=g_-$,
  4. for some $R>0$, $g_R:=\inf_{|x|\geq R}g(x)>g_-$,
  5. for some $\alpha$, $\int e^{-\alpha g(x)}f(x)\,dx<\infty$,
  6. $A=D^2g(x_0)$ is strictly positive definite,
  7. $f(x_0)\neq0$,
  8. $f,g$ are $C^\infty$ in a neighborhood of $x_0$.

Then, with $Q(s):=\int e^{-sg(x)} f(x)\,dx$, $$s^{d/2} e^{sg_-} Q(s)\sim (2\pi)^{d/2} f(x_0) \operatorname{det}(A)^{-1/2}\Big(1+\sum^\infty_{j=1}a_js^{-j}\Big)$$ as $s\rightarrow\infty$.

The coefficients $a_j$ have explicit, though quite complicated, formulas in terms of the derivatives of $f$ and $g$ at $x_0$. For $d=1$ and constant $f$, $$a_1=-\frac18\frac{g^{(4)}(x_0)}{(g^{(2)}(x_0))^2}+\frac{5}{24}\frac{(g^{(3)}(x_0))^2}{(g^{(2)}(x_0))^3}.$$

The proof given here is based on Morse's lemma.
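The $d=1$ coefficient can be sanity-checked numerically. The sketch below (my own example, assuming SciPy) takes $g(x)=\cosh(x)-1$ and $f\equiv 1$, so that $x_0=0$, $g_-=0$, $g^{(2)}(0)=1$, $g^{(3)}(0)=0$, $g^{(4)}(0)=1$, and the formula predicts $a_1=-\frac18$. It extracts $a_1$ from $Q(s)$ computed by quadrature:

```python
import numpy as np
from scipy.integrate import quad

def Q(s):
    """Q(s) = int exp(-s*g(x)) dx with g(x) = cosh(x) - 1, f = 1.

    The tails beyond |x| = 5 are negligible for the values of s used here,
    since g(5) is about 73.
    """
    val, _ = quad(lambda x: np.exp(-s * (np.cosh(x) - 1.0)), -5.0, 5.0)
    return val

def a1_estimate(s):
    """Estimate a_1 as s * (s^{1/2} e^{s g_-} Q(s) / leading_term - 1)."""
    leading = np.sqrt(2.0 * np.pi)  # (2 pi)^{1/2} f(x0) g''(x0)^{-1/2}
    return s * (np.sqrt(s) * Q(s) / leading - 1.0)

for s in (20.0, 80.0, 320.0):
    print(s, a1_estimate(s))  # should approach a_1 = -1/8 = -0.125
```

Here $Q(s) = 2e^{s}K_0(s)$ (a modified Bessel function), whose classical asymptotic expansion has first correction $-\frac{1}{8s}$, so the estimates converge to $-\frac18$ as predicted.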