Stirling's formula: proof?


Suppose we want to show that $$ n! \sim \sqrt{2 \pi} n^{n+(1/2)}e^{-n}$$

Instead we could show that $$\lim_{n \to \infty} \frac{n!}{n^{n+(1/2)}e^{-n}} = C$$ where $C$ is a constant. Maybe $C = \sqrt{2 \pi}$.

What is a good way of doing this? Could we use L'Hopital's Rule? Or maybe take the log of both sides (e.g., compute the limit of the log of the quantity)? So for example do the following $$\lim_{n \to \infty} \log \left[\frac{n!}{n^{n+(1/2)}e^{-n}} \right] = \log C$$
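
Before any proof, the limit is easy to probe numerically (a quick sketch, not part of any argument; `math.lgamma` computes $\log n!$ so large $n$ doesn't overflow):

```python
import math

# Ratio n! / (n^(n+1/2) e^{-n}), computed in logs to avoid overflow.
def ratio(n):
    return math.exp(math.lgamma(n + 1) - (n + 0.5) * math.log(n) + n)

for n in (10, 100, 1000, 10000):
    print(n, ratio(n))

# The printed values settle toward sqrt(2*pi) ~ 2.5066.
```

Already at $n=10$ the ratio is within about $1\%$ of $\sqrt{2\pi}$, consistent with the guess $C=\sqrt{2\pi}$.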

8 Answers

---

The way I've usually shown Stirling's Asymptotic Approximation starts with the formula $$ \begin{align} n! &=\int_0^\infty x^ne^{-x}\;\mathrm{d}x\\ &=\int_0^\infty e^{-x+n\log(x)}\;\mathrm{d}x\\ &=\int_0^\infty e^{-nx+n\log(nx)}n\;\mathrm{d}x\\ &=n^{n+1}\int_0^\infty e^{-nx+n\log(x)}\;\mathrm{d}x\\ &=n^{n+1}e^{-n}\int_{-1}^\infty e^{-nx+n\log(1+x)}\;\mathrm{d}x\\ &=n^{n+1}e^{-n}\int_{-\infty}^\infty e^{-nu^2/2}x'\;\mathrm{d}u\tag{1} \end{align} $$ Where $u^2/2=x-\log(1+x)$. That is, $u(x)=x\sqrt{\frac{x-\log(1+x)}{x^2/2}}$, which is analytic near $x=0$: the singularity of $\frac{x-\log(1+x)}{x^2/2}$ at $x=0$ is removable, and since $\frac{x-\log(1+x)}{x^2/2}$ is near $1$ when $x$ is near $0$, $\sqrt{\frac{x-\log(1+x)}{x^2/2}}$ is analytic near $x=0$. Since $u(0)=0$ and $u'(0)=1$, the Lagrange Inversion Theorem says that $x$ is an analytic function of $u$ near $u=0$, with $x(0)=0$ and $x'(0)=1$.

Note that since $u^2/2=x-\log(1+x)$, we have $$ u(1+x)=xx'\tag{2} $$ from which it follows that $\lim\limits_{u\to+\infty}\dfrac{x'(u)}{u}=1$ and $\lim\limits_{u\to-\infty}\dfrac{x'(u)}{ue^{-u^2/2}}=-e^{-1}$ (as $u\to-\infty$, $x\to-1^+$ and $1+x=e^{x-u^2/2}\to e^{-1-u^2/2}$). That is, $x'(u)=O(u)$ for $|u|$ near $\infty$.

Because $x$ is an analytic function with $x(0)=0$ and $x'(0)=1$, $$ x=\sum_{k=1}^\infty a_ku^k\tag{3} $$ where $a_1=1$. Then, looking at the coefficient of $u^n$ in $(2)$, we get $$ \begin{align} a_{n-1} &=\sum_{k=1}^na_{n-k+1}ka_k\\ &=\sum_{k=1}^na_k(n-k+1)a_{n-k+1}\\ &=\frac{n+1}{2}\sum_{k=1}^na_ka_{n-k+1}\tag{4} \end{align} $$ Therefore, we get the recursion $$ a_n=\frac{a_{n-1}}{n+1}-\frac{1}{2}\sum_{k=2}^{n-1}a_ka_{n-k+1}\tag{5} $$ Thus, we get the beginning of the power series for $x(u)$ to be $$ x=u+\tfrac{1}{3}u^2+\tfrac{1}{36}u^3-\tfrac{1}{270}u^4+\tfrac{1}{4320}u^5+\tfrac{1}{17010}u^6-\tfrac{139}{5443200}u^7+\tfrac{1}{204120}u^8+\dots\tag{6} $$ Integration by parts yields the following two identities: $$ \int_{-\infty}^\infty e^{-nu^2}|u|^{2k+1}\;\mathrm{d}u=\frac{k!}{n^{k+1}}\tag{7a} $$ and $$ \int_{-\infty}^\infty e^{-nu^2}u^{2k}\;\mathrm{d}u=\frac{(2k-1)!!}{2^kn^{k+1/2}}\sqrt{\pi}\tag{7b} $$ Furthermore, we have the following tail estimates: $$ \begin{align} \int_{|u|>a}e^{-nu^2}\;\mathrm{d}u &=\int_a^\infty e^{-nu^2}u^{-1}\;\mathrm{d}u^2\\ &=\int_{a^2}^\infty e^{-nu}u^{-1/2}\;\mathrm{d}u\\ &\le\frac{1}{a}\int_{a^2}^\infty e^{-nu}\;\mathrm{d}u\\ &=\frac{1}{na}\int_{na^2}^\infty e^{-u}\;\mathrm{d}u\\ &=\frac{1}{na}e^{-na^2}\tag{8a} \end{align} $$ and for $k\ge1$, $$ \begin{align} \int_{|u|>a}e^{-nu^2}|u|^{k}\;\mathrm{d}u &=\int_a^\infty e^{-nu^2}u^{k-1}\;\mathrm{d}u^2\\ &=\int_{a^2}^\infty e^{-nu}u^{(k-1)/2}\;\mathrm{d}u\\ &=n^{-(k+1)/2}\int_{na^2}^\infty e^{-u}u^{(k-1)/2}\;\mathrm{d}u\\ &=n^{-(k+1)/2}\int_{na^2}^\infty e^{-u/2}e^{-u/2}u^{(k-1)/2}\;\mathrm{d}u\\ &\le n^{-(k+1)/2}\int_{na^2}^\infty e^{-u/2}\left(\frac{k-1}{e}\right)^{(k-1)/2}\;\mathrm{d}u\\ &=\frac{2}{n}\left(\frac{k-1}{ne}\right)^{(k-1)/2}e^{-na^2/2}\tag{8b} \end{align} $$ The tail estimates in $(8)$ are $O\left(\frac{1}{n}e^{-na^2/2}\right)$ for all $k\ge0$. That is, they decay faster than any power of $n$.

Define $$ f_k(u)=x'(u)-a_1-2a_2u-\dots-2ka_{2k}u^{2k-1}\tag{9} $$ Since $x(u)$ is analytic near $u=0$, $f_k(u)=O\left(u^{2k}\right)$ near $u=0$. Since $x'(u)=O(u)$ for $|u|$ near $\infty$, $f_k(u)=O\left(u^{2k-1}\right)$ for $|u|$ near $\infty$. Therefore, $$ \begin{align} \int_{-\infty}^\infty e^{-nu^2/2}f_k(u)\;\mathrm{d}u &=\int_{|u|< a}e^{-nu^2/2}f_k(u)\;\mathrm{d}u + \int_{|u|> a}e^{-nu^2/2}f_k(u)\;\mathrm{d}u\\ &\le\int_{|u|< a}e^{-nu^2/2}C_1u^{2k}\;\mathrm{d}u + \int_{|u|> a}e^{-nu^2/2}C_2|u|^{2k-1}\;\mathrm{d}u\\ &=O\left(\frac{1}{n^{k+1/2}}\right)\tag{10} \end{align} $$ Therefore, using $\mathrm{(7b)}$ and $(10)$, we get $$ \begin{align} n! &=n^{n+1}e^{-n}\int_{-\infty}^\infty e^{-nu^2/2}\left(1+\tfrac{1}{12}u^2+\tfrac{1}{864}u^4-\tfrac{139}{777600}u^6+f_4(u)\right)\;\mathrm{d}u\\ &+\;n^{n+1}e^{-n}\int_{-\infty}^\infty e^{-nu^2/2}\left(\tfrac{2}{3}u-\tfrac{2}{135}u^3+\tfrac{1}{2835}u^5+\tfrac{1}{25515}u^7\right)\;\mathrm{d}u\\ &=\sqrt{2n}\;n^ne^{-n}\left(\int_{-\infty}^\infty e^{-u^2}\left(1+\tfrac{1}{6n}u^2+\tfrac{1}{216n^2}u^4-\tfrac{139}{97200n^3}u^6\right)\;\mathrm{d}u+O\left(\frac{1}{n^4}\right)\right)\\ &=\sqrt{2\pi n}\;n^ne^{-n}\left(1+\frac{1}{12n}+\frac{1}{288n^2}-\frac{139}{51840n^3}+O\left(\frac{1}{n^4}\right)\right)\tag{11} \end{align} $$
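
The expansion in $(11)$ can also be sanity-checked numerically: with the three correction terms included, the relative error should fall off like $1/n^4$, so doubling $n$ should shrink it by roughly $16$ (a quick sketch using `math.lgamma`):

```python
import math

def series(n):
    # the bracket in (11), truncated at the 1/n^3 term
    return 1 + 1/(12*n) + 1/(288*n**2) - 139/(51840*n**3)

def rel_err(n):
    log_approx = 0.5*math.log(2*math.pi*n) + n*math.log(n) - n + math.log(series(n))
    return abs(math.expm1(math.lgamma(n + 1) - log_approx))

for n in (10, 20, 40):
    print(n, rel_err(n))  # each value roughly 1/16 of the previous
```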

---

Inspired by the two references below, here's a short proof stripped of motivation and details.

For $t>0$, define

$$ g_t(y) = \begin{cases} \displaystyle \left(1+\frac{y}{\sqrt{t}}\right)^{\!t} \,e^{-y\sqrt{t}} & \text{if } y>-\sqrt{t}, \\ 0 & \text{otherwise}. \end{cases} $$

It is not hard to show that $t\mapsto g_t(y)$ is decreasing for $y\geq 0$ and increasing for $y\leq 0$. The limit is $g_\infty(y)=\exp(-y^2/2)$, so monotone convergence gives

$$ \lim_{t\to\infty}\int_{-\infty}^\infty g_t(y)\,dy = \int_{-\infty}^\infty \exp(-y^2/2)\,dy = \sqrt{2\pi}. $$

Use the change of variables $x=y\sqrt{t}+t$ to get

$$ t! = \int_0^\infty x^t e^{-x}\,dx = \left(\frac{t}{e}\right)^t \sqrt{t} \int_{-\infty}^\infty g_t(y)\,dy. $$
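
The convergence is easy to watch numerically. A crude midpoint-rule sketch (the window $[-50,50]$ and step count are arbitrary but ample; note $\int g_t = t!\,e^t/(t^t\sqrt{t})$ exactly, so at $t=1$ the printed value should be $e$):

```python
import math

def g(t, y):
    # g_t(y) from above, computed in logs for stability
    if y <= -math.sqrt(t):
        return 0.0
    return math.exp(t * math.log1p(y / math.sqrt(t)) - y * math.sqrt(t))

def integral(t, lo=-50.0, hi=50.0, steps=100_000):
    # composite midpoint rule
    h = (hi - lo) / steps
    return h * sum(g(t, lo + (i + 0.5) * h) for i in range(steps))

for t in (1.0, 10.0, 100.0, 1000.0):
    print(t, integral(t))  # decreases toward sqrt(2*pi) ~ 2.5066
```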

References:

[1] J.M. Patin, A very short proof of Stirling's formula, American Mathematical Monthly 96 (1989), 41-42.

[2] Reinhard Michel, The $(n+1)$th proof of Stirling's formula, American Mathematical Monthly 115 (2008), 844-845.

---

Depending on one's preferences, one might care that it is possible to understand Stirling's formula (perhaps really due to Laplace?) in a usefully more general context, namely, as an example of a "Laplace's method" or "stationary phase" sort of treatment of asymptotics of integrals.

This is available on-line on my page https://www-users.cse.umn.edu/~garrett/m/v/, with the file being https://www-users.cse.umn.edu/~garrett/m/v/basic_asymptotics.pdf

One might object to certain very-classical treatments which make Gamma appear as a singular thing. While I agree that it is of singular importance in applications within and without mathematics, the means of establishing its asymptotics are not.

---

Instead of a proof, how about a string of hints? This comes from Maxwell Rosenlicht's Introduction to Analysis (a great little easy-to-read text which is dirt cheap -- it's a Dover paperback).

  • Chapter VI: Problem #22 Show that for $n=1,2,3,\dots$ we have that $$ 1+\frac{1}{2}+\frac{1}{3}+\cdots+\frac{1}{n}-\ln(n)$$ is positive and it decreases as $n$ increases. Thus this converges to a number between 0 and 1 (Euler's constant).

  • Chapter VII: Problem #39 For $n=0,1,2,\dots$ let $I_n=\int_0^{\pi/2} \sin^n(x)\,dx$. Show that

(a) $\frac{d}{dx}\left(\cos(x)\sin^{n-1}(x)\right) = (n-1)\sin^{n-2}(x)-n\sin^n(x)$

(b) $I_n = \frac{n-1}{n}I_{n-2}$ if $n \geq 2$

(c) $I_{2n} = \frac{1\cdot 3\cdot 5\cdots (2n-1)}{2\cdot 4\cdot 6\cdots (2n)}\frac{\pi}{2}$ and $I_{2n+1} = \frac{2\cdot 4\cdot 6\cdots (2n)}{3\cdot 5\cdot 7\cdots (2n+1)}$ for $n=1,2,3,\dots$

(d) $I_0, I_1, I_2, \dots$ is a decreasing sequence having the limit zero and $$ \lim_{n \to \infty} \frac{I_{2n+1}}{I_{2n}}=1$$

(e) Wallis' product: $$\lim_{n \to \infty} \frac{2\cdot 2\cdot 4\cdot 4\cdot 6\cdot 6 \cdots (2n)\cdot (2n)}{1\cdot 3\cdot 3\cdot 5\cdot 5\cdot 7 \cdots (2n-1)\cdot (2n+1)}=\frac{\pi}{2}$$

  • Chapter VII: Problem #40

(a) Show that if $f:\{ x\in \mathbb{R}\;|\; x\geq 1\} \to \mathbb{R}$ is continuous, then $$ \sum\limits_{i=1}^n f(i) = \int_1^{n+1} f(x)\,dx + \sum\limits_{i=1}^n\left(f(i)-\int_i^{i+1} f(x)\,dx\right)$$

(b) Show that if $i>1$, then $\ln(i)-\int_i^{i+1}\ln(x)\,dx$ differs from $-1/(2i)$ by less than $1/(6i^2)$. [Hint: Work out the integral using the Taylor series for $\ln(1+x)$ at the point $0$.]

(c) Use part (a) with $f=\ln$, part (b), and Problem #22 from Chapter VI to prove that the following limit exists: $$\lim_{n \to \infty} \left(\ln(n!)-\left(n+\frac{1}{2}\right)\ln(n)+n\right)$$

(d) Use part (e) of Problem #39 to compute the above limit, thereby obtaining: $$ \lim_{n\to \infty} \frac{n!}{n^ne^{-n}\sqrt{2\pi n}}=1$$ (i.e. Stirling's Formula)
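
Both the Wallis product from Problem #39(e) and the final limit in (d) can be checked numerically (a quick sketch; `math.lgamma` keeps the factorials from overflowing):

```python
import math

def wallis(n):
    # partial Wallis product: 2*2*4*4...(2n)(2n) / (1*3*3*5...(2n-1)(2n+1))
    p = 1.0
    for k in range(1, n + 1):
        p *= (2*k) * (2*k) / ((2*k - 1) * (2*k + 1))
    return p

def stirling_ratio(n):
    # n! / (n^n e^{-n} sqrt(2 pi n)), computed via logs
    return math.exp(math.lgamma(n + 1) - (n*math.log(n) - n + 0.5*math.log(2*math.pi*n)))

print(wallis(10000), math.pi / 2)   # partial product vs pi/2
print(stirling_ratio(10000))        # close to 1
```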

---

If you're familiar with

$$\mathop {\lim }\limits_{n \to \infty } \frac{{\left( {2n} \right)!!}}{{\left( {2n - 1} \right)!!}}\frac{1}{{\sqrt n }} = \sqrt \pi $$

Then you can use

$$\eqalign{ & \mathop {\lim }\limits_{n \to \infty } \frac{{\left( {2n} \right)!{!^2}}}{{\left( {2n} \right)!}}\frac{1}{{\sqrt n }} = \sqrt \pi \cr & \mathop {\lim }\limits_{n \to \infty } \frac{{{2^{2n}}{{\left( {n!} \right)}^2}}}{{\left( {2n} \right)!}}\frac{1}{{\sqrt n }} = \sqrt \pi \cr} $$

Now you can check that

$$\alpha = \mathop {\lim }\limits_{n \to \infty } \frac{{n!{e^n}}}{{{n^n}\sqrt n }} = \mathop {\lim }\limits_{n \to \infty } \frac{{\left( {2n} \right)!{e^{2n}}}}{{{{\left( {2n} \right)}^{2n}}\sqrt {2n} }}$$

exists. Then square the first expression and divide by the second to get

$$\alpha = \mathop {\lim }\limits_{n \to \infty } \frac{{{{\left( {n!} \right)}^2}{e^{2n}}}}{{{n^{2n}}n}}\frac{{{{\left( {2n} \right)}^{2n}}\sqrt {2n} }}{{\left( {2n} \right)!{e^{2n}}}} = \mathop {\lim }\limits_{n \to \infty } \frac{{{{\left( {n!} \right)}^2}{2^{2n}}\sqrt 2 }}{{\left( {2n} \right)!\sqrt n }} = \sqrt {2\pi } $$

Thus you have that

$$\mathop {\lim }\limits_{n \to \infty } \frac{{n!{e^n}}}{{{n^n}\sqrt {2n} }} = \sqrt \pi $$

or

$$n! \sim {n^n}{e^{ - n}}\sqrt {2\pi n} $$
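
The two limits used in this argument can both be observed numerically (a sketch in logs via `math.lgamma` to avoid overflow):

```python
import math

def alpha(n):
    # n! e^n / (n^n sqrt(n))  ->  sqrt(2 pi)
    return math.exp(math.lgamma(n + 1) + n - n*math.log(n) - 0.5*math.log(n))

def wallis_form(n):
    # 2^{2n} (n!)^2 / ((2n)! sqrt(n))  ->  sqrt(pi)
    return math.exp(2*n*math.log(2) + 2*math.lgamma(n + 1)
                    - math.lgamma(2*n + 1) - 0.5*math.log(n))

for n in (10, 1000, 100000):
    print(n, alpha(n), wallis_form(n))

print(math.sqrt(2*math.pi), math.sqrt(math.pi))  # the claimed limits
```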

---

Here are a couple of "proofs" of Stirling's formula. They are quite elegant (in my opinion), but not rigorous. One could write down a real proof from these, but as they rely on some hidden machinery, the result would be quite heavy.

1) A probabilistic non-proof

We start from the expression $e^{-n} n^n/n!$, for which we want an asymptotic equivalent. Let us fix $n$, and let $Y$ be a random variable with a Poisson distribution of parameter $n$. By definition, for any integer $k$, we have $\mathbb{P} (Y=k) = e^{-n} n^k/k!$. If we take $k=n$, we get $\mathbb{P} (Y=n) = e^{-n} n^n/n!$. The sum of $n$ independent random variables with a Poisson distribution of parameter $1$ has a Poisson distribution of parameter $n$; so let us take a sequence $(X_k)$ of i.i.d. random variables with a Poisson distribution of parameter $1$. Note that $\mathbb{E} (X_0) = 1$. We have:

$$\mathbb{P} \left( \sum_{k=0}^{n-1} (X_k - \mathbb{E} (X_k)) = 0 \right) = \frac{e^{-n} n^n}{n!}.$$

In other words, $e^{-n} n^n/n!$ is the probability that a centered random walk with Poisson steps of parameter $1$ is at $0$ at time $n$. We have tools to estimate such quantities, namely local central limit theorems. They assert that:

$$\frac{e^{-n} n^n}{n!} = \mathbb{P} \left( \sum_{k=0}^{n-1} (X_k - \mathbb{E} (X_k)) = 0 \right) \sim \frac{1}{\sqrt{2 \pi n \text{Var} (X_0)}},$$

a formula which is closely linked with the Gaussian integral and diffusion processes. Since the variance of $X_0$ is $1$, we get:

$$n! \sim \sqrt{2 \pi n} n^n e^{-n}.$$
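
The local-limit statement is easy to test numerically, since $\mathbb{P}(Y=n)=e^{-n}n^n/n!$ has a closed form (a quick sketch via logs):

```python
import math

def poisson_at_mean(n):
    # P(Poisson(n) = n) = e^{-n} n^n / n!
    return math.exp(-n + n*math.log(n) - math.lgamma(n + 1))

for n in (10, 100, 10000):
    # ratio to the local-CLT prediction 1/sqrt(2 pi n Var(X_0)), with Var(X_0) = 1
    print(n, poisson_at_mean(n) * math.sqrt(2*math.pi*n))  # tends to 1
```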

The catch is of course that the local central limit theorems are in no way elementary results (except for the simple random walks, and that is if you already know Stirling's formula...). The methods I know to prove such results involve Tauberian theorems and residue analysis. In some way, this probabilistic stuff is a way to disguise more classical approaches (in my defense, if all you have is a hammer, everything looks like a nail).

I think one could get higher order terms for Stirling's formula by computing more precise asymptotics for the Green's function at $0$, which requires knowledge of higher moments of the Poisson distribution. Note that the moment generating function of a Poisson distribution of parameter $1$ is:

$$\mathbb{E} (e^{t X_0}) = e^{e^t-1},$$

and this "exponential of exponential" will appear again in a moment.

2) A generating functions non-proof

If you want to apply analytic methods to problems related to sequences, generating functions are a very useful tool. Alas, the series $\sum_{n \geq 0} n! z^n$ is not convergent for non-zero values of $z$. Instead, we shall work with:

$$e^z = \sum_{n \geq 0} \frac{z^n}{n!};$$

we are lucky, as this generating function is well-known. Let $\gamma$ be a simple loop around $0$ in the complex plane, oriented counter-clockwise. Let us fix some non-negative integer $n$. By Cauchy's formula,

$$\frac{1}{n!} = \frac{1}{n!} \frac{\text{d} e^z}{\text{d} z}_{|_0} = \frac{1}{2 \pi i} \oint_\gamma \frac{e^z}{z^{n+1}} \text{d} z.$$

We choose for $\gamma$ the circle of radius $n$ around $0$, with its natural parametrization $z (t) = n e^{it}$:

$$\frac{1}{n!} = \frac{1}{2 \pi i} \int_{- \pi}^{\pi} \frac{e^{n e^{it}}}{n^{n+1} e^{(n+1)it}} nie^{it} \text{d} t = \frac{e^n}{2 \pi n^n} \int_{- \pi}^{\pi} e^{n (e^{it}-it-1)} \text{d} t = \frac{e^n}{2 \pi \sqrt{n} n^n} \int_{- \pi \sqrt{n}}^{\pi \sqrt{n}} e^{n \left(e^{\frac{i\theta}{\sqrt{n}}}-\frac{i\theta}{\sqrt{n}}-1\right)} \text{d} \theta,$$

where $\theta =t \sqrt{n}$. Hitherto, we have an exact formula; note that we meet again the "exponential of exponential". Now comes the leap of faith. For $x$ close to $0$, the value of $e^x-x-1$ is roughly $x^2/2$. Moreover, the bounds of the integral get close to $- \infty$ and $ \infty$. Hence, for large $n$, we have:

$$\frac{1}{n!} \sim \frac{e^n}{2 \pi \sqrt{n} n^n} \int_{- \infty}^{+ \infty} e^{\frac{n}{2} \left(\frac{i\theta}{\sqrt{n}}\right)^2} \text{d} \theta = \frac{e^n}{2 \pi \sqrt{n} n^n} \int_{- \infty}^{+ \infty} e^{-\frac{\theta^2}{2}} \text{d} \theta = \frac{e^n}{\sqrt{2 \pi n} n^n}.$$

Of course, it is not at all trivial to justify the asymptotic equivalences we used. Indeed, if one applies this method to bad generating functions (e.g. $(1-z)^{-1}$), one can get false results. However, the method can be made rigorous for a class of admissible functions, and the exponential is one of them.

I learned this method from Don Zagier. It is also explained in Generatingfunctionology, Chapter $5$ (III), where the author credits Hayman. The original reference seems to be A generalisation of Stirling's formula (Hayman, 1956), but I can't read it now.

One of the advantages of this method is that it makes it easy to get the next terms in the asymptotic expansion of $n!$: you just have to expand the function $e^x-x-1$ further at $0$. Another advantage is that it is quite general, as it can be applied to many other sequences.
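
Before any approximation is made, the exact identity $\frac{1}{n!} = \frac{e^n}{2\pi n^n}\int_{-\pi}^{\pi} e^{n(e^{it}-it-1)}\,\mathrm{d}t$ can be verified numerically; since the integrand is $2\pi$-periodic and entire, the midpoint rule converges extremely fast (a small sketch):

```python
import cmath
import math

def contour_integral(n, steps=20_000):
    # midpoint rule for Int_{-pi}^{pi} exp(n (e^{it} - it - 1)) dt
    h = 2*math.pi / steps
    total = 0j
    for k in range(steps):
        t = -math.pi + (k + 0.5) * h
        total += cmath.exp(n * (cmath.exp(1j*t) - 1j*t - 1))
    return (total * h).real  # the imaginary part cancels by symmetry

n = 12
lhs = 1 / math.factorial(n)
rhs = math.exp(n - n*math.log(n)) / (2*math.pi) * contour_integral(n)
print(lhs, rhs)  # the two values agree to near machine precision
```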

---

Using the Leibniz rule $\frac{d}{dt}\Bigg[\int_{a(t)}^{b(t)}f(x,t)dx\Bigg]=\int_{a(t)}^{b(t)}\frac{\partial}{\partial t}f(x,t)dx+\bigg[f(b(t),t)\frac{d}{dt}b(t)-f(a(t),t)\frac{d}{dt}a(t)\bigg]$

$$ \int_0^\infty e^{-x}dx=1\\ \text{Let }x=tu\implies dx=t\,du,\ t>0\\ \int_0^\infty te^{-tu}du=1\implies \int_0^\infty e^{-tu}du=\frac{1}{t}\\ \frac{d}{dt}\int_0^\infty e^{-tu}du=\int_0^\infty \frac{\partial}{\partial t}e^{-tu}du=\frac{-1}{t^2}\\ \int_0^\infty ue^{-tu}du=\frac{1}{t^2}\\ \int_0^\infty u^2e^{-tu}du=\frac{2}{t^3}\\ \int_0^\infty u^3e^{-tu}du=\frac{6}{t^4}\\ \int_0^\infty u^4e^{-tu}du=\frac{24}{t^5}\\ \int_0^\infty u^ne^{-tu}du=\frac{n!}{t^{n+1}}\\ \implies \int_0^\infty \bigg[\frac{x}{t}\bigg]^ne^{-x}\frac{dx}{t}=\frac{n!}{t^{n+1}}\\ \implies \boxed{\int_0^\infty x^ne^{-x}dx=n!=\Gamma(n+1)} $$ Dividing by $\bigg[\frac{n}{e}\bigg]^n$, $$ \bigg[\frac{e}{n}\bigg]^n\Gamma(n+1)=\int_0^\infty \left[\frac{x}{n}\right]^ne^{-(x-n)}dx\\ \text{Let } s=x-n,\\ \bigg[\frac{e}{n}\bigg]^n\Gamma(n+1)=\int_{-n}^\infty \bigg[1+\frac{s}{n}\bigg]^ne^{-s}ds=\int_{-n}^\infty f(s)ds $$ where $f(s)=\bigg[1+\frac{s}{n}\bigg]^ne^{-s}$ $$ \ln(f(s))=n\ln\bigg[1+\frac{s}{n}\bigg]-s=n\bigg[\frac{s}{n}-\frac{1}{2}\bigg(\frac{s}{n}\bigg)^2+\frac{1}{3}\bigg(\frac{s}{n}\bigg)^3-\cdots\bigg]-s\\ =\frac{-s^2}{2n}+\frac{s^3}{3n^2}-\cdots $$ For large $n$, $$ \ln(f(s))\approx\frac{-s^2}{2n} $$ $$ \bigg(\frac{e}{n}\bigg)^n\Gamma(n+1)\approx\int_{-\infty}^\infty e^{-s^2/(2n)}ds=\sqrt{2\pi n}, \text{ since }\int_{-\infty}^\infty e^{-at^2}dt=\sqrt{\frac{\pi}{a}}\\ \Gamma(n+1)\approx\bigg(\frac{n}{e}\bigg)^n\sqrt{2\pi n}\\ \implies \boxed{n!=\Gamma(n+1)\approx \bigg(\frac{n}{e}\bigg)^n\sqrt{2\pi n}} $$

---

Stirling's approximation: Can we make it obvious?

Or at least make the separate steps obvious?

By induction on $n$, using integration-by-parts for the inductive step, it is easily shown that $$n! \,= \int_0^\infty\! x^n e^{-x} dx$$ for $n=0,1,2,$ etc. Writing ${n!=n(n{-}1)!}$ and applying the above formula to the second factor, we get $$n! \,=\, n \int_0^\infty\! x^{n-1\,} e^{-x} dx\,.$$ Changing the variable of integration to $$\color{blue}{t=\ln(x/n)}$$ (whence $x=ne^t$, ${dx=ne^t dt}$, and ${t{\to}{-}\infty}$ as ${x{\to}0^+}$), we obtain, after some rearrangement, $$n! \,=\, n^{n+1\,} e^{-n} \int_{-\infty}^\infty\! e^{n(1+t-e^t)} dt\,.$$ The function ${1+t-e^t}$ has a global maximum of $0$ at ${t=0}$; and as $n$ increases, the values of $t$ that significantly contribute to the integral are confined to a narrower range about this maximum, i.e. smaller values of $|t|$. For small $|t|$, we can put $$e^t = 1 + t + t^{2\!}\big/2 + \mathcal{O}(t^3)\,,$$ which yields $$n! \,=\, n^{n+1\,} e^{-n} \int_{-\infty}^\infty\! e^{-n\mathcal{O}(t^3)} e^{-nt^2/2} dt\,.$$ In the integrand, in order to make the second factor $\big(e^{-nt^2/2}\big)$ larger than some threshold of significance, $|t|$ must be smaller than something of order $\mathcal{O}(n^{-1/2})$, in which case the magnitude of the exponent $-n\mathcal{O}(t^3)$ in the first factor is also smaller than something of order $\mathcal{O}(n^{-1/2})$. For sufficiently large $n$, this is arbitrarily close to $0$, so that the first factor is arbitrarily close to $1$, and the aforesaid "significance" of the second factor determines the "significance" of the whole integrand. As the first factor is arbitrarily close to $1$, we can say $$n! \,\sim\, n^{n+1\,} e^{-n} \int_{-\infty}^\infty\! e^{-nt^2/2\,} dt\,. \label{estar}\tag{*}$$ The remaining integral has the form $$I \,= \int_{-\infty}^\infty\! e^{-at^2} dt\,,$$ whence $$I^2 = \int_{-\infty}^\infty\! e^{-ax^2\!} dx \int_{-\infty}^\infty\! e^{-ay^2\!} dy \,= \int_{-\infty}^\infty \int_{-\infty}^\infty\! e^{-a(x^2+y^2)} dx\,dy\,.$$ This is an integral w.r.t.
area over the $xy$ plane. It can be re-expressed in polar coordinates as an integral w.r.t. the single variable $u = r^2 = x^2+y^2$,  and evaluated as $\pi/a$,  so that $$I=\sqrt{\pi/a}\,.$$ The integral in equation$\,(\ref{estar})$ has $a=n/2$  and therefore comes to $\sqrt{2\pi/n}$.  This gives $$n! \,\sim\, \sqrt{2\pi n}\,\Big(\!\frac{n}{\,e\,}\!\Big)^n\,,$$ which is Stirling's approximation for large $n$.
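
The steps above can be illustrated numerically: the identity $n! = n^{n+1}e^{-n}\int e^{n(1+t-e^t)}dt$ is exact, while replacing the integral by $\sqrt{2\pi/n}$ as in $(*)$ incurs only an $O(1/n)$ relative error (a rough midpoint-rule sketch; the integration window and step count are ad hoc):

```python
import math

def laplace_integral(n, lo=-20.0, hi=5.0, steps=100_000):
    # midpoint rule for Int exp(n (1 + t - e^t)) dt; the integrand is
    # negligible outside [lo, hi] for the n used here
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps):
        t = lo + (i + 0.5) * h
        total += math.exp(n * (1 + t - math.exp(t)))
    return total * h

n = 50
exact = math.factorial(n)
via_integral = n**(n + 1) * math.exp(-n) * laplace_integral(n)
gaussian = n**(n + 1) * math.exp(-n) * math.sqrt(2*math.pi/n)  # right side of (*)
print(via_integral / exact, gaussian / exact)  # ~1 exactly, ~1 up to O(1/n)
```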