Black magic behind Padé approximation


In condensed matter physics one usually calculates the Matsubara Green's function at a set of discrete points $G (i \omega_n)$, where $$ \omega_n = (2n+1) \pi T $$ are the (fermionic) Matsubara frequencies.

The physically significant Green's function, however, is the retarded one, related as follows $$ G_R (\omega) = \lim_{\varepsilon \to 0^+} G (\omega + i \varepsilon), $$ assuming that a unique analytic continuation $G (z)$ exists (it seems it does) with the property that $G (z) \equiv G_R (z)$ in the upper half-plane and $G (z) \equiv G_A (z)$ in the lower half-plane.

The task usually is: given the set of values $G (i \omega_n)$ (computed in some previous simulation), determine $G (z)$ (and from it all the other Green's functions, if that is one's desire).

The underlying theorem about the existence and uniqueness of such a continuation is due to Baym and Mermin, in their 1961 paper Determination of Thermodynamic Green's Functions in the Journal of Mathematical Physics. The theorem, however useful, does not answer the question of how to obtain the continuation in practice.

What physicists do is Padé approximation. The recipe is: define the function $P (z)$ as follows $$ P (z) \equiv \frac{p_0 + p_1 z + \cdots + p_n z^n}{q_0 + q_1 z + \cdots + q_m z^m} $$

Now set up an appropriate number of equations of the form $$ G (i \omega_0) = P (i \omega_0), \quad G (i \omega_1) = P (i \omega_1), \quad \cdots $$

Solve for the unknown coefficients $\left\{ p, q \right\}$.

Plugging a real frequency with a small imaginary part into $P$ should then yield the retarded and advanced Green's functions $$ G_R (\omega) = P (\omega + i 0^+), \quad G_A (\omega) = P (\omega - i 0^+) $$
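To make the recipe concrete, here is a minimal Python sketch (deliberately not my Mathematica code below). It uses the equivalent continued-fraction form of the Padé interpolant, in the spirit of Vidberg and Serene, rather than the linear system, since the continued fraction is much better conditioned. The two-pole toy Green's function is my own invented example:

```python
import cmath

def thiele_coeffs(zs, us):
    # Recursively build a_p = g_p(z_p) for the Thiele continued fraction,
    # where g_0(z_i) = u_i and
    # g_p(z_i) = (g_{p-1}(z_{p-1}) - g_{p-1}(z_i)) / ((z_i - z_{p-1}) g_{p-1}(z_i)).
    n = len(zs)
    g = [list(us)]
    for p in range(1, n):
        prev, row = g[p - 1], [None] * n
        for i in range(p, n):
            row[i] = (prev[p - 1] - prev[i]) / ((zs[i] - zs[p - 1]) * prev[i])
        g.append(row)
    return [g[p][p] for p in range(n)]

def pade_eval(a, zs, z):
    # P(z) = a0 / (1 + a1 (z - z0) / (1 + a2 (z - z1) / (1 + ...)))
    acc = 0j
    for p in range(len(a) - 1, 0, -1):
        acc = a[p] * (z - zs[p - 1]) / (1 + acc)
    return a[0] / (1 + acc)

# Toy Green's function with two simple poles at +1 and -1: G(z) = z / (z^2 - 1).
G = lambda z: z / (z * z - 1)

T = 1.0
zs = [1j * (2 * n + 1) * cmath.pi * T for n in range(4)]  # first four i*omega_n
a = thiele_coeffs(zs, [G(z) for z in zs])

# Evaluating just above the real axis gives the retarded function:
w = 0.3 + 1e-6j
print(abs(pade_eval(a, zs, w) - G(w)))  # tiny: the rational G is recovered exactly
```

Since the toy $G$ is itself a rational function of the right degrees, four Matsubara points recover it exactly, including on the real axis where no data were supplied.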

And here is where the magic comes in: it works. This might sound silly, but recently I've been playing with this in the following way. I made up a very ugly function $A(x)$ (e.g. a sum of 20 Gaussians with random parameters). Then I pretended that this was the spectral function of some Green's function, so I calculated the Matsubara Green's function $$ G (i \omega_n) = \int \limits_{-\infty}^\infty \frac{\mathrm{d} \omega \, A (\omega)}{i \omega_n - \omega} $$
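This data-generation step can be sanity-checked against a case where the integral is known in closed form. A minimal Python sketch with my own toy choice (a single Lorentzian $A(\omega) = (\eta/\pi)/((\omega-\omega_0)^2+\eta^2)$, for which exactly $G(z) = 1/(z - \omega_0 + i\eta)$ in the upper half-plane); the cutoff and grid spacing are arbitrary assumptions:

```python
import math

def matsubara_green(A, wn, L=100.0, npts=100_001):
    # Trapezoidal approximation of G(i*wn) = Int dw A(w) / (i*wn - w)
    # on [-L, L]; assumes A(w) has decayed to ~0 by |w| = L.
    h = 2.0 * L / (npts - 1)
    total = 0j
    for j in range(npts):
        w = -L + j * h
        weight = 0.5 if j in (0, npts - 1) else 1.0
        total += weight * A(w) / (1j * wn - w)
    return total * h

# Lorentzian spectral function centered at w0 with half-width eta;
# the exact result is G(z) = 1/(z - w0 + I*eta) for Im z > 0.
w0, eta = 0.5, 0.3
A = lambda w: (eta / math.pi) / ((w - w0) ** 2 + eta ** 2)

wn = (2 * 0 + 1) * math.pi * 1.0        # first Matsubara frequency at T = 1
exact = 1.0 / (1j * wn - w0 + 1j * eta)
print(abs(matsubara_green(A, wn) - exact))  # small discretisation error
```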

Then I calculated the Padé approximant $P (z)$, obtained $G_R (\omega)$ from it, and finally got back my original function by means of the Sokhotski–Plemelj theorem: $$ A (\omega) = - \frac{1}{\pi} \lim_{\varepsilon \to 0^+} \operatorname{Im} G (\omega + i \varepsilon) = - \frac{1}{\pi} \operatorname{Im} G_R (\omega) $$
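For a case where $G_R$ is known in closed form, this recovery step can be checked directly. A one-pole sketch (my own hypothetical example, not taken from the simulation above):

```python
import math

# For A(w) = (eta/pi)/((w - w0)^2 + eta^2) the retarded Green's function
# is G_R(w) = 1/(w - w0 + I*eta), and -(1/pi) Im G_R(w) reproduces A(w)
# pointwise, as the Sokhotski-Plemelj theorem promises.
w0, eta = 0.5, 0.3
A = lambda w: (eta / math.pi) / ((w - w0) ** 2 + eta ** 2)
GR = lambda w: 1.0 / (w - w0 + 1j * eta)

for w in (-2.0, 0.0, 0.5, 1.7):
    recovered = -GR(w).imag / math.pi
    print(w, A(w), recovered)  # the two columns agree
```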

The "data" $G (i \omega_n)$ had to be calculated VERY precisely, to at least 20 or 30 significant digits, and at least 40 equations were needed in the Padé approximation to reproduce the original function. What baffles and astounds me most is that this worked at every temperature, whether very small like $10^{-5}$ or big like $1$, provided the data $G (i \omega_n)$ were precise enough and the number of equations sufficient. This is quite strange, because at very low temperatures what you really approximate is a very small portion of the Matsubara Green's function: imagine a function like $1/(1+x^2)$ and take its first 50 Matsubara points at temperature $10^{-5}$; what you see is almost a constant. Yet the recipe works. Now imagine some large temperature, like $1$, and do the same. It still works.

If someone wants to play with this, I share my Mathematica code with you. Copy it into a notebook and save the notebook somewhere.

This generates some funny function $A(x)$ and plots it:

n = 20; (* number of Gaussians *)
SetDirectory[NotebookDirectory[]];
sig = Table[RandomReal[] + 0.1, {i, 1, n}]; (* widths *)
x0 = 4 Table[RandomReal[] - 0.5, {i, 1, n}]; (* centers *)
h = 2 Table[RandomReal[] - 0.5, {i, 1, n}]; (* heights *)
A[x_] := Sum[h[[i]] Exp[-((x - x0[[i]])^2/sig[[i]]^2)], {i, 1, n}]
Plot[A[x], {x, -5, 5}, PlotRange -> {-5, 5}]

The following code generates data for Matsubara Green's function at temperature $T$

T = 1/1000; w[l_] := (2 l + 1) Pi T;
wmax = 1; k = 100; acc = 10;
Monitor[
 MatsG = Table[{w[l],
    NIntegrate[A[x]/(I w[l] - x), {x, -Infinity, Infinity},
     WorkingPrecision -> acc, PrecisionGoal -> acc,
     AccuracyGoal -> acc, MaxRecursion -> 500]}, {l, 1, 2 k}];, l]

This creates the continuation GComplex. (I'm using $m = n+1$ because the Green's function must approach zero as $|z| \to \infty$, and I've set $q_m = 1$ so that the system stays linear and does not become an eigenvalue problem.) Just keep the number acc high enough that Mathematica doesn't complain about a badly conditioned matrix. $2k$ is the number of equations; you can tweak that to see what it does:

k = 30; acc = 300; (* 2k equations; keep acc high to avoid ill-conditioning *)
ClearAll[p, q]
(* numerator of degree k - 1, monic denominator of degree k *)
f[w_] := Subscript[p, 0] + Sum[Subscript[p, j] w^j, {j, 1, k - 1}];
g[w_] := Subscript[q, 0] + Sum[Subscript[q, j] w^j, {j, 1, k - 1}] + w^k;
vars = Flatten@Table[{Subscript[p, i], Subscript[q, i]}, {i, 0, k - 1}];
(* f(i w_n) == G(i w_n) g(i w_n): linear in the unknown coefficients *)
eqns = Table[
   SetAccuracy[
    f[I MatsG[[i + 1, 1]]] ==
     MatsG[[i + 1, 2]] g[I MatsG[[i + 1, 1]]], acc], {i, 0, 2 k - 1}];
sol = Solve[eqns, vars, WorkingPrecision -> acc];
GComplex[w_] := f[w]/g[w] /. First@sol;

And finally comparing $A(x)$ with the newly generated function:

Plot[{A[x], -(1/Pi) Im[GComplex[x]]}, {x, -6, 6}, 
 PlotRange -> {-5, 5}]

I'm very sorry if this is considered off-topic here, but I'm really surprised that this works for every trial function $A(x)$ I tried it with, and I want to know what theory is behind this black-magic box. Does anyone know? Thank you.