Stieltjes transform and the maximum-likelihood configuration of eigenvalues


In short, the question is: can the Stieltjes transform of the large-matrix limit of some random ensemble tell us the maximum-likelihood configuration of eigenvalues (or any other information about the joint density)? Or does it only give us moments of the marginal density?

The long background and motivation is as follows. With some colleagues, I'm reading "A First Course in Random Matrix Theory" by Potters and Bouchaud, Chapter 5 -- Joint Distribution of Eigenvalues, and I'm a bit confused.

The chapter begins by deriving the probability density of eigenvalues for a generic rotationally invariant ensemble with an arbitrary "potential" function $V$. The joint distribution winds up being:

$$P\left(\{\lambda_i\}\right)\propto\prod_{k<l} \left\vert\lambda_k-\lambda_l\right\vert\exp\left[-\frac{N}{2}\sum_i V\left(\lambda_i\right)\right]$$

which is very nice and gives a clear picture of the dependence between eigenvalues, specifically that there is vanishing probability of two eigenvalues being very close to each other.
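This repulsion is easy to see numerically. As a rough sketch of my own (not from the book), compare the nearest-neighbour spacings of small GOE matrices, whose joint density carries the $|\lambda_1-\lambda_2|$ factor, with the spacings of two i.i.d. Gaussian points, which have no such factor: tiny spacings are far rarer for the GOE.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 20000

# Spacings of 2x2 GOE matrices (joint density carries a |l1 - l2| factor).
goe_spacings = np.empty(n_samples)
for i in range(n_samples):
    a = rng.standard_normal((2, 2))
    h = (a + a.T) / 2.0          # symmetric 2x2 matrix
    lam = np.linalg.eigvalsh(h)
    goe_spacings[i] = lam[1] - lam[0]

# Spacings of two i.i.d. Gaussian points (no repulsion factor).
x = rng.standard_normal((n_samples, 2))
iid_spacings = np.abs(x[:, 0] - x[:, 1])

# Fraction of spacings below 10% of the mean: much smaller for the GOE,
# because the spacing density vanishes linearly at zero (Wigner surmise).
eps_goe = np.mean(goe_spacings < 0.1 * goe_spacings.mean())
eps_iid = np.mean(iid_spacings < 0.1 * iid_spacings.mean())
print(eps_goe, eps_iid)
```

For the GOE the small-spacing fraction comes out roughly an order of magnitude below the i.i.d. one, which is the "level repulsion" the formula encodes.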

But the chapter continues by deriving the "maximum likelihood configuration" of eigenvalues via a saddle-point equation for the minimum energy. After much algebra they rearrange this equation as a self-consistency equation for the Stieltjes transform ($g_N\left(z\right)\equiv\frac{1}{N}\sum_i \frac{1}{z-\lambda_i}$), which looks like this:

$$V'\left(z\right)g_N\left(z\right) - A\left(z\right) = g^2_N\left(z\right) + \frac{g'_N\left(z\right)}{N}$$ where $A$ is defined as

$$A\left(z\right)\equiv \frac{1}{N}\sum_i \frac{V'\left(z\right)-V'\left(\lambda_i\right)}{z-\lambda_i}$$

Okay, so for finite $N$ this gives an equation for the Stieltjes transform of the maximum-likelihood configuration, i.e. a pole at each eigenvalue. They give an exercise where we work out the maximum-likelihood configuration for a $3\times 3$ Wigner matrix, which winds up being just $0$ and $\pm 1$.
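For what it's worth, the $3\times 3$ result is easy to verify directly from the stationarity condition that maximizing the joint density gives, $\sum_{l\neq k}\frac{1}{\lambda_k-\lambda_l}=\frac{N}{2}V'\left(\lambda_k\right)$, with the Wigner potential $V(\lambda)=\lambda^2/2$ (a sketch of my own, not the book's code):

```python
import numpy as np

def stationarity_residual(lams, N):
    """Residual of sum_{l != k} 1/(lam_k - lam_l) = (N/2) * lam_k,
    the maximum-likelihood condition for the Wigner potential V(x) = x^2/2."""
    lams = np.asarray(lams, dtype=float)
    res = np.empty(N)
    for k in range(N):
        interaction = sum(1.0 / (lams[k] - lams[l]) for l in range(N) if l != k)
        res[k] = interaction - (N / 2.0) * lams[k]
    return res

# The candidate configuration {-1, 0, 1} should make every residual vanish.
res = stationarity_residual([-1.0, 0.0, 1.0], N=3)
print(res)   # -> [0. 0. 0.]
```

E.g. for $\lambda_k = 1$: $\frac{1}{1-0}+\frac{1}{1-(-1)} = \frac{3}{2} = \frac{3}{2}\cdot 1$, as required.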

All of that is fine. My confusion comes in the large-$N$ limit. The book opens section 5.2.3 by saying "In the large $N$ limit, $g_N\left(z\right)$ is self-averaging so computing $g_N\left(z\right)$ for the most likely configuration is the same as computing the average $g\left(z\right)$." From there the book derives self-consistency equations for $g\left(z\right)$ in the large-$N$ limit and shows how one can derive the eigenvalue density from them.
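The self-averaging claim itself can be checked numerically. As a sketch of my own (assuming a GOE normalization in which the spectrum converges to the semicircle on $[-2,2]$), evaluate $g_N(z)$ at a point outside the support over many independent samples: the sample-to-sample spread shrinks as $N$ grows, and the mean approaches the semicircle result $g(z)=\frac{1}{2}\left(z-\sqrt{z^2-4}\right)$.

```python
import numpy as np

def g_N(H, z):
    """Empirical Stieltjes transform (1/N) sum_i 1/(z - lambda_i)."""
    lam = np.linalg.eigvalsh(H)
    return np.mean(1.0 / (z - lam))

def goe(N, rng):
    """GOE matrix normalized so the spectrum converges to the semicircle on [-2, 2]."""
    a = rng.standard_normal((N, N))
    return (a + a.T) / np.sqrt(2.0 * N)

rng = np.random.default_rng(1)
z = 3.0
g_semicircle = (z - np.sqrt(z**2 - 4.0)) / 2.0   # ~ 0.382

# Spread of g_N(z) over independent samples shrinks with N: self-averaging.
spreads = {}
for N in (50, 400):
    vals = np.array([g_N(goe(N, rng), z) for _ in range(20)])
    spreads[N] = vals.std()
    print(N, vals.mean(), vals.std())
```

Note, though, that this only probes the one-point information in $g$, which is exactly the issue raised below.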

But what it doesn't show is how to get any information about the joint distribution from $g\left(z\right)$. Later on there is a rough argument explaining why we expect the eigenvalues to be locally equidistant, and that makes sense; but my question is, does the Stieltjes transform tell us that?

In the finite-$N$ case I understand, because $g_N$ actually gives us the locations of the $N$ eigenvalues (including, e.g., their spacings). But the infinite-$N$, ensemble-averaged $g$ seems (to me) to give only the marginal density, and I don't see how I could derive, for example, the local-equidistance property from it.