Gaussian integral over all possible real matrices


I am trying to compute the following Gaussian integral over all possible real matrices $J$:

$$I=\int \exp\left\{-\frac{N}{2}\text{Tr}\left[\mathbf{J}\mathbf{A}\;\mathbf{J}^T+2\mathbf{BJ}-\gamma \mathbf{JJ} \right]\right\}\mathrm{d}\mathbf{J}$$

Where $\mathbf{A}$ and $\mathbf{B}$ are Hermitian matrices.

When $\gamma=0$ I can complete the square and integrate this Gaussian integral without any problem (assuming I know the eigenvalues and determinant of $\mathbf{A}$):

$$\text{Tr}\left[\mathbf{J}\mathbf{A}\;\mathbf{J}^T+2\mathbf{BJ}\right]=\text{Tr}\left[\left(\mathbf{J}+\mathbf{B}\mathbf{A}^{-1}\right)\mathbf{A}\left(\mathbf{J}+\mathbf{B}\mathbf{A}^{-1}\right)^T\right]-\text{Tr}\left[\mathbf{B}\mathbf{A}^{-1}\mathbf{B}\right]$$
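A quick numerical check of this completion of the square (a minimal NumPy sketch; $\mathbf{A}$ is taken real symmetric positive definite and $\mathbf{B}$ real symmetric, and the shift $\mathbf{J}+\mathbf{B}\mathbf{A}^{-1}$ is what reproduces the $+2\mathbf{BJ}$ cross term under the trace):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Random symmetric positive-definite A and symmetric B (real Hermitian case).
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)
B = rng.standard_normal((n, n))
B = (B + B.T) / 2
J = rng.standard_normal((n, n))

Ainv = np.linalg.inv(A)
lhs = np.trace(J @ A @ J.T + 2 * B @ J)
S = J + B @ Ainv                      # shifted integration variable
rhs = np.trace(S @ A @ S.T) - np.trace(B @ Ainv @ B)
assert np.isclose(lhs, rhs)
```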

However, for general $\gamma\in \mathbb{R}$ I cannot see how to evaluate this integral by completing the square in $\mathbf{J}\mathbf{A}\;\mathbf{J}^T+2\mathbf{BJ}-\gamma \mathbf{JJ}$.

$\mathbf{J}$ is real but not symmetric. When $\gamma=0$ this integral converges, so I do not see any reason why it would not converge for general $\gamma$ with an appropriate $\mathbf{A}$.

Any remark or advice is always appreciated. Thank you.

Edit: A different way to express the integral $I$ is the following:

$$I=\int \left(\prod_{ij}\mathrm{d}J_{ij}\right)\exp\left\{-\frac{N}{2} \sum_{i, j, k} J_{k i} A_{i j} J_{k j}-N\sum_{k, j} B_{k j} J_{j k}+\frac{N\gamma}{2}\sum_{ij}J_{ij}J_{ji}\right\}$$

Assuming I already know the eigenvalues of $\mathbf{A}$ and thus $\det(\mathbf{A})$, how can I compute the integral $I$?

There are 2 best solutions below

Answer 1 (score 0):

In principle you could write the $J$ matrix as a length-$N^2$ "super" vector; then you would have a "simple" quadratic form $J_{ij} \Gamma^{ijkl} J_{kl}$, where all the transposes etc. are encoded in the $\Gamma$ super matrix. By relabelling each index pair ${ij}=\alpha$ you could put the $\Gamma$ tensor in "super" matrix form and find the relevant determinant, etc. It looks daunting, but maybe there are some shortcuts; e.g. there is a super matrix $\mathcal T$ that transforms any "super" vector into its transpose.
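The super matrix $\mathcal T$ mentioned here is the transposition (commutation) matrix; a small NumPy sketch (the flattening $\alpha = n(i-1)+j$, written 0-indexed in code, is an assumption matching the second answer below):

```python
import numpy as np

n = 3
# Transposition ("commutation") matrix T: T @ vec(J) = vec(J^T),
# using the row-major flattening alpha = n*i + j (0-indexed).
T = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        T[n * i + j, n * j + i] = 1.0

J = np.arange(n * n, dtype=float).reshape(n, n)
assert np.array_equal(T @ J.flatten(), J.T.flatten())
# T is an involution: transposing twice gives the identity.
assert np.array_equal(T @ T, np.eye(n * n))
```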

Answer 2 (score 5):

Following the idea of writing the $J$ matrix as a length-$N^2$ "super" vector: my starting point is the standard result $$\int_{\mathbb{R}^{n^{2}}} \exp\left\{-\frac{1}{2} \mathbf{x}^{T} \mathbf{\Sigma} \mathbf{x}\right\}\mathrm{d}\mathbf{x}=\frac{(2 \pi)^{n^{2} / 2}}{\sqrt{\operatorname{det}(\Sigma)}}$$ where $\mathbf{\Sigma}\in \mathbb{R}^{N^2\times N^2}$ is symmetric positive definite.
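This normalization can be sanity-checked numerically on a small example (a brute-force grid-integration sketch; the $2\times 2$ positive-definite $\mathbf{\Sigma}$ below is an arbitrary choice for illustration):

```python
import numpy as np

# Check: int exp(-x^T S x / 2) dx = (2*pi)^{d/2} / sqrt(det S), here d = 2.
S = np.array([[2.0, 0.5],
              [0.5, 1.0]])          # arbitrary SPD matrix
t = np.linspace(-8, 8, 801)         # grid wide enough for the Gaussian tails
h = t[1] - t[0]
X, Y = np.meshgrid(t, t, indexing="ij")
Q = S[0, 0] * X**2 + 2 * S[0, 1] * X * Y + S[1, 1] * Y**2

num = np.exp(-Q / 2).sum() * h * h  # Riemann-sum approximation of the integral
exact = (2 * np.pi) / np.sqrt(np.linalg.det(S))
assert np.isclose(num, exact, rtol=1e-4)
```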

Integrating over all $\mathrm{d}J_{ij}$ is equivalent to integrating over all $\mathrm{d}x_i$ if we find an appropriate transformation. The main obstacle is this $\sum_{ij}J_{ij}J_{ji}$ sum. So I will consider here the following integral:

$$I=\int\left(\prod_{i j} \mathrm{d} J_{i j}\right) \exp \left\{-\frac{1}{2} \sum_{i, j, k} J_{k i} A_{i j} J_{k j}-\frac{b}{2} \sum_{i j} J_{i j} J_{j i}\right\}$$

First, we can write: $$b\sum_{ij}J_{ij}J_{ji}=\sum_{i=1}^{n} \sum_{j=1}^{n} \sum_{k=1}^{n} \sum_{l=1}^{n} J_{i j} J_{k l} \;\delta_{i l} \delta_{j k}b$$ Similarly: $$\sum_{i, j, k} J_{k i} A_{i j} J_{k j}=\sum_{i, j, k} J_{i j} A_{jk} J_{i k}=\sum_{i, j, k,l} J_{i j} A_{jk} J_{l k}\delta_{il}=\sum_{i, j, k,l} J_{i j} A_{jl} J_{k l}\delta_{ik}$$ Thus our integral is now: $$I=\int\left(\prod_{i j} \mathrm{d} J_{i j}\right) \exp \left\{-\frac{1}{2} \sum_{i, j, k,l} J_{i j} A_{jl} J_{k l}\delta_{ik}-\frac{1}{2}\sum_{i,j,k,l}^{n}J_{i j} J_{k l} \;\delta_{i l} \delta_{j k}b\right\}$$

$$\implies I=\int\left(\prod_{i j} \mathrm{d} J_{i j}\right) \exp \left\{-\frac{1}{2} \sum_{i, j, k,l} J_{i j} \left(A_{jl} \delta_{ik} +\delta_{i l} \delta_{j k}b\right)J_{k l}\right\}$$ Now, we can define $x$ such that : $$x:=\left(\begin{array}{c} J_{11} \\ J_{12} \\ \vdots \\ J_{1 n} \\ J_{21} \\ J_{22} \\ \vdots \\ J_{n n} \end{array}\right) \in \mathbb{R}^{n^{2}}$$

Therefore: $$J_{i j}= x_{n(i-1)+j} \quad \forall i, j \in \mathbb{N} \cap[1, n]$$ \begin{equation} \Sigma_{n(i-1)+j, n(k-1)+l}= A_{jl} \delta_{ik} +\delta_{i l} \delta_{j k}b\quad \forall i, j, k, l \in \mathbb{N} \cap[1, n] \end{equation}

For $N=2$ we have: $$\Sigma=\left( \begin{array}{cccc} A_{1,1}+b & A_{1,2} & 0 & 0 \\ A_{2,1} & A_{2,2} & b & 0 \\ 0 & b & A_{1,1} & A_{1,2} \\ 0 & 0 & A_{2,1} & A_{2,2}+b \\ \end{array} \right)$$ For general $N\in \mathbb{N}$ we recover $x^{T} \Sigma x=\sum_{i,j,k,l}^{n} J_{i j}\left(A_{jl} \delta_{ik} +\delta_{i l} \delta_{j k}b\right) J_{k l}$.
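The construction is easy to check numerically; the sketch below builds $\Sigma$ directly from the index formula and verifies both the explicit $N=2$ matrix and the identity $x^T\Sigma x = \operatorname{Tr}(JAJ^T)+b\operatorname{Tr}(JJ)$:

```python
import numpy as np

def sigma(A, b):
    """Sigma_{n(i-1)+j, n(k-1)+l} = A_{jl} d_{ik} + b d_{il} d_{jk}, 0-indexed."""
    n = A.shape[0]
    S = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            for k in range(n):
                for l in range(n):
                    S[n * i + j, n * k + l] = A[j, l] * (i == k) + b * (i == l) * (j == k)
    return S

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 2)); A = (A + A.T) / 2   # symmetric A
b = rng.standard_normal()
S = sigma(A, b)

# The explicit 4x4 matrix for N = 2.
expected = np.array([
    [A[0, 0] + b, A[0, 1],  0.0,      0.0],
    [A[1, 0],     A[1, 1],  b,        0.0],
    [0.0,         b,        A[0, 0],  A[0, 1]],
    [0.0,         0.0,      A[1, 0],  A[1, 1] + b],
])
assert np.allclose(S, expected)

# x^T Sigma x reproduces the quadratic form Tr(J A J^T) + b Tr(J J).
J = rng.standard_normal((2, 2))
x = J.flatten()                                      # x_{n(i-1)+j} = J_ij
assert np.isclose(x @ S @ x, np.trace(J @ A @ J.T) + b * np.trace(J @ J))
```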

Using our initial result we see that $$I=\frac{(2 \pi)^{n^{2} / 2}}{\sqrt{\operatorname{det}(\Sigma)}}$$

Thus our problem is reduced to finding the determinant of the following $N^2\times N^2$ matrix: $\Sigma_{n(i-1)+j, n(k-1)+l}= A_{jl} \delta_{ik} +b \delta_{i l} \delta_{j k}$. This is where my answer ends. It would be interesting to derive a formula to compute the determinant of $\Sigma$.

After tinkering I find that:

In the case $N=2$:

$\operatorname{det}(\Sigma)=|A+bI|(|A|-|bI|)$

In the case $N=3$:

$\operatorname{det}(\Sigma)=|A+bI|\left(|A|^2-b^2\operatorname{Tr}(A)|A|-|bI|^2+b^4\operatorname{Tr}(\Lambda^2A)\right)$

Where $\operatorname{Tr}\left(\Lambda^2A\right)$ is the trace of the second exterior power of $A$; in general $\operatorname{tr}\left(\Lambda^{k} A\right)$ appears in the characteristic-polynomial expansion $$ |A-bI|=\sum_{k=0}^{n} (-1)^{n-k}\, b^{n-k} \operatorname{tr}\left(\Lambda^{k} A\right) $$ It would be interesting to solve for general $N$. I might create a new post on this question.
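Both determinant formulas can be checked numerically. The sketch below builds $\Sigma$ as $I\otimes A + b\,\mathcal T$ (with $\mathcal T$ the transposition matrix, which reproduces the index formula above) and uses the standard identity $\operatorname{Tr}(\Lambda^2 A)=\tfrac{1}{2}\left(\operatorname{Tr}(A)^2-\operatorname{Tr}(A^2)\right)$:

```python
import numpy as np

def sigma(A, b):
    # Sigma = I (x) A + b*T, where T @ vec(J) = vec(J^T) (row-major vec).
    n = A.shape[0]
    T = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            T[n * i + j, n * j + i] = 1.0
    return np.kron(np.eye(n), A) + b * T

rng = np.random.default_rng(0)
b = 0.7

# N = 2: det(Sigma) = |A + bI| (|A| - |bI|), with |bI| = b^2.
A2 = rng.standard_normal((2, 2)); A2 = (A2 + A2.T) / 2
lhs2 = np.linalg.det(sigma(A2, b))
rhs2 = np.linalg.det(A2 + b * np.eye(2)) * (np.linalg.det(A2) - b**2)
assert np.isclose(lhs2, rhs2)

# N = 3: det(Sigma) = |A + bI| (|A|^2 - b^2 Tr(A)|A| - |bI|^2 + b^4 Tr(L^2 A)),
# with |bI|^2 = b^6 and Tr(L^2 A) = (Tr(A)^2 - Tr(A^2)) / 2.
A3 = rng.standard_normal((3, 3)); A3 = (A3 + A3.T) / 2
d, t = np.linalg.det(A3), np.trace(A3)
tr_l2 = (t**2 - np.trace(A3 @ A3)) / 2
lhs3 = np.linalg.det(sigma(A3, b))
rhs3 = np.linalg.det(A3 + b * np.eye(3)) * (d**2 - b**2 * t * d - b**6 + b**4 * tr_l2)
assert np.isclose(lhs3, rhs3)
```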