I'm looking for the second-order moments of the fourth-order cumulant. This is the same as finding the second-order moments of the covariance estimate, which can be calculated as follows [1]:
Consider the signal model given by
$\textbf{y}(t) = \textbf{A} \textbf{x}(t) + \textbf{n}(t)$
where the complex observation vector $\textbf{y} \in \mathbb{C}^m$, $\textbf{A} \textbf{x}$ models the signals of interest, and $\textbf{n}$ is additive measurement noise.
The sample covariance matrix $\hat{\textbf{R}}$ is given by
$\hat{\textbf{R}} = \frac{1}{N} \sum_{t=1}^{N} \textbf{y}(t) \textbf{y}^H(t)$
where $N$ is the number of available snapshots and $^H$ denotes the conjugate (Hermitian) transpose.
Let $\hat{\textbf{r}} = \text{vec}(\hat{\textbf{R}})$, where $\text{vec}$ stacks the columns of a matrix into a single vector.
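As a quick illustration (NumPy; the snapshots here are arbitrary synthetic data, and the dimensions $m$ and $N$ are placeholder values), the sample covariance and its vectorization can be computed as:

```python
import numpy as np

rng = np.random.default_rng(1)
m, N = 4, 100  # array size and snapshot count (illustrative values)

# synthetic complex snapshots y(t), t = 1..N, stored as the columns of Y
Y = rng.standard_normal((m, N)) + 1j * rng.standard_normal((m, N))

# sample covariance: Rhat = (1/N) * sum_t y(t) y(t)^H
Rhat = (Y @ Y.conj().T) / N

# rhat = vec(Rhat): stack the columns of Rhat into a length-m^2 vector
rhat = Rhat.flatten(order="F")
```

With column-major (`order="F"`) stacking, entry $i + mj$ of `rhat` is $\hat{R}_{ij}$, matching the subvector indexing used below.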
Then the second-order moments of the covariance estimate can be determined as follows.
The $i$th $(m \times 1)$ subvector of the $(m^2 \times 1)$ vector $\hat{\textbf{r}}$ is given by
$\hat{\textbf{r}}_i = \frac{1}{N} \sum_{t=1}^{N} \textbf{y}(t)\, y_i^*(t)$
where $y_i(t)$ denotes the $i$th element of $\textbf{y}(t)$.
Since the signals are assumed to be independent from snapshot to snapshot and circularly Gaussian distributed, the following relation is easily established:
$E\{\hat{\textbf{r}}_i \hat{\textbf{r}}_j^H\} = \frac{1}{N^2} \sum_{t=1}^{N} \sum_{s=1}^{N} E\{\textbf{y}(t)\, y_i^*(t)\, \textbf{y}^H(s)\, y_j(s)\} = \textbf{r}_i \textbf{r}_j^H + \frac{1}{N} R_{ji} \textbf{R}$
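To spell out the intermediate step: for $t \neq s$ the snapshots are independent, so each of the $N^2 - N$ cross terms contributes $\textbf{r}_i \textbf{r}_j^H$. For $t = s$, the circular Gaussian fourth-moment factoring $E\{a b^* c d^*\} = E\{a b^*\} E\{c d^*\} + E\{a d^*\} E\{c b^*\}$ gives, element-wise,

$E\{y_p(t)\, y_i^*(t)\, y_j(t)\, y_q^*(t)\} = R_{pi} R_{jq} + R_{pq} R_{ji} = (\textbf{r}_i \textbf{r}_j^H)_{pq} + R_{ji} (\textbf{R})_{pq},$

so the $N$ diagonal terms each contribute $\textbf{r}_i \textbf{r}_j^H + R_{ji} \textbf{R}$. Dividing the sum by $N^2$ yields $\textbf{r}_i \textbf{r}_j^H + \frac{1}{N} R_{ji} \textbf{R}$ exactly.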
which results in
$E\{(\hat{\textbf{r}}-\textbf{r}) (\hat{\textbf{r}}-\textbf{r})^H\} = \frac{1}{N}(\textbf{R}^T \otimes \textbf{R})$
where $\otimes$ denotes the Kronecker matrix product.
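This closed-form covariance can be checked numerically. The sketch below (NumPy) uses an arbitrary Hermitian positive-definite $\textbf{R}$, not tied to any particular array geometry, and compares the Monte Carlo estimate of $E\{(\hat{\textbf{r}}-\textbf{r})(\hat{\textbf{r}}-\textbf{r})^H\}$ against $\frac{1}{N}(\textbf{R}^T \otimes \textbf{R})$:

```python
import numpy as np

rng = np.random.default_rng(0)
m, N, trials = 3, 50, 20000  # illustrative sizes

# arbitrary Hermitian positive-definite covariance R
G = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
R = G @ G.conj().T + m * np.eye(m)
L = np.linalg.cholesky(R)

r = R.flatten(order="F")  # r = vec(R)
err = np.empty((trials, m * m), dtype=complex)
for k in range(trials):
    # N i.i.d. circularly symmetric Gaussian snapshots with covariance R
    W = (rng.standard_normal((m, N)) + 1j * rng.standard_normal((m, N))) / np.sqrt(2)
    Y = L @ W
    Rhat = (Y @ Y.conj().T) / N
    err[k] = Rhat.flatten(order="F") - r

# empirical E{(rhat - r)(rhat - r)^H} vs the theoretical (1/N) (R^T kron R)
C_emp = (err.T @ err.conj()) / trials
C_theory = np.kron(R.T, R) / N
rel_err = np.linalg.norm(C_emp - C_theory) / np.linalg.norm(C_theory)
```

The relative Frobenius error `rel_err` shrinks as the number of trials grows, consistent with the formula above.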
Using the result above, we can find the second-order moments of the covariance estimate. However, I'm looking for the second-order moments of the fourth-order cumulant.
[1] Ottersten, Björn, Peter Stoica, and Richard Roy. "Covariance matching estimation techniques for array signal processing applications." Digital Signal Processing 8.3 (1998): 185-210.