Assume $X \in \mathbb{R}^{m\times n}$ has i.i.d. entries drawn from the Gaussian distribution $N(0,1)$, and let $A\in \mathbb{R}^{m\times m}$ and $B \in \mathbb{R}^{n \times n}$ be two fixed symmetric matrices.
Suppose our observations are $M_n = AX_nB$, where $X_n$ denotes the $n$th realization of the random matrix $X$. How can we estimate the matrices $A$ and $B$ from the observations $M_n$?
Here is a partial answer based on one-sided correlation matrices.
Consider the one-sided correlation matrix $\mathbb{E}\{M_k M_k^T\}$. (I'm assuming your index $n$ has nothing to do with the size of the matrix $B$; I'm renaming it to $k$ to avoid confusion.) Inserting $M_k = A X_k B$ gives $\mathbb{E}\{M_k M_k^T\} = \mathbb{E}\{A X_k B B^T X_k^T A^T\} = A R A^T$, where $R = \mathbb{E}\{X_k B B^T X_k^T\}$. Let's also define $Q = B B^T$ for brevity. Now we can expand $R$ as $$R = \mathbb{E}\left\{\sum_{m_1} \sum_{m_2} [X_k]_{(:,m_1)} [Q]_{(m_1,m_2)} [X_k]_{(:,m_2)}^T\right\} = \sum_{m_1} \sum_{m_2}[Q]_{(m_1,m_2)}\,\mathbb{E}\left\{ [X_k]_{(:,m_1)} [X_k]_{(:,m_2)}^T\right\},$$
where $[\cdot]_{(:,m)}$ denotes the $m$-th column of its argument. Since the entries of $X_k$ are i.i.d. with zero mean and unit variance, the expectation $\mathbb{E}\{[X_k]_{(:,m_1)} [X_k]_{(:,m_2)}^T\}$ equals zero for $m_1 \neq m_2$ and the $m\times m$ identity for $m_1 = m_2$. Hence $R = \sum_{m_1} [Q]_{(m_1,m_1)} \cdot I = {\rm tr}(Q) \cdot I = \left\|B\right\|_{\rm F}^2 \cdot I$. Consequently, we have $$\mathbb{E}\{M_k M_k^T\} = A \cdot A^T \cdot\|B\|_{\rm F}^2.$$ With the same reasoning applied to $\mathbb{E}\{M_k^T M_k\}$, we can show that $$\mathbb{E}\{M_k^T M_k\} = B^T \cdot B \cdot\|A\|_{\rm F}^2.$$
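As a sanity check, both identities can be verified by Monte Carlo simulation. A minimal NumPy sketch (the sizes, seed, and the particular $A$, $B$ below are arbitrary illustrative choices, not part of the problem statement):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, N = 3, 4, 20000  # matrix sizes and number of realizations (arbitrary)

# Hypothetical fixed symmetric A and B.
A = rng.standard_normal((m, m)); A = (A + A.T) / 2
B = rng.standard_normal((n, n)); B = (B + B.T) / 2
Q = B @ B.T

# Sample averages of X Q X^T and M M^T over N realizations of X.
R_hat = np.zeros((m, m))
S_hat = np.zeros((m, m))
for _ in range(N):
    X = rng.standard_normal((m, n))
    M = A @ X @ B
    R_hat += X @ Q @ X.T
    S_hat += M @ M.T
R_hat /= N
S_hat /= N

# Theory: R = tr(Q) I = ||B||_F^2 I, and E{M M^T} = A A^T ||B||_F^2.
fro2 = np.linalg.norm(B, 'fro') ** 2
err_R = np.abs(R_hat - fro2 * np.eye(m)).max() / fro2
err_S = np.abs(S_hat - A @ A.T * fro2).max() / np.abs(A @ A.T * fro2).max()
```

With $N$ on the order of $10^4$, both relative errors should shrink toward zero at the usual $O(1/\sqrt{N})$ Monte Carlo rate.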
Therefore, the one-sided correlation matrices provide the Gramian matrices of $A$ and $B$ (up to the scale factors $\|B\|_{\rm F}^2$ and $\|A\|_{\rm F}^2$), from which $A$ and $B$ can be recovered via a matrix square root. Of course, this recovery is not unique, since the square-root factorization is only unique up to a unitary factor.
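The square-root recovery and its unitary ambiguity (orthogonal, in this real-valued setting) can be illustrated as follows. Here $G$ plays the role of the Gramian $A A^T$, assumed already extracted from the one-sided correlation; the particular $A$ is a hypothetical example:

```python
import numpy as np

rng = np.random.default_rng(1)
m = 4
A = rng.standard_normal((m, m)); A = (A + A.T) / 2  # hypothetical symmetric A
G = A @ A.T  # Gramian, standing in for the (rescaled) one-sided correlation

# Symmetric PSD square root via eigendecomposition: G = U diag(w) U^T.
w, U = np.linalg.eigh(G)
A_hat = U @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ U.T

# A_hat reproduces the Gramian exactly: A_hat A_hat^T = G.
# But A_hat equals the true A only up to an orthogonal factor
# Qorth = A_hat^{-1} A, which satisfies Qorth Qorth^T = I.
Qorth = np.linalg.solve(A_hat, A)
```

Any square root $A_{\rm hat}$ of $G$ differs from the true $A$ by exactly such an orthogonal factor, which is the non-uniqueness mentioned above.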
I am not sure whether this ambiguity is inherent in your estimation problem or an artifact of estimating from one-sided correlations alone. Maybe someone else can comment on this.