This is a practical and basic question. I have a multivariate Gaussian in $M$ dimensions with known center $\mu$ (let's assume $\mu = 0$) and some points $p$ at which I have the value of
$$ \ln(L)= -\frac{1}{2}\mathbf{x}^{\rm T}\boldsymbol\Sigma^{-1}\mathbf{x} $$
My question is: how do I find $\Sigma$ exactly, given the $\ln(L)$ and $\mathbf{x}$ values? That is, which quantities go into a linear matrix equation solver $Ax=b$?
In particular, I am confused how to make a system of linear equations built from this one matrix multiplication.
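One way to see the linear system: writing $A = \Sigma^{-1}$ (symmetric), each point $\mathbf{x}$ gives one linear equation $\ln L = -\tfrac12 \mathbf{x}^{\rm T} A \mathbf{x}$ in the $M(M+1)/2$ unique entries of $A$, so you need at least that many points. A minimal NumPy sketch (the matrix `Sigma_true` is an illustrative assumption, used only to generate test data):

```python
import numpy as np

rng = np.random.default_rng(0)
M = 3
n_unknowns = M * (M + 1) // 2  # unique entries of the symmetric A = Sigma^{-1}

# Simulate the setup: a known Sigma and log-density values at some points
# (in practice these lnL values are the given data).
Sigma_true = np.array([[2.0, 0.5, 0.0],
                       [0.5, 1.0, 0.3],
                       [0.0, 0.3, 1.5]])
A_true = np.linalg.inv(Sigma_true)
points = rng.standard_normal((n_unknowns, M))   # need >= M(M+1)/2 points
lnL = -0.5 * np.einsum('pi,ij,pj->p', points, A_true, points)

# Build the system B a = lnL: one row per point, one column per unique A_jk.
# For j < k the entry A_jk appears twice in x^T A x, hence the factor 2.
rows = []
for x in points:
    row = []
    for j in range(M):
        for k in range(j, M):
            coef = x[j] * x[k] if j == k else 2 * x[j] * x[k]
            row.append(-0.5 * coef)
    rows.append(row)
B = np.array(rows)

a = np.linalg.solve(B, lnL)   # use np.linalg.lstsq if you have extra points

# Reassemble the symmetric A and invert it to recover Sigma.
A = np.zeros((M, M))
idx = 0
for j in range(M):
    for k in range(j, M):
        A[j, k] = A[k, j] = a[idx]
        idx += 1
Sigma_recovered = np.linalg.inv(A)   # matches Sigma_true up to round-off
```

Note this is an exact interpolation of the quadratic form, not a statistical estimate: with exactly $M(M+1)/2$ generic points the system is square and the recovery is exact up to floating-point error.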
This Wikipedia article on estimation of covariance matrices is relevant.
If $\Sigma$ is the $M\times M$ covariance matrix of an $M$-dimensional Gaussian, then I think you'll get a non-unique answer if the sample size $n$ is less than $M$. The likelihood would be $$ \log L(\Sigma) \propto -\frac n2\log\det\Sigma - \frac12\sum_{i=1}^n x_i^T \Sigma^{-1} x_i. $$ In each term of this sum, $x_i$ is a vector in $\mathbb R^{M\times 1}$. The value of the constant of proportionality (dismissively alluded to by "$\propto$") is irrelevant beyond the fact that it's positive.
You omitted the logarithm of the determinant and all mention of the sample size.
To me the idea (explained in detail in the linked Wikipedia article) that it's useful to regard a scalar as the trace of a $1\times1$ matrix was somewhat startling. I learned that in a course taught by Morris L. Eaton.
What you end up with --- the value of $\Sigma$ that maximizes $L$ --- is the maximum-likelihood estimator $\widehat\Sigma$ of $\Sigma$. It is a (matrix-valued) random variable in its own right because it depends on the $x_i$; indeed $n\widehat\Sigma$ has a Wishart distribution with $n$ degrees of freedom. It is more usual to treat $\mu$ as unknown and substitute its maximum-likelihood estimator for it. Since that estimator is itself a random variable, this changes the distribution so that $n\widehat\Sigma$ has a Wishart distribution with $n-1$ degrees of freedom, and that's what you'll find in most of the literature on this, and in particular in the Wikipedia article.
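With known mean $\mu = 0$, the maximizer of the likelihood has the closed form $\widehat\Sigma = \frac1n \sum_{i=1}^n x_i x_i^T$. A short sketch of that estimator (illustrative values; `Sigma_true` is an assumption used to generate the sample):

```python
import numpy as np

rng = np.random.default_rng(1)
M, n = 3, 5000                      # need n >= M for a unique maximizer
Sigma_true = np.array([[2.0, 0.5, 0.0],
                       [0.5, 1.0, 0.3],
                       [0.0, 0.3, 1.5]])
X = rng.multivariate_normal(np.zeros(M), Sigma_true, size=n)  # rows are x_i

# MLE with known mean 0: Sigma_hat = (1/n) * sum_i x_i x_i^T
Sigma_hat = X.T @ X / n

# If mu were unknown, you would center on the sample mean instead
# (np.cov(X.T) uses the n-1 divisor that goes with that case).
```

For large $n$, `Sigma_hat` converges to `Sigma_true`; it is this matrix-valued quantity whose (scaled) sampling distribution is Wishart.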