I am working with a complex Gaussian distributed sequence $x \sim \mathcal{CN}(0, \beta\boldsymbol{C})$, where $\beta$ is an unknown scalar and $\boldsymbol{C}$ is a known positive semidefinite matrix. The received signal is given by $y = x + n$, where $n\sim \mathcal{CN}(0,\sigma^2\boldsymbol{I})$ denotes AWGN.
My goal is to estimate the unknown scalar $\beta$ from samples of $y$. To this end, I collect $L$ samples $\boldsymbol{Y} = [y_1, \ldots, y_L]$ and form the sample covariance matrix $\boldsymbol{S} = \frac{1}{L}\boldsymbol{Y}\boldsymbol{Y}^H$. The maximum likelihood (ML) estimate of $\beta$ is then obtained by minimizing the negative log-likelihood:
$$\min_{\beta}~\log\det(\beta \boldsymbol{C}+\sigma^2 \boldsymbol{I})+\operatorname{tr}\big((\beta \boldsymbol{C}+\sigma^2 \boldsymbol{I})^{-1}\boldsymbol{S}\big).$$
From plots of the objective over a range of $\beta$, it appears to be quasi-convex in $\beta$. However, I need to prove this formally. Could anyone please provide guidance on how to prove the quasi-convexity of this objective function?
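For reference, here is a small numerical sketch of the kind of experiment behind my observation (all dimensions, parameter values, and the random $\boldsymbol{C}$ below are arbitrary choices for illustration, not part of the problem statement):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup -- dimensions and parameters are arbitrary assumptions
N, L = 4, 200          # signal dimension, number of samples
sigma2 = 0.5           # known noise variance
beta_true = 2.0        # ground-truth scalar to be estimated

# A random known positive semidefinite C
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
C = A @ A.conj().T / N

# Draw L samples of y = x + n, with y ~ CN(0, beta*C + sigma2*I)
cov = beta_true * C + sigma2 * np.eye(N)
Lchol = np.linalg.cholesky(cov)
Z = (rng.standard_normal((N, L)) + 1j * rng.standard_normal((N, L))) / np.sqrt(2)
Y = Lchol @ Z
S = Y @ Y.conj().T / L  # sample covariance

def objective(beta):
    """Negative log-likelihood (up to constants): log det(R) + tr(R^{-1} S)."""
    R = beta * C + sigma2 * np.eye(N)
    _, logdet = np.linalg.slogdet(R)
    return logdet + np.trace(np.linalg.solve(R, S)).real

# Evaluate on a grid; the curve decreases to a single minimum and then
# increases, which is what suggests quasi-convexity
betas = np.linspace(0.01, 10, 500)
vals = np.array([objective(b) for b in betas])
beta_hat = betas[int(np.argmin(vals))]
```

Plotting `vals` against `betas` always gives a unimodal curve in my experiments, regardless of the random seed, which is why I conjecture quasi-convexity rather than convexity.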
Thank you in advance for your assistance.