Let $\mathbf{x}_1,\dots,\mathbf{x}_n$ be a sequence of random vectors, independent and identically distributed as $N_p(\mu_0,\Sigma)$, where $\mu_0$ is known.
(i) Show that the MLE for $\Sigma$ is
$$ \widehat{\Sigma}=\frac{1}{n}\sum\limits_{i=1}^n(\mathbf{x}_i-\mu_0)(\mathbf{x}_i-\mu_0)' $$
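For context, here is the numerical sanity check I used to convince myself of the part (i) formula (just a sketch in NumPy; the dimensions, seed, and variable names are arbitrary choices of mine): the estimator should be consistent, and its log-likelihood should be at least as large as that of the true $\Sigma$.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 3, 200_000
mu0 = np.array([1.0, -2.0, 0.5])
A = rng.normal(size=(p, p))
Sigma = A @ A.T + p * np.eye(p)  # an arbitrary positive-definite covariance

x = rng.multivariate_normal(mu0, Sigma, size=n)

# MLE from part (i): average of outer products of the known-mean deviations
dev = x - mu0
Sigma_hat = dev.T @ dev / n

def loglik(S):
    # Gaussian log-likelihood as a function of Sigma (up to additive
    # constants), with the mean mu0 treated as known
    sign, logdet = np.linalg.slogdet(S)
    return -0.5 * n * logdet - 0.5 * np.trace(np.linalg.solve(S, dev.T @ dev))

print(np.max(np.abs(Sigma_hat - Sigma)))   # small for large n (consistency)
print(loglik(Sigma_hat) >= loglik(Sigma))  # Sigma_hat maximizes the likelihood
```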
(ii) Now let $\mathbf{y}_1,\dots,\mathbf{y}_m$ be another sequence of random vectors, independent and identically distributed as $N_p(\mu_1,\Sigma)$, where $\mu_1$ is known. Calculate the MLE for $\Sigma$ based on the combined sequences of observations $\{\mathbf{x}_1,\dots,\mathbf{x}_n\}$ and $\{\mathbf{y}_1,\dots,\mathbf{y}_m\}$. What happens to the MLE for $\Sigma$ if both $\mu_i$, $i=0,1$, are assumed to be unknown?
I have no questions about part (i); I have already shown that the MLE for $\Sigma$ is indeed $\widehat{\Sigma}$. My concern is with part (ii). I don't quite see how $\widehat{\Sigma}$ changes in this scenario, or how to express the change notationally. Also, when $\mu_i$ for $i=0,1$ are assumed to be unknown, do we simply replace $\mu_0$ and $\mu_1$ in the new MLE for $\Sigma$ by $\hat{\boldsymbol{\mu}}_0$ and $\hat{\boldsymbol{\mu}}_1$?
Any form of help is much appreciated.