Question
The Expectation-Maximization (EM) algorithm is an alternating algorithm: it alternates between the E-step (expectation step) and the M-step (maximization step). My question is how one carries out the maximization step in the notation below. I have found the EM algorithm defined in two alternative forms in Bishop's Pattern Recognition and Machine Learning (page 438), and a derivation on YouTube using similar notation; both cover the full derivation of the E-step and its maximization with respect to $\pi_{j}$.
In my notation I had finished the E step as
\begin{align} = \sum_{i=1}^{n}W_{i1} \left(\log \left(1-\sum_{j=2}^{K}\pi_j\right) -\frac{1}{2} \log|\Sigma_1| -\frac{d}{2} \log(2\pi) -\frac{1}{2}(x_i-\mu_1)^{T} \Sigma_{1}^{-1}(x_i-\mu_1) \right)+ \sum_{i=1}^{n}\sum_{j=2}^{K} W_{ij} \left( \log\pi_j -\frac{1}{2} \log |\Sigma_j| -\frac{d}{2} \log(2\pi) -\frac{1}{2}(x_i-\mu_j)^{T} \Sigma_{j}^{-1}(x_i-\mu_j)\right) \end{align}
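For concreteness, here is a minimal numerical sketch of evaluating this objective (the expected complete-data log-likelihood of a Gaussian mixture, with $\pi_1 = 1-\sum_{j=2}^{K}\pi_j$ so that $\log\pi_j$ can be used for every component). The function name and array shapes are my own choices, not from Bishop:

```python
import numpy as np
from scipy.stats import multivariate_normal

def expected_complete_log_likelihood(X, W, pi, mu, Sigma):
    """Evaluate the E-step objective for a Gaussian mixture.

    X:     (n, d) data points
    W:     (n, K) responsibilities W_ij from the E-step
    pi:    (K,)   mixing weights (pi[0] = 1 - sum of the rest)
    mu:    (K, d) component means
    Sigma: (K, d, d) component covariances
    """
    n, K = W.shape
    Q = 0.0
    for j in range(K):
        # log N(x_i | mu_j, Sigma_j) for all i at once
        log_pdf = multivariate_normal.logpdf(X, mean=mu[j], cov=Sigma[j])
        Q += np.sum(W[:, j] * (np.log(pi[j]) + log_pdf))
    return Q
```

This just evaluates the sum above term by term; the M-step then consists of choosing $\pi$, $\mu$, $\Sigma$ to maximize this quantity while holding $W$ fixed.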
The next step is the M-step, which, put simply, maximizes the function above with respect to $\mu$ and $\Sigma$.
Summary
How do I maximize the above function with respect to $\mu$ and $\Sigma$? I am making a few mistakes in my attempts and would appreciate seeing how this is done.