In *Pattern Recognition and Machine Learning* by Bishop, below is a partial reproduction of equation (2.211) on page 115. I'm not seeing how the LHS equals the RHS. What steps would produce this result? I'm finding the book quite difficult to follow.
$$ \exp\left\{ \sum_{k=1}^{M-1} x_k \ln \mu_k+ \left( 1 - \sum_{k=1}^{M-1} x_k \right) \ln \left( 1- \sum_{k=1}^{M-1}\mu_k \right)\right\} \\= \exp\left\{ \sum_{k=1}^{M-1} x_k \ln \left( \frac{\mu_k}{1 - \sum_{j=1}^{M-1} \mu_j} \right) + \ln \left( 1- \sum_{k=1}^{M-1}\mu_k \right)\right\} $$
I assume it has to do with the constraints that $\sum_{k=1}^{M} \mu_k=1$, $0 \le \mu_k \le 1$, and $\sum_{k=1}^{M} x_k=1$, since $x_k$ is an element of a one-hot vector; e.g. if $M = 6$ and $x_3 = 1$ then $\mathbf{x}=(0, 0, 1, 0, 0, 0)^\mathsf{T}$.
Comment:

No constraints are needed; this is pure algebra on the exponent:
$$
\begin{aligned}
&\sum_{k=1}^{M-1} x_k \ln \mu_k + \left(1-\sum_{k=1}^{M-1} x_k\right)\ln\left(1-\sum_{k=1}^{M-1}\mu_k\right) \\
&\overset{(1)}{=} \sum_{k=1}^{M-1} x_k \ln \mu_k - \sum_{k=1}^{M-1} x_k \ln\left(1-\sum_{j=1}^{M-1}\mu_j\right) + \ln\left(1-\sum_{k=1}^{M-1}\mu_k\right) \\
&\overset{(2)}{=} \sum_{k=1}^{M-1} x_k \left[\ln \mu_k - \ln\left(1-\sum_{j=1}^{M-1}\mu_j\right)\right] + \ln\left(1-\sum_{k=1}^{M-1}\mu_k\right) \\
&\overset{(3)}{=} \sum_{k=1}^{M-1} x_k \ln\left(\frac{\mu_k}{1-\sum_{j=1}^{M-1}\mu_j}\right) + \ln\left(1-\sum_{k=1}^{M-1}\mu_k\right)
\end{aligned}
$$
In (1) we multiply out the second product.
In (2) we collect the terms of both sums over $k$ into one sum.
In (3) we use $\ln(a)-\ln(b)=\ln\left(\frac{a}{b}\right)$.
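As a sanity check (not from Bishop, just a sketch), the identity can be verified numerically for a randomly drawn $\boldsymbol\mu$ on the simplex and every one-hot $\mathbf{x}$; the variable names `mu`, `x`, `lhs`, `rhs` below are my own:

```python
import math
import random

# Numerically check that
#   exp{ sum_k x_k ln(mu_k) + (1 - sum_k x_k) ln(1 - sum_k mu_k) }
# = exp{ sum_k x_k ln( mu_k / (1 - sum_j mu_j) ) + ln(1 - sum_k mu_k) }
# where all sums run over k = 1..M-1, x is one-hot over M states,
# and mu_1..mu_M lie on the probability simplex.

random.seed(0)
M = 6

# Draw mu on the simplex; only mu_1..mu_{M-1} appear in the formulas.
raw = [random.random() for _ in range(M)]
mu = [r / sum(raw) for r in raw]

for hot in range(M):  # try every possible one-hot x
    x = [1.0 if k == hot else 0.0 for k in range(M)]
    s_mu = sum(mu[:M - 1])   # sum_{k=1}^{M-1} mu_k
    s_x = sum(x[:M - 1])     # sum_{k=1}^{M-1} x_k
    lhs = math.exp(sum(x[k] * math.log(mu[k]) for k in range(M - 1))
                   + (1 - s_x) * math.log(1 - s_mu))
    rhs = math.exp(sum(x[k] * math.log(mu[k] / (1 - s_mu)) for k in range(M - 1))
                   + math.log(1 - s_mu))
    assert math.isclose(lhs, rhs), (hot, lhs, rhs)

print("identity holds for every one-hot x")
```

Note that both sides reduce to $\mu_k$ when $x_k = 1$ for some $k < M$, and to $1-\sum_{k=1}^{M-1}\mu_k = \mu_M$ when $x_M = 1$, which is exactly the multinomial probability of the observed state.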