I am studying classification with the book "Pattern Recognition and Machine Learning" written by C. Bishop.
I tried to derive one of its results, but I did not obtain the same expression as the book.
Given that $$ a_k = \ln \big( p(x|C_k)\,p(C_k) \big)$$ and $$ p(x|C_k) = \frac{1}{(2 \pi)^{D/2}} \frac{1}{|\Sigma|^{1/2}} \exp \Bigg (-\frac{1}{2}(x-\mu_k)^T \Sigma^{-1} (x-\mu_k) \Bigg )$$
I would like to show that $$a_k = w_k^T x+ \omega_{k0} $$
with $$ w_k = \Sigma^{-1} \mu_k$$ and $$ \omega_{k0} = -\frac{1}{2} \mu_k^T \Sigma^{-1} \mu_k + \ln p(C_k).$$ Note that $x$ and $\mu_k$ are $D$-dimensional vectors and $\Sigma$ is a covariance matrix shared by all classes.
These equations can be found on pages 198 and 199, (Eq. 4.63) and (Eq. 4.68).
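Here is the intermediate step of my working: expanding the quadratic form, and using that $\Sigma^{-1}$ is symmetric so $x^T \Sigma^{-1} \mu_k = \mu_k^T \Sigma^{-1} x$,

$$ -\frac{1}{2}(x-\mu_k)^T \Sigma^{-1} (x-\mu_k) = -\frac{1}{2} x^T \Sigma^{-1} x + \mu_k^T \Sigma^{-1} x - \frac{1}{2} \mu_k^T \Sigma^{-1} \mu_k. $$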
Here is what I found: $$a_k = \mu_k^T \Sigma^{-1} x -\frac{1}{2} \mu_k^T \Sigma^{-1} \mu_k + \ln p(C_k) -\frac{1}{2} x^T \Sigma^{-1} x -\frac{D}{2} \ln(2 \pi) -\frac{1}{2} \ln|\Sigma|$$
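To sanity-check my algebra, I also compared the two expressions numerically (a quick NumPy sketch; the dimensions, means, covariance, and priors below are arbitrary toy values I made up for the check):

```python
import numpy as np

rng = np.random.default_rng(0)
D, K = 3, 4  # toy dimension and number of classes (arbitrary)

# Shared covariance (built to be symmetric positive definite), class means, priors
A = rng.standard_normal((D, D))
Sigma = A @ A.T + D * np.eye(D)
Sigma_inv = np.linalg.inv(Sigma)
mus = rng.standard_normal((K, D))   # mu_k
priors = np.full(K, 1.0 / K)        # p(C_k)
x = rng.standard_normal(D)

for k in range(K):
    # a_k = ln p(x|C_k) + ln p(C_k), from the full Gaussian density
    d = x - mus[k]
    a_exact = (-0.5 * D * np.log(2 * np.pi)
               - 0.5 * np.log(np.linalg.det(Sigma))
               - 0.5 * d @ Sigma_inv @ d
               + np.log(priors[k]))
    # My expansion: the linear part plus the three extra terms
    a_mine = (mus[k] @ Sigma_inv @ x
              - 0.5 * mus[k] @ Sigma_inv @ mus[k]
              + np.log(priors[k])
              - 0.5 * x @ Sigma_inv @ x
              - 0.5 * D * np.log(2 * np.pi)
              - 0.5 * np.log(np.linalg.det(Sigma)))
    assert np.isclose(a_exact, a_mine)
```

So the expansion itself seems correct; the discrepancy with the book is only in those three extra terms.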
You can see that I have three additional terms. Do you know how I can eliminate them?