In *The Elements of Statistical Learning*, on page 108, there is a log-ratio of the conditional probabilities of belonging to a class given the input data, $P(G=k|X=x)$, where $f_k(x)=\frac{1}{(2\pi)^{p/2}|\Sigma_k|^{1/2}}e^{-\frac{1}{2}(x-\mu_k)^T\Sigma_k^{-1}(x-\mu_k)}$ is a multivariate normal density:
$\log\frac{P(G=k|X=x)}{P(G=l|X=x)} = \log \frac{f_k(x)}{f_l(x)} + \log \frac{\pi_k}{\pi_l}$, where $\pi_k$ is the prior probability of class $k$
Now there is an assumption that $\Sigma_k = \Sigma$ for all $k$, and it is derived that
$\log\frac{P(G=k|X=x)}{P(G=l|X=x)} = \log \frac{\pi_k}{\pi_l} - \frac{1}{2}(\mu_k+\mu_l)^T\Sigma^{-1}(\mu_k-\mu_l) + x^T\Sigma^{-1}(\mu_k-\mu_l)$
but when I do it myself I get (the normalizing constants cancel because both densities share $\Sigma$): $\log\frac{P(G=k|X=x)}{P(G=l|X=x)} = \log \frac{\pi_k}{\pi_l} - \frac{1}{2}(x-\mu_k)^T\Sigma^{-1}(x-\mu_k) + \frac{1}{2}(x-\mu_l)^T\Sigma^{-1}(x-\mu_l)$
So how are these two expressions equal?
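For anyone following along, expanding the quadratic forms shows why the two expressions coincide (a sketch of the algebra, using the shared covariance $\Sigma$ and the symmetry of $\Sigma^{-1}$):

```latex
\begin{align*}
\log \frac{f_k(x)}{f_l(x)}
  &= -\tfrac{1}{2}(x-\mu_k)^T\Sigma^{-1}(x-\mu_k)
     + \tfrac{1}{2}(x-\mu_l)^T\Sigma^{-1}(x-\mu_l) \\
  &= \mu_k^T\Sigma^{-1}x - \mu_l^T\Sigma^{-1}x
     - \tfrac{1}{2}\mu_k^T\Sigma^{-1}\mu_k
     + \tfrac{1}{2}\mu_l^T\Sigma^{-1}\mu_l
     \qquad \text{(the $x^T\Sigma^{-1}x$ terms cancel)} \\
  &= x^T\Sigma^{-1}(\mu_k-\mu_l)
     - \tfrac{1}{2}(\mu_k+\mu_l)^T\Sigma^{-1}(\mu_k-\mu_l),
\end{align*}
```

where the last line uses $\mu_k^T\Sigma^{-1}\mu_k - \mu_l^T\Sigma^{-1}\mu_l = (\mu_k+\mu_l)^T\Sigma^{-1}(\mu_k-\mu_l)$, and the constants $\frac{1}{(2\pi)^{p/2}|\Sigma|^{1/2}}$ cancel in the ratio because both classes share $\Sigma$.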
Also, what is a linear discriminant function, and how is it computed?
Ok, I think I found the answer here: https://yintingchou.com/posts/lda-and-qda/. I will not delete the post, so if anybody needs it they can find the solution. The key point is to maximize $P(G=k|X=x)$ with respect to $k$; however, I am still not sure about the role of the log-ratio here.
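To make the "linear discriminant function" part concrete: under the shared-covariance assumption, each class $k$ gets the score $\delta_k(x) = x^T\Sigma^{-1}\mu_k - \frac{1}{2}\mu_k^T\Sigma^{-1}\mu_k + \log\pi_k$ (this is ESL's equation 4.10), which is linear in $x$, and we classify $x$ to the class with the largest score. A minimal numerical sketch follows; the means, covariance, and priors are made-up illustration values, not from the book:

```python
import numpy as np

def lda_discriminants(x, mus, sigma, priors):
    """Linear discriminant scores delta_k(x) for each class k:
    delta_k(x) = x^T Sigma^{-1} mu_k - 0.5 mu_k^T Sigma^{-1} mu_k + log pi_k
    """
    sigma_inv = np.linalg.inv(sigma)
    return np.array([
        x @ sigma_inv @ mu - 0.5 * mu @ sigma_inv @ mu + np.log(pi)
        for mu, pi in zip(mus, priors)
    ])

# Two classes in R^2 sharing one covariance matrix (illustrative values).
mus = [np.array([0.0, 0.0]), np.array([2.0, 2.0])]
sigma = np.array([[1.0, 0.3],
                  [0.3, 1.0]])
priors = [0.5, 0.5]

x = np.array([1.8, 1.9])            # a point near the second class mean
deltas = lda_discriminants(x, mus, sigma, priors)
print(np.argmax(deltas))            # index of the predicted class
```

Maximizing $\delta_k(x)$ over $k$ is equivalent to the log-ratio view: the log-ratio between classes $k$ and $l$ is exactly $\delta_k(x) - \delta_l(x)$, so the class with the largest $\delta_k$ wins every pairwise comparison.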