I'm reading the Deep Learning Book by Goodfellow, Bengio and Courville, and I'm struggling to follow an argument in Section 5.5 (pages 129-130).
How do they make the jump from equation 5.57 to 5.58? Have they implicitly taken the log of the product, allowing them to use the log product expansion rule? I.e.
$$\log (ab)=\log(a)+\log(b)$$
Or is this a special case where the argmax allows for this transformation?

Both things are happening. Taking the log of a product gives the sum of the logs: $$\log(xy)=\log(x)+\log(y).$$ And applying the log does not change *where* the maximum occurs, since log is a strictly increasing function, so the argmax is preserved.
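If I recall the equations in that section correctly (5.57 is the product form of the maximum-likelihood estimator over $m$ examples; the symbols below assume that setup), the full chain is:

$$
\theta_{\mathrm{ML}}
= \arg\max_{\theta} \prod_{i=1}^{m} p_{\text{model}}\!\left(\boldsymbol{x}^{(i)}; \theta\right)
= \arg\max_{\theta} \log \prod_{i=1}^{m} p_{\text{model}}\!\left(\boldsymbol{x}^{(i)}; \theta\right)
= \arg\max_{\theta} \sum_{i=1}^{m} \log p_{\text{model}}\!\left(\boldsymbol{x}^{(i)}; \theta\right).
$$

The second equality holds because $\log$ is strictly increasing, so composing it with the objective leaves the maximizing $\theta$ unchanged; the third is just the product rule for logs applied term by term.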