I'm currently studying for my machine learning exam and have come across a slide on optimization via maximum likelihood. I understand what the optimization is used for, but I can't figure out how log p(y_1, x_1, ..., y_n, x_n | w) was simplified (second line of the slide).
Is it just an assumption of independence? I would be grateful if anybody could clarify.
Note that in this example x_i is the training data and y_i is the value associated with that data point (it can be a class label, or a real value if it is a regression problem).
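For concreteness, this is the simplification I would expect if the data are assumed i.i.d. (this is my own reconstruction, not taken from the slide; p denotes the model's density/probability):

$$
\log p(y_1, x_1, \ldots, y_n, x_n \mid w)
= \log \prod_{i=1}^{n} p(y_i, x_i \mid w)
= \sum_{i=1}^{n} \log p(y_i, x_i \mid w)
$$

Is this the step being applied, i.e. the joint likelihood factorizing into a product over samples so the log turns it into a sum?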