How to evaluate the quality of the probability distribution output of a classifier?


In a classification problem, I have trained a neural network which outputs class probabilities for a given input. For a new input, I now want to evaluate the "quality" of the neural network's classification score. I'm not looking to evaluate the overall quality of the neural network; I want a quality score for its prediction on that specific input. What are some ways to measure this?

A simple way would be to give a 1 or 0 depending on whether the neural network classified the input correctly. Or, to get a continuous number, you could take the probability that the neural network assigns to the ground-truth class. However, this ignores all the other classes. Intuitively, if the ground truth is class 1, an output of (class1=0.6, class2=0.2, class3=0.2) is better than (class1=0.6, class2=0.3, class3=0.1), because in the second case the neural network is less confident of the distinction between class1 and class2. This reminds me of entropy, although in this case we do know the ground truth.
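To make the candidates concrete, here is a minimal Python sketch of what I have in mind (the function name and the "margin" measure are my own illustration, not established metrics): the 0/1 correctness score, the ground-truth probability (equivalently, the per-example log-loss), a margin between the ground-truth class and its strongest competitor, and the entropy of the full distribution.

```python
import numpy as np

def per_example_scores(probs, true_class):
    """Candidate per-input quality measures for a predicted
    class-probability vector `probs` and the ground-truth label."""
    probs = np.asarray(probs, dtype=float)

    p_true = probs[true_class]                     # probability on the ground truth
    log_loss = -np.log(p_true)                     # per-example cross-entropy (lower is better)
    correct = int(np.argmax(probs) == true_class)  # the simple 0/1 score

    # Margin between the ground-truth class and the strongest competitor;
    # this distinguishes (0.6, 0.2, 0.2) from (0.6, 0.3, 0.1).
    competitors = np.delete(probs, true_class)
    margin = p_true - competitors.max()

    # Entropy of the whole distribution (overall uncertainty; ignores the label).
    entropy = -np.sum(probs * np.log(probs + 1e-12))

    return {"correct": correct, "p_true": p_true, "log_loss": log_loss,
            "margin": margin, "entropy": entropy}

print(per_example_scores([0.6, 0.2, 0.2], true_class=0))
print(per_example_scores([0.6, 0.3, 0.1], true_class=0))
```

Note that on the two example distributions above, the margin ranks them the way my intuition does (0.4 vs 0.3), while raw entropy actually ranks them the other way around (roughly 0.95 vs 0.90 nats), since (0.6, 0.3, 0.1) is the more peaked distribution overall. So a label-aware measure seems necessary, which is why plain entropy alone doesn't feel like the right answer.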

Any suggestions? Thanks.