Say I'm running a classification machine learning algorithm with two classes, 0 and 1.
A 0 label means a visitor/row did not convert, while a 1 label means a visitor/row did convert.
The overall precision of the model is 0.89, the precision for label 0 is 1.00, and the precision for label 1 is 0.51.
Can someone explain what each of these three results means?
Does it mean that our model predicts a visitor will convert 89% of the time?
That it correctly predicted which visitors would convert 51% of the time, and which visitors would NOT convert 100% of the time?
Recall that precision in ML means TP / (TP + FP): of all the samples the model *predicted* to be a given class, the fraction that actually belong to that class.
A precision of 1.00 for label 0 means that every row the model predicted as label 0 really was label 0; there were no false positives for that class (assuming the incoming test data resembles the training data). But since precision says nothing about the negatives (TN, FN), it tells you nothing about how many actual label-0 rows the model missed; that is what recall measures.
The same applies to label 1. A precision of 0.51 for label 1 means that when the model predicts a row as label 1, it is right only 51% of the time; roughly half of the predicted converters did not actually convert.
As for the model's overall precision: when it is micro-averaged, it is calculated "as the sum of true positives across all classes divided by the sum of true positives and false positives across all classes." So 0.89 does not mean that any given row will be predicted correctly with probability 0.89; it is a pooled measure of true positives among all positive predictions, and one class's false positive is simultaneously another class's false negative. (The 0.89 you see may also be a weighted average of the per-class precisions; which one it is depends on how your library reports it.)
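To make the formula concrete, here is a minimal sketch with made-up labels (not your actual model output) computing TP / (TP + FP) per class for a 0/1 conversion problem:

```python
# Toy example: hypothetical true labels and model predictions for ten visitors.
y_true = [0, 0, 0, 0, 0, 1, 1, 1, 0, 1]
y_pred = [0, 0, 0, 0, 0, 1, 1, 0, 1, 1]

def precision(y_true, y_pred, label):
    # TP: rows predicted as `label` that really are `label`
    tp = sum(t == label and p == label for t, p in zip(y_true, y_pred))
    # FP: rows predicted as `label` that belong to the other class
    fp = sum(t != label and p == label for t, p in zip(y_true, y_pred))
    return tp / (tp + fp)

print(precision(y_true, y_pred, 0))  # 5/6 ≈ 0.83: one predicted non-converter actually converted
print(precision(y_true, y_pred, 1))  # 3/4 = 0.75: one predicted converter did not convert
```

Note that each per-class precision only looks at the rows *predicted* as that class, which is why it says nothing about the rows the model assigned to the other class.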
If you want to know how likely your model is to predict correctly overall, you should look at the accuracy metric instead.
Hope this answers your question.