Neural Networks Miscalibration Measure


I have read two papers related on a neural networks miscalibration problem (first: "On Calibration of Modern Neural Networks", link: https://arxiv.org/abs/1706.04599 ; second: "Multivariate Confidence Calibration for Object Detection", link: https://openaccess.thecvf.com/content_CVPRW_2020/papers/w20/Kuppers_Multivariate_Confidence_Calibration_for_Object_Detection_CVPRW_2020_paper.pdf).

In the second paper the authors propose to use various estimates of the probability of a neural network's predictions instead of its confidence. But after calibration they use the confidence (not the calculated probability estimate) to compute the miscalibration measure D-ECE.

I have a problem understanding the miscalibration measure calculation in the first paper. Which values (confidences or temperature-scaled confidences) do its authors use to compute the miscalibration measure ECE after calibration? I can't find an explanation of this point in the paper or in the code at the related GitHub repository. At first I thought that the temperature-scaled confidences are used for this, but now I'm uncertain, because in the second paper the authors use the original confidences to compute miscalibration.

1 Answer

If you're using temperature scaling, you use the temperature-scaled confidences to compute the calibration metrics, such as ECE. Temperature scaling divides the logits by a single scalar before the softmax, so it changes the confidences but not the predicted classes, and ECE is then evaluated on those rescaled confidences.
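To make this concrete, here is a minimal NumPy sketch (not the reference implementation from either paper) of the standard ECE computed on temperature-scaled confidences. The temperature value `T=2.0` and the toy logits are illustrative assumptions; in practice `T` is fitted on a validation set by minimizing NLL.

```python
import numpy as np

def expected_calibration_error(confidences, predictions, labels, n_bins=15):
    """Standard ECE: bin samples by confidence, then sum the
    bin-weighted absolute gap between accuracy and mean confidence."""
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            acc = (predictions[in_bin] == labels[in_bin]).mean()
            conf = confidences[in_bin].mean()
            ece += in_bin.mean() * abs(acc - conf)
    return ece

def temperature_scale(logits, T):
    """Softmax over logits / T (T > 1 softens the distribution).
    Dividing by a positive scalar preserves the argmax."""
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(z)
    return p / p.sum(axis=1, keepdims=True)

# Toy example: evaluate ECE *after* calibration, i.e. on the
# temperature-scaled confidences (T=2.0 is an illustrative value,
# not a fitted one).
rng = np.random.default_rng(0)
logits = rng.normal(size=(1000, 10)) * 5
labels = logits.argmax(axis=1)  # hypothetical toy labels

probs = temperature_scale(logits, T=2.0)
confidences = probs.max(axis=1)   # scaled confidences go into ECE
predictions = probs.argmax(axis=1)
ece = expected_calibration_error(confidences, predictions, labels)
print(ece)
```

Note that because temperature scaling preserves the argmax, the predictions (and hence the accuracy) are unchanged; only the confidences move, which is exactly why ECE must be evaluated on the scaled confidences for the metric to reflect the calibration step.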