Bayes risk detector for hypothesis testing: must the costs be constants?


When deriving the Bayes-optimal detector/classifier, i.e. the rule that minimizes the Bayes risk for a binary or multiple hypothesis test, every derivation I've seen assumes the misclassification costs are constants. E.g. in the minimum-probability-of-error classifier, $C_{ij} = 1$ if $i \neq j$ and $C_{ij} = 0$ if $i = j$. More generally the costs are still constants, and in the binary hypothesis setting the derivations usually make the reasonable assumption that $C_{10} > C_{00}$ and $C_{01} > C_{11}$.
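For concreteness, the constant-cost Bayes risk these derivations minimize is (in the notation above, with $C_{ij}$ the cost of deciding $H_i$ when $H_j$ is true):

$$ R = \sum_{i} \sum_{j} C_{ij} \, P(\text{decide } H_i \mid H_j) \, P(H_j), $$

and the standard result is that $R$ is minimized pointwise by deciding the $H_i$ that minimizes the posterior expected cost $\sum_j C_{ij} \, P(H_j \mid x)$ for each observed $x$.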

E.g. see appendices 3B and 3C on pages 90-93, and the main discussion earlier in that chapter: https://www.scribd.com/doc/177316319/Steven-M-Kay-Fundamentals-of-Statistical-Signal-Processing-Volume-2-Detection-Theory-1998

My question is whether these really have to be constants. Could $C_{ij}$ instead be a function of the data $x$? Perhaps the cost of choosing $H_i$ when $H_j$ is true is not constant but actually depends on where the observation falls within the decision regions?
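To make the question concrete, here is a minimal sketch (my own illustration, not from any of the cited derivations) of what a decision rule with data-dependent costs $C_{ij}(x)$ might look like, if one simply minimizes the posterior expected cost pointwise in $x$. The Gaussian likelihoods and the particular cost function are hypothetical choices for illustration:

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    """Univariate Gaussian density, used as an illustrative likelihood."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def bayes_decision(x, priors, likelihoods, cost):
    """Return the hypothesis index minimizing posterior expected cost at x.

    priors[j]      : P(H_j)
    likelihoods[j] : callable, p(x | H_j)
    cost(i, j, x)  : cost of deciding H_i when H_j is true -- note it is
                     allowed to depend on the observation x
    """
    # Posterior P(H_j | x) via Bayes' rule
    post = np.array([priors[j] * likelihoods[j](x) for j in range(len(priors))])
    post = post / post.sum()
    # Posterior expected cost of each decision, evaluated at this x
    risks = [sum(cost(i, j, x) * post[j] for j in range(len(priors)))
             for i in range(len(priors))]
    return int(np.argmin(risks))

# Hypothetical binary example: H_0 ~ N(0,1), H_1 ~ N(2,1), equal priors.
priors = [0.5, 0.5]
likelihoods = [lambda x: gaussian_pdf(x, 0.0, 1.0),
               lambda x: gaussian_pdf(x, 2.0, 1.0)]

def cost(i, j, x):
    """Illustrative data-dependent cost: a miss (decide H_0 under H_1)
    gets more expensive as |x| grows; other errors cost 1."""
    if i == j:
        return 0.0
    return 1.0 + abs(x) if (i == 0 and j == 1) else 1.0

print(bayes_decision(0.0, priors, likelihoods, cost))  # decides H_0 near mu_0 = 0
print(bayes_decision(2.0, priors, likelihoods, cost))  # decides H_1 near mu_1 = 2
```

With constant costs this reduces to the usual likelihood-ratio test; the point of the sketch is only that the pointwise minimization itself does not seem to require $C_{ij}$ to be constant, which is what I'd like confirmed (or refuted) by a proper derivation.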