This is in reference to the zero-one loss example in the Duda & Hart book. The figure shows the likelihood ratio plotted against x. In the image attached here, the example states that a certain threshold is set if the errors are treated equally. But if we penalize classifying state w1 as w2 more than the converse, the threshold increases (in the graph), and the range of x values for which we classify w1 gets smaller.
The threshold is calculated as $\theta = \dfrac{L(1|2)\,P(w_2)}{L(2|1)\,P(w_1)}$, given $L(i|i) = 0$. In the example, $L(2|1) > L(1|2)$. Shouldn't this decrease the threshold, and thus ensure that more values are classified as w1?
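To make my reading of the formula concrete, here is a small Python sketch of how I am computing the threshold (the loss values and priors are hypothetical numbers I picked just for illustration):

```python
def threshold(l_12, l_21, p_w1, p_w2):
    """Likelihood-ratio threshold for the two-category case with L(i|i) = 0.

    Decide w1 when p(x|w1)/p(x|w2) exceeds this threshold.
    """
    return (l_12 * p_w2) / (l_21 * p_w1)

# Equal priors and symmetric (zero-one) losses: threshold is 1.
print(threshold(1.0, 1.0, 0.5, 0.5))  # 1.0

# Penalize misclassifying w1 as w2 more, i.e. L(2|1) > L(1|2):
# the threshold drops, so more x values exceed it and get labeled w1.
print(threshold(1.0, 2.0, 0.5, 0.5))  # 0.5
```

With these numbers the threshold moves down, not up, which is exactly the opposite of what I understood the figure to show.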
Thinking about it analytically as well: if we penalize misclassifying w1 as w2 more than the converse, wouldn't that push the classifier to label more points as w1? What am I missing here?
PS: this is my first question here. I will be using this exchange more and more, since I am self-studying. Any feedback on how to phrase a question is also welcome.