Expected Prediction Error for Classification


I am self-studying The Elements of Statistical Learning and got stuck on the following equations. The expected prediction error for classification is minimized by:

$$G(x) = \operatorname*{argmin}_{g \in G} \sum_{k=1}^{K} L(G_k, g)\, Pr(G_k \mid X = x)$$ where $L$ is the loss function, $G$ is the set of possible classes $G_1, \dots, G_K$, and $g$ is our prediction. Then the book says:

with the 0-1 loss function this simplifies to:

$$G(x) = \operatorname*{argmin}_{g \in G}\,[1 - Pr(g \mid X = x)]$$

I could not understand the simplification.


If we use the 0-1 loss, then $L(G_k, g) = 0$ when $g = G_k$ and $1$ otherwise, so the only terms that survive in the sum are those for classes different from $g$. The expected error is therefore the total probability of the classes other than $g$:

$$G(x) = \operatorname*{argmin}_{g \in G} \sum_{\substack{1 \le k \le K \\ G_k \ne g}} Pr(G_k \mid X = x)$$

Since the class posterior probabilities sum to one, the probability of the event "the true class is not $g$" is one minus the probability of its complement, $P(E) = 1 - P(E^c)$, which gives:

$$G(x) = \operatorname*{argmin}_{g \in G}\,[1 - Pr(g \mid X = x)]$$

Minimizing $1 - Pr(g \mid X = x)$ is the same as maximizing $Pr(g \mid X = x)$, so under 0-1 loss the optimal rule simply picks the most probable class given $x$ (the Bayes classifier).
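The simplification is easy to verify numerically. Below is a minimal sketch (the posterior probabilities are made up for illustration): it builds the 0-1 loss matrix, computes the expected loss of each candidate prediction $g$ as $\sum_k L(G_k, g)\,Pr(G_k \mid X = x)$, and checks that this equals $1 - Pr(g \mid X = x)$ and that its argmin matches the argmax of the posterior.

```python
import numpy as np

# Hypothetical posterior probabilities Pr(G_k | X = x) for K = 3 classes.
posterior = np.array([0.2, 0.5, 0.3])
K = len(posterior)

# 0-1 loss matrix: L(G_k, g) = 0 if k == g, else 1.
loss = 1 - np.eye(K)

# Expected loss of predicting g: sum over k of L(G_k, g) * Pr(G_k | X = x).
expected_loss = loss.T @ posterior

# The sum over classes different from g collapses to 1 - Pr(g | X = x).
assert np.allclose(expected_loss, 1 - posterior)

# Minimizing the expected loss picks the most probable class.
assert np.argmin(expected_loss) == np.argmax(posterior)
print(expected_loss)  # [0.8 0.5 0.7]
```

The same check works for any probability vector: because each row of the 0-1 loss matrix sums the probabilities of all classes except one, the expected loss vector is always `1 - posterior` elementwise.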