Adaboost intuition

The intuition behind AdaBoost is that if a decision stump performs well, i.e. $\alpha_t > 0$ by a significant amount, then we assign more weight to the misclassified instances and less weight to the correctly classified ones: we want to focus on what we got wrong.

My question is: what is the reverse case here? If a decision stump performs poorly, i.e. $\alpha_t < 0$ by a significant amount, do we then focus more on the correctly classified instances? What is the intuition behind that?
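The reweighting described above can be sketched as follows (a minimal illustration, not from any particular library; the function name `reweight` and the sample data are made up):

```python
import math

# One AdaBoost reweighting step (illustrative sketch).
# y and preds are +/-1 labels; alpha is the stump's vote weight.
def reweight(weights, y, preds, alpha):
    # Misclassified points (y_i != pred_i) are multiplied by exp(+alpha),
    # correctly classified points by exp(-alpha); then we renormalize.
    new_w = [w * math.exp(-alpha * yi * pi)
             for w, yi, pi in zip(weights, y, preds)]
    total = sum(new_w)
    return [w / total for w in new_w]

weights = [0.25, 0.25, 0.25, 0.25]
y       = [+1, +1, -1, -1]
preds   = [+1, -1, -1, -1]   # the second point is misclassified
new_w   = reweight(weights, y, preds, alpha=0.5)
# With alpha > 0, the misclassified point ends up with the largest weight.
```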

$\alpha_s$ is always positive, so the reverse case never arises.

The weight $\alpha_s$ assigned to a particular stump classifier is computed from its weighted error $E^s$ by

$\alpha_s = \frac{1}{2} \ln\left(\frac{1 - E^s}{E^s}\right)$

Note the following, under the standard weak-learner assumption that each stump does better than random guessing (a stump with $E^s > \frac{1}{2}$ is equivalent to a stump with error $1 - E^s < \frac{1}{2}$ whose predictions are flipped):

$E^s < \frac{1}{2} \mbox{ and } (1 - E^s) > \frac{1}{2} \mbox{, so } E^s < (1 - E^s) \mbox{, therefore, } \frac{(1 - E^s)}{E^s} > 1$

Since $\frac{(1 - E^s)}{E^s} > 1$, $\alpha_s$ must be positive (the x-intercept of $y = \ln(x)$ is $(1, 0)$, and $\ln$ is increasing).
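As a quick numeric check (illustrative Python, not from the post), the formula indeed gives a positive weight for any error below $\frac{1}{2}$, a larger weight for a better stump, and weight zero at chance level:

```python
import math

# alpha_s = 0.5 * ln((1 - E)/E), the stump weight from the answer above.
def alpha(error):
    return 0.5 * math.log((1 - error) / error)

a_good = alpha(0.1)   # strong stump -> large positive weight
a_ok   = alpha(0.4)   # weak stump   -> small positive weight
a_coin = alpha(0.5)   # chance-level stump -> weight 0
```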