The Kullback-Leibler divergence is defined as follows:
If $A$ and $B$ are continuous probability distributions on $X$, then
$$KL(A,B) = \int_{X} A(x) \log \left(\frac{A(x)}{B(x)}\right)dx$$
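To make the definition concrete, here is a quick numerical check (the choice of $A$ and $B$ is mine: two unit-variance normals, for which the closed form $KL = (\mu_A - \mu_B)^2/2$ is known):

```python
# Numerically evaluate KL(A, B) for A = N(0, 1), B = N(1, 1).
# The closed form for two unit-variance normals is (mu_A - mu_B)^2 / 2 = 0.5.
import numpy as np
from scipy import integrate, stats

A, B = stats.norm(0, 1), stats.norm(1, 1)

# Integrand A(x) * log(A(x)/B(x)); using logpdf avoids 0/0 in the far tails.
kl, _ = integrate.quad(
    lambda x: A.pdf(x) * (A.logpdf(x) - B.logpdf(x)), -np.inf, np.inf
)
print(kl)  # ≈ 0.5, matching the closed form
```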
We've learned that this value is always non-negative.
However, let's take $X = [0,1]$ and $B(x) \equiv 1$, which is a valid continuous p.d.f. on $[0,1]$. Then
$$KL(A,B) = \int_0^1 A(x) \log \left(\frac{A(x)}{1}\right)dx = \int_0^1 A(x) \log \left(A(x)\right)dx = -H(A) \le 0$$
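This identity can also be checked numerically (a sketch with scipy; the choice $A = \mathrm{Beta}(2,2)$ is mine and arbitrary):

```python
# Check numerically that KL(A, B) equals the integral of A(x) log A(x)
# when B is the uniform density on [0, 1].  A = Beta(2, 2) is an
# arbitrary choice of density on [0, 1].
from scipy import integrate, stats

A = stats.beta(2, 2)  # density 6x(1-x) on [0, 1]

# KL(A, B) with B(x) = 1: integral of A(x) log(A(x)/1) dx
kl, _ = integrate.quad(lambda x: A.pdf(x) * A.logpdf(x), 0, 1)

# scipy's entropy() for a continuous distribution is H(A) as defined below
h = float(A.entropy())

print(kl, -h)  # the two values should agree
```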
where $H(A)$ denotes the entropy of the distribution $A$, which we've also learned is non-negative. For continuous variables it is defined as:
$$H(A) = -\int_{X} A(x) \log \left(A(x)\right)dx \ge 0$$
(and therefore $-H(A)$ is exactly the value obtained above).
Since $A$ is arbitrary, we can pick one with $H(A) > 0$, which gives $KL(A,B) = -H(A) < 0$.
But this contradicts what I have learned, namely that $KL$ is always non-negative. Where did I go wrong?