I get how to use differentials to compute error, but why is it a "good" method? For example, a standard problem is something like:
If the radius of a circle is $3 \pm 0.1$ cm, find the area with error.
What's wrong/undesirable with just finding the area for $r=2.9$ and $r=3.1$ to determine the error?
The nice thing about using differentials is that as you adjust the error, you don't need to recompute the highest and lowest values each time; you just multiply by a fixed coefficient.
So, for your example, we might have a situation where we ask "if the radius is $3 \pm 0.1$ cm, what's the error in the area?" But the more useful question is the reverse: "if we want the area to be within $0.01 \text{ cm}^2$ of the area of a circle of radius $3$ cm, how small does the error in the radius need to be?" In this case, we could solve for $\Delta r$, saying that if our radius is $3 \pm \Delta r$ cm, then $$ \Delta A \approx 2 \pi (3 \text{ cm})\Delta r \leq 0.01 \text{ cm}^2 $$ and solve for $\Delta r$.
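As a quick sanity check, here is a sketch in Python of that calculation, using the radius $3$ cm and the (hypothetical) tolerance $0.01 \text{ cm}^2$ from above:

```python
import math

r = 3.0     # radius in cm
tol = 0.01  # allowed error in the area, in cm^2

# Differential estimate: dA = 2*pi*r * dr,
# so requiring dA <= tol gives dr <= tol / (2*pi*r).
dr_max = tol / (2 * math.pi * r)
print(dr_max)  # about 0.00053 cm
```

Changing the tolerance only rescales `dr_max`; no equation has to be re-solved.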
On the other hand, doing things without differentials, we would have to solve something like $$ \Delta A = \pi (3\text{ cm} + \Delta r)^2 - \pi (3\text{ cm})^2 \leq 0.01 \text{ cm}^2. $$ In this situation we merely get a quadratic inequality in $\Delta r$, but with other functions we could wind up with something much more complicated.
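For small $\Delta r$ the two approaches barely differ anyway: expanding the exact expression gives $\Delta A = \pi(2r\,\Delta r + \Delta r^2)$, so the differential estimate $2\pi r\,\Delta r$ is off by exactly $\pi\,\Delta r^2$. A small Python sketch (with a hypothetical $\Delta r = 0.001$ cm) illustrates this:

```python
import math

r = 3.0
dr = 0.001  # a small hypothetical error in the radius, in cm

# Exact change in area vs. the differential approximation
exact = math.pi * (r + dr) ** 2 - math.pi * r ** 2  # pi*(2*r*dr + dr^2)
linear = 2 * math.pi * r * dr                       # differential estimate

print(exact - linear)  # equals pi * dr**2, which is tiny
```

This is why interval endpoints ($r = 2.9$ and $r = 3.1$) and differentials give nearly the same error bound here: the neglected term is quadratic in $\Delta r$.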