I am using Sean Carroll's book "Spacetime and Geometry" to learn about differential geometry from a physics point of view. After introducing vectors on a manifold, he defines the commutator of two vector fields as (Google books link, p. 67)
$$ [X, Y](f) = X(Y(f))-Y(X(f)) \tag{2.20} $$
At the bottom of the page, he states
Note that since partials commute, the commutator of the vector fields given by the partial derivatives of coordinate functions, $\{\partial_\mu\}$, always vanishes.
I don't understand this statement. As far as I understand, for a given chart and curve with parameter $\lambda$ on the manifold, we can write a vector as
$$ X = X^\mu \hat e _{(\mu)}\quad\to\quad \frac{d}{d\lambda} = \frac{dx^\mu}{d\lambda}\partial_\mu \tag{2.16} $$
where I used the coordinate basis. But what does the author mean by "the vector fields given by the partial derivatives of coordinate functions"? How would one calculate the commutator in this case?
The vector fields he is referring to are those of the form $\partial_{\mu_0}$ for fixed $\mu_0$. Or, in the notation $X = X^\mu \hat e _{(\mu)}$, that would be the vector field $X$ such that $X^\mu=1$ for $\mu=\mu_0$ and $X^\mu=0$ for $\mu\neq\mu_0$. This vector field acts on a given function $f$ by $X(f)=\partial_{\mu_0}f$, i.e. it takes the derivative of $f$ in the direction of the coordinate $x^{\mu_0}$. (Note that Carroll's phrase "partial derivatives of coordinate functions" is arguably inaccurate: we aren't differentiating the coordinate functions themselves, but rather considering the vector fields given by partial derivatives along the directions those coordinates define.)
Now suppose you have two such vector fields $X=\partial_{\mu_0}$ and $Y=\partial_{\mu_1}$. The commutator when applied to a function $f$ then gives $$[X,Y](f)=\partial_{\mu_0}\partial_{\mu_1}f-\partial_{\mu_1}\partial_{\mu_0}f.$$ But that's $0$, just from the multivariable calculus fact that partial derivatives commute.
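This can be checked symbolically. Here is a small sketch using sympy (the variable names and the generic function $f$ are my own choices, not from Carroll): we apply the coordinate vector fields $\partial_x$ and $\partial_y$ to an arbitrary smooth function and confirm their commutator annihilates it.

```python
import sympy as sp

x, y = sp.symbols("x y")
f = sp.Function("f")(x, y)  # a generic smooth function of two coordinates

# The coordinate vector fields X = d/dx and Y = d/dy act on f by differentiation.
XYf = sp.diff(sp.diff(f, y), x)  # X(Y(f)) = d/dx d/dy f
YXf = sp.diff(sp.diff(f, x), y)  # Y(X(f)) = d/dy d/dx f

commutator = sp.simplify(XYf - YXf)
print(commutator)  # 0, because mixed partial derivatives commute
```

The result is identically zero for any $f$, which is exactly the statement that $[\partial_\mu, \partial_\nu] = 0$.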
It is perhaps more enlightening to look at an example where the commutator is not zero, to see what's special about the case above. Let's consider vector fields on $\mathbb{R}$, where I'll write $\partial_x$ for the ordinary derivative. Take the two vector fields $X=\partial_x$ and $Y=x\partial_x$. That is, to compute $X(f)$ you just take the derivative $f'$, and to compute $Y(f)$ you take the derivative and then multiply by $x$. We then have $$X(Y(f))=X(xf')=f'+xf''$$ where we get two terms because differentiating $xf'$ requires the product rule. On the other hand, $$Y(X(f))=Y(f')=xf''.$$ So the difference is $$[X,Y](f)=f'=X(f).$$ In other words, $[X,Y]=X$. Here the commutator became nonzero because of the coefficient $x$ in $Y$, which forced us to use the product rule when calculating $X(Y(f))$ but not when calculating $Y(X(f))$. So the point is that if all your vector fields are just partial derivatives with no coefficient functions multiplying them, this sort of thing doesn't happen and everything commutes.
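The same sympy check (again a sketch with my own variable names, not from the book) reproduces this nonzero commutator: applying $X=\partial_x$ and $Y=x\partial_x$ in both orders to a generic $f$ leaves exactly $f' = X(f)$.

```python
import sympy as sp

x = sp.symbols("x")
f = sp.Function("f")(x)  # a generic smooth function on R

def X(g):
    # X = d/dx
    return sp.diff(g, x)

def Y(g):
    # Y = x d/dx
    return x * sp.diff(g, x)

commutator = sp.simplify(X(Y(f)) - Y(X(f)))
print(commutator)                            # f'(x), i.e. [X, Y](f) = X(f)
print(sp.simplify(commutator - X(f)) == 0)   # confirms [X, Y] = X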