INTRODUCTION.
It is a standard fact that, for smooth, compactly supported $f, g\colon \mathbb R^d\to \mathbb R$, the Euclidean Laplacian $$\tag{1}\Delta=\sum_{j=1}^d \frac{\partial^2}{\partial x_j^2}$$ commutes with the convolution $$ f\ast g(x):=\int_{\mathbb R^d} f(x-y)g(y)\, dy. $$ We can rephrase this in terms of the linear operator $$\tag{2} T_g f:=f\ast g; $$ indeed, the aforementioned property is equivalent to the vanishing of the commutator $$ [T_g, \Delta]=T_g\Delta - \Delta T_g=0. $$
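As a quick discrete sanity check (my own illustration, not part of the claim above): on a periodic grid, circular convolution and the 5-point discrete Laplacian are both circulant operators, hence diagonalized by the discrete Fourier transform, so they commute exactly up to rounding.

```python
import numpy as np

# Discrete analogue of [T_g, Delta] = 0 on a periodic 2-D grid.
# Both operators are circulant, so they commute (up to float error).
n = 64
rng = np.random.default_rng(0)
f = rng.standard_normal((n, n))
g = rng.standard_normal((n, n))

def conv(a, b):
    # circular convolution computed via the FFT
    return np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)).real

def lap(a):
    # 5-point discrete Laplacian with periodic boundary conditions
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0)
            + np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4*a)

# Delta(f * g) == (Delta f) * g
assert np.allclose(lap(conv(f, g)), conv(lap(f), g))
```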
THE QUESTION.
Let us define the spherical Laplacian $\Delta_{\mathbb S^{d-1}}$ via the formula $$ \frac{1}{r^2}\Delta_{\mathbb S^{d-1}}f := \Delta f - \frac{1}{r^{d-1}}\frac{\partial}{\partial r}\left(r^{d-1}\frac{\partial f}{\partial r}\right),$$ where $r=\lvert x \rvert$ denotes the radial coordinate of $\mathbb R^d$. $^{[1]}$
Big Question. Can you compute the commutator $[T_g, \Delta_{\mathbb S^{d-1}}]\ ?$
This question is probably quite ambitious, and I would already be happy with some partial answer. For example,
Baby Question. Can you compute $[T_g, \Delta_{\mathbb S^{d-1}}]$ in the case $$g(x)=\phi(r)H_n(x),$$ where $\phi$ is smooth and compactly supported and $H_n$ is a homogeneous harmonic polynomial of degree $n$? If the general case is too hard, can you compute the commutator for small values of $n$, say $n\in \{0, 1, 2\}$?
SOME THOUGHTS.
If $g$ is radial, then $[T_g, \Delta_{\mathbb S^{d-1}}]=0$. This answers the Baby Question for $n=0$.
To prove this, we note that the radial symmetry of $g$ implies that $$\tag{3} T_g(f\circ R)=(T_g f)\circ R,\qquad \forall R\in SO(d).$$ In turn, this implies the following commutation relations: $$\tag{4} [T_g, Z_{ij}]=0,\qquad \text{where } Z_{ij}:=x_i\partial_{x_j}-x_j\partial_{x_i} $$ and $1\le i<j\le d$. To prove (4), we observe that the operators $Z_{ij}$ are infinitesimal rotations in the following sense. We let $$ R_{12}(\theta):=\begin{bmatrix} \cos \theta & -\sin\theta & 0 & 0 & \ldots & 0 \\ \sin \theta & \cos \theta & 0 & 0 & \ldots & 0 \\ 0 & 0 & 1 & 0 &\ldots & 0 \\ 0 & 0& 0 & 1 & \ldots & 0 \\ \ldots & \ldots & \ldots & \ldots & \ldots& \ldots \\ 0 & 0 & 0 & \ldots & 0 & 1\end{bmatrix}, $$ observing that $R_{12}(\theta)\in SO(d)$. We have the differential relation $$ \left.\frac{\partial}{\partial \theta}f\left( R_{12}(\theta)x\right) \right|_{\theta=0}= Z_{12} f(x),$$ and similarly for the other operators $Z_{ij}$. So (3) implies (4) by differentiation.
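The differential relation above can be checked symbolically; the test function below is an arbitrary choice of mine, and $d=2$ suffices since $R_{12}(\theta)$ only mixes the first two coordinates.

```python
import sympy as sp

# Symbolic check of  d/dtheta f(R_12(theta) x) |_{theta=0} = Z_12 f(x)
# for one concrete smooth f (hypothetical test function).
x1, x2, th = sp.symbols('x1 x2 theta', real=True)
f = sp.exp(-(x1**2 + x2**2)) * (x1**3 + sp.sin(x2))

# f composed with the rotation R_12(theta)
f_rot = f.subs([(x1, sp.cos(th)*x1 - sp.sin(th)*x2),
                (x2, sp.sin(th)*x1 + sp.cos(th)*x2)], simultaneous=True)

lhs = sp.diff(f_rot, th).subs(th, 0)
rhs = x1*sp.diff(f, x2) - x2*sp.diff(f, x1)   # Z_12 f
assert sp.simplify(lhs - rhs) == 0
```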
Now, the property (4) implies $[T_g, \Delta_{\mathbb S^{d-1}}]=0$, because $$ \Delta_{\mathbb S^{d-1}}=\sum_{1\le i<j\le d} Z_{ij}^2. $$ (We remark that this last formula is the spherical analogue of (1)).
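The identity $\Delta_{\mathbb S^{d-1}}=\sum_{i<j} Z_{ij}^2$ can also be verified symbolically. In Cartesian coordinates $r\partial_r$ is the Euler operator $E=\sum_i x_i\partial_{x_i}$, and since $r^2\cdot\frac{1}{r^{d-1}}\partial_r(r^{d-1}\partial_r)=(r\partial_r)^2+(d-2)\,r\partial_r$, the definition of the spherical Laplacian unpacks to $\Delta_{\mathbb S^{d-1}}=r^2\Delta-E^2-(d-2)E$. A sketch in $d=3$ with a test function of my choosing:

```python
import sympy as sp

# Check in d = 3 that sum_{i<j} Z_ij^2 f equals
# r^2*Delta f - E(E f) - (d-2)*E f,  E = Euler operator = r d/dr.
x, y, z = sp.symbols('x y z', real=True)
f = x**2*sp.sin(y) + sp.exp(z)*x*y      # hypothetical smooth test function
d = 3

def Z(u, v, h):
    # the rotation generator Z_uv = u d/dv - v d/du
    return u*sp.diff(h, v) - v*sp.diff(h, u)

lhs = sum(Z(a, b, Z(a, b, f)) for a, b in [(x, y), (x, z), (y, z)])

E = lambda h: x*sp.diff(h, x) + y*sp.diff(h, y) + z*sp.diff(h, z)
r2 = x**2 + y**2 + z**2
rhs = r2*(sp.diff(f, x, 2) + sp.diff(f, y, 2) + sp.diff(f, z, 2)) \
      - E(E(f)) - (d - 2)*E(f)

assert sp.simplify(lhs - rhs) == 0
```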
PERSPECTIVES.
The previous section shows that knowledge of the commutators in (4) yields $[T_g, \Delta_{\mathbb S^{d-1}}]$. Indeed, assuming that $$ [T_g, Z_{ij}]=C_{ij}, $$ the general identity $[A, B^2]=[A,B]B+B[A,B]$ gives $$ [T_g, \Delta_{\mathbb S^{d-1}}]=\left[T_g, \sum Z_{ij}^2\right]=\sum_{1\le i<j\le d}\left( C_{ij}Z_{ij} + Z_{ij}C_{ij}\right).$$ Can these $C_{ij}$ be explicitly computed, at least for $g$ of the form $\phi(r)H_n(x)$, as in the Baby Question?
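Whatever the $C_{ij}$ turn out to be, the expansion of $[T_g, Z_{ij}^2]$ rests on the general identity $[A,B^2]=[A,B]B+B[A,B]$, which can be sanity-checked on random matrices (a finite-dimensional stand-in for $T_g$ and $Z_{ij}$):

```python
import numpy as np

# [A, B^2] = [A, B] B + B [A, B] on random 5x5 matrices.
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
B = rng.standard_normal((5, 5))

comm = lambda P, Q: P @ Q - Q @ P
C = comm(A, B)
assert np.allclose(comm(A, B @ B), C @ B + B @ C)
```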
[1] This operator is often known as the Laplace-Beltrami operator.
THE ANSWER.
The problem turned out to have a simpler solution than I expected. It is a consequence of the following proposition.
Main Proposition. For all smooth and compactly supported $f, g$,
$$\tag{1}\Delta_{\mathbb S^{d-1}} (f \ast g) = (\Delta_{\mathbb S^{d-1}} f) \ast g +2 \sum_{1\le i<j\le d} Z_{ij}f\ast Z_{ij}g + f \ast \Delta_{\mathbb S^{d-1}}g.$$
Remark. Essentially, the algebra behind this is the same as the one of the usual formula $$ \frac{d^2}{dx^2}( t(x)s(x))= \frac{d^2 t}{dx^2} s + 2\frac{dt}{dx}\frac{ds}{dx}+t\frac{d^2 s}{dx^2} .$$
Proof of the Main Proposition. As observed in the main question, the spherical derivative $Z_{ij}$ satisfies $$ Z_{ij} (f\ast g)(x)=\lim_{\epsilon\to 0} \frac1{\epsilon} \int_{\mathbb R^d} \left[ f(R_{ij}(\epsilon)x -y)g(y) -f(x-y)g(y)\right]\, dy.$$ We claim that this implies the Leibniz formula
$$\tag{2} Z_{ij}(f\ast g)=(Z_{ij}f)\ast g + f\ast Z_{ij}g. $$ Indeed, substituting $y\mapsto R_{ij}(\epsilon)y$ in the second line below (this preserves Lebesgue measure, since $R_{ij}(\epsilon)\in SO(d)$), we compute $$ \begin{split} &\frac1\epsilon \left(f\ast g(R_{ij}(\epsilon)x)-f\ast g(x)\right) = \frac1\epsilon \int_{\mathbb R^d} \left[f(R_{ij}(\epsilon)x-y)-f(x-y)\right]g(y)\, dy \\ &=\frac1\epsilon\int_{\mathbb R^d} f(R_{ij}(\epsilon)(x-y))g(R_{ij}(\epsilon)y) - f(x-y)g(y)\,dy \\ &=\frac1\epsilon\int_{\mathbb R^d} \left( f(R_{ij}(\epsilon)(x-y))-f(x-y)\right)g(R_{ij}(\epsilon)y) + f(x-y)\left( g(R_{ij}(\epsilon)y)-g(y)\right)\, dy \\ &=\left(\frac{f\circ R_{ij}(\epsilon)-f}{\epsilon}\right)\ast \left(g\circ R_{ij}(\epsilon)\right) + f\ast \left(\frac{g\circ R_{ij}(\epsilon)-g}{\epsilon}\right) \\ &\longrightarrow (Z_{ij}f)\ast g + f\ast Z_{ij}g \qquad \text{as } \epsilon\to 0, \end{split} $$ where the last step uses that the difference quotients converge uniformly ($f$ and $g$ being smooth and compactly supported) and that $g\circ R_{ij}(\epsilon)\to g$ uniformly. Apologies for the long chain of equations; it is nothing more than a straightforward adaptation of the proof of the usual Leibniz rule for the one-dimensional derivative $\frac{d}{dx}$.
Once (2) is proved, (1) follows immediately by applying (2) twice, since $$ \Delta_{\mathbb S^{d-1}}=\sum_{1\le i<j\le d} Z_{ij}^2.\quad \Box$$
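As a sanity check of (2), here is a closed-form instance in $d=2$ with test functions of my choosing, $f=x_1e^{-|x|^2/2}$ and $g=e^{-|x|^2/2}$. Every convolution below is explicit, via the standard Gaussian identities $x_i e^{-|x|^2/2}=-\partial_{x_i}e^{-|x|^2/2}$ and (in $d=2$) $e^{-|x|^2/2}\ast e^{-|x|^2/2}=\pi e^{-|x|^2/4}$, together with the fact that derivatives pass through convolutions.

```python
import sympy as sp

# Closed-form check of the Leibniz formula (2) in d = 2 for
# f = x1*G and g = G, where G = exp(-r^2/2).
x1, x2 = sp.symbols('x1 x2', real=True)
r2 = x1**2 + x2**2
G = sp.exp(-r2/2)
GG = sp.pi*sp.exp(-r2/4)                   # G * G

Z = lambda h: x1*sp.diff(h, x2) - x2*sp.diff(h, x1)   # Z_12

f, g = x1*G, G
fg = -sp.diff(GG, x1)                      # f * g = (-d/dx1 G) * G

Zf = Z(f)
assert sp.simplify(Zf + x2*G) == 0         # Zf = -x2*G, again of the form x_i*G
Zf_conv_g = sp.diff(GG, x2)                # (-x2*G) * G = +d/dx2 (G*G)
Zg = Z(g)                                  # g is radial, so Zg = 0 and f*Zg = 0
assert sp.simplify(Zg) == 0

# Z(f*g) = (Zf)*g + f*(Zg)
assert sp.simplify(Z(fg) - Zf_conv_g) == 0
```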
Now that the Main Proposition is proved, the commutator can be computed as $$\begin{split} [T_g, \Delta_{\mathbb S^{d-1}}]f &=\Delta_{\mathbb S^{d-1}}f\ast g -\Delta_{\mathbb S^{d-1}}(f\ast g) \\ &=-2\sum_{1\le i<j\le d} Z_{ij}f\ast Z_{ij}g -f\ast \Delta_{\mathbb S^{d-1}}g. \end{split} $$ This answers the "Big Question".
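The final formula can also be verified in closed form. Below is a symbolic check in $d=2$, where there is a single generator $Z=Z_{12}$ and, writing $r\partial_r$ as the Euler operator $E=x_1\partial_{x_1}+x_2\partial_{x_2}$, the definition in the question gives $\Delta_{\mathbb S^{1}}=r^2\Delta-E^2$. The test functions $f=x_1e^{-|x|^2/2}$, $g=x_2e^{-|x|^2/2}$ are my own choice; all convolutions are explicit via $x_ie^{-|x|^2/2}=-\partial_{x_i}e^{-|x|^2/2}$ and $e^{-|x|^2/2}\ast e^{-|x|^2/2}=\pi e^{-|x|^2/4}$.

```python
import sympy as sp

# Closed-form check of  [T_g, DS] f = -2 (Zf)*(Zg) - f*(DS g)
# in d = 2, with f = x1*G, g = x2*G, G = exp(-r^2/2).
x1, x2 = sp.symbols('x1 x2', real=True)
r2 = x1**2 + x2**2
G = sp.exp(-r2/2)
GG = sp.pi*sp.exp(-r2/4)                 # G * G

Z  = lambda h: x1*sp.diff(h, x2) - x2*sp.diff(h, x1)
E  = lambda h: x1*sp.diff(h, x1) + x2*sp.diff(h, x2)
DS = lambda h: r2*(sp.diff(h, x1, 2) + sp.diff(h, x2, 2)) - E(E(h))

f, g = x1*G, x2*G

# closed forms used below, checked first
assert sp.simplify(Z(f) + x2*G) == 0     # Zf = -x2*G
assert sp.simplify(Z(g) - x1*G) == 0     # Zg =  x1*G
assert sp.simplify(DS(f) + f) == 0       # DS f = -f  (degree-1 spherical harmonic)
assert sp.simplify(DS(g) + g) == 0       # DS g = -g

fg   = sp.diff(GG, x1, x2)               # f*g = (-d1 G)*(-d2 G) = d1 d2 (G*G)
ZfZg = -sp.diff(GG, x1, x2)              # (Zf)*(Zg) = (-x2*G)*(x1*G)

lhs = -fg - DS(fg)                       # (DS f)*g - DS(f*g), using DS f = -f
rhs = -2*ZfZg - (-fg)                    # -2 (Zf)*(Zg) - f*(DS g), using DS g = -g
assert sp.simplify(lhs - rhs) == 0
```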