Let $\alpha$ be a scalar function on a smooth, closed surface $\Gamma$, and let $\mathbf{n}$ be the unit normal vector to $\Gamma$.
The surface gradient operator is defined as $\nabla_\Gamma = \nabla - \mathbf{n}(\mathbf{n} \cdot \nabla)$.
The surface curl is defined as $\nabla_\Gamma \times [\,\cdot\,] = [\, \nabla_\Gamma \cdot (\mathbf{n} \times [\,\cdot\,]) \,]\, \mathbf{n}$, according to https://doi.org/10.1007/s10444-018-9587-7
As you may know, the curl of a gradient is identically zero. I am wondering whether $\nabla_\Gamma \times (\nabla_\Gamma \alpha)$ has the same property, i.e. whether $\nabla_\Gamma \times (\nabla_\Gamma \alpha) = 0$.
Using the definition of the surface curl, we get
$$\nabla_\Gamma \times (\nabla_\Gamma \alpha) = [\nabla_\Gamma \cdot (\mathbf{n} \times \nabla_\Gamma \alpha)]\,\mathbf{n}$$
I am inclined to use the identity $\nabla \cdot (\mathbf{A} \times \mathbf{B}) = (\nabla \times \mathbf{A})\cdot \mathbf{B} - (\nabla \times \mathbf{B})\cdot \mathbf{A}$, which holds for vector fields in $\mathbb{R}^3$, but I do not know whether it also holds for vector fields tangent to a surface. If it is applicable, then we get
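The 3D identity itself is easy to verify symbolically; here is a quick sketch with sympy (the test fields $\mathbf{A}$, $\mathbf{B}$ are arbitrary choices of mine):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
coords = (x, y, z)

def div(V):
    # ordinary 3D divergence
    return sum(sp.diff(V[i], v) for i, v in enumerate(coords))

def curl(V):
    # ordinary 3D curl
    return sp.Matrix([sp.diff(V[2], y) - sp.diff(V[1], z),
                      sp.diff(V[0], z) - sp.diff(V[2], x),
                      sp.diff(V[1], x) - sp.diff(V[0], y)])

# arbitrary smooth test fields (my choice)
A = sp.Matrix([y*z, x**2, sp.sin(x)])
B = sp.Matrix([z, x*y, sp.exp(y)])

lhs = div(A.cross(B))
rhs = curl(A).dot(B) - curl(B).dot(A)
print(sp.simplify(lhs - rhs))  # 0
```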
$$\nabla_\Gamma \times (\nabla_\Gamma \alpha) = [\nabla_\Gamma \cdot (\mathbf{n} \times \nabla_\Gamma \alpha)]\,\mathbf{n} = \big[(\nabla_\Gamma \times \mathbf{n})\cdot \nabla_\Gamma \alpha - (\nabla_\Gamma \times \nabla_\Gamma \alpha)\cdot \mathbf{n}\big]\,\mathbf{n}$$
We can apply the property $\nabla_\Gamma \times \mathbf{n} = [\nabla_\Gamma \cdot (\mathbf{n} \times \mathbf{n})]\,\mathbf{n} = \mathbf{0}$, which holds on any surface because $\mathbf{n} \times \mathbf{n} = \mathbf{0}$:
$$ \begin{aligned} \nabla_\Gamma \times (\nabla_\Gamma \alpha) &= [\,\mathbf{0} \cdot \nabla_\Gamma \alpha\,]\,\mathbf{n} - \big[\{[\nabla_\Gamma \cdot (\mathbf{n} \times \nabla_\Gamma \alpha)]\,\mathbf{n}\} \cdot \mathbf{n}\big]\,\mathbf{n} \\ &= \mathbf{0} - [\nabla_\Gamma \cdot (\mathbf{n} \times \nabla_\Gamma \alpha)]\,\mathbf{n} \\ &= -\,\nabla_\Gamma \times (\nabla_\Gamma \alpha) \end{aligned} $$
where we used $\mathbf{n} \cdot \mathbf{n} = 1$ in the second step and the definition of the surface curl in the last.
Since $\nabla_\Gamma \times (\nabla_\Gamma \alpha)$ equals its own negative, $\rightarrow \nabla_\Gamma \times (\nabla_\Gamma \alpha) = 0$.
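As a sanity check on this conclusion, here is a sympy sketch that applies the definitions above verbatim, taking $\Gamma$ to be a sphere (so $\mathbf{n} = \mathbf{x}/|\mathbf{x}|$) with an arbitrary test function $\alpha$ of my choosing:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
coords = (x, y, z)
X = sp.Matrix([x, y, z])
n = X / sp.sqrt(x**2 + y**2 + z**2)    # outward unit normal of the sphere through X

def grad(f):
    return sp.Matrix([sp.diff(f, v) for v in coords])

def surf_grad(f):
    g = grad(f)
    return g - n * n.dot(g)            # ∇_Γ f = ∇f − n(n·∇f)

def surf_div(V):
    # ∇_Γ·V = Σ_i (e_i − n n_i)·∂_i V  (only tangential derivatives of V enter)
    e = sp.eye(3)
    return sum((e.col(i) - n * n[i]).dot(sp.diff(V, v))
               for i, v in enumerate(coords))

def surf_curl(V):
    # ∇_Γ×V = [∇_Γ·(n×V)] n, the definition used above
    return surf_div(n.cross(V)) * n

alpha = x*y + z**2                     # arbitrary smooth test function (my choice)
result = surf_curl(surf_grad(alpha))

# evaluate at a sample point on the sphere of radius 3 (exact arithmetic)
val = result.subs({x: 1, y: 2, z: 2})
print(val.T)                           # Matrix([[0, 0, 0]])
```

Because $\mathbf{n} = \mathbf{x}/|\mathbf{x}|$ is the normal of every sphere $|\mathbf{x}| = r$ at once, the check is not tied to one particular radius.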
Any insight on this would be greatly appreciated. Thanks in advance!
This is valid. Another way of writing the surface operator is $$ \nabla_\Gamma = P_x(\nabla) := \sum_iP_x(e_i)\partial_i. $$ Here $P_x$ is the projection onto the tangent space of $\Gamma$ at the point $x$, and $x$ is the variable that $\nabla$ implicitly differentiates with respect to; however, the $x$ in $P_x$ is itself not differentiated, as you can see from the expansion in the standard basis $e_i$. From here on I will suppress the $x$ in $P_x$ and just write $P$. Note that $P(y) = y - n(n\cdot y)$; this is why we have your expression for the surface gradient, and we can similarly derive the expression for $\nabla_\Gamma\times$.
Your triple product identity relies on very little, really only the fact that $\nabla$ is a form of differentiation. The following is a proof: $$ \nabla\cdot(A\times B) \overset1= \dot\nabla\cdot(\dot A\times B) + \dot\nabla\cdot(A\times\dot B) \overset2= B\cdot(\dot\nabla\times A) + A\cdot(\dot B\times\dot\nabla) \overset3= B\cdot(\nabla\times A) - A\cdot(\nabla\times B). $$ $\nabla$ is usually given a differentiate-to-the-right convention; in (1) we suppress this and instead have $\dot\nabla$ differentiate only the dotted variable. This notation allows us an extremely general "product rule". Then in (2) we use the cyclic property of the triple product. Finally in (3) we anticommute the second cross product and reinstate the differentiate-to-the-right convention. I would urge you to follow the above manipulations in components $\nabla = \sum_ie_i\partial_i$; it should make it clear why the above is valid.
Now this proof applies directly without modification to $\nabla_\Gamma = P(\nabla)$: $$ P(\nabla)\cdot(A\times B) \overset1= P(\dot\nabla)\cdot(\dot A\times B) + P(\dot\nabla)\cdot(A\times\dot B) \overset2= B\cdot(P(\dot\nabla)\times A) + A\cdot(\dot B\times P(\dot\nabla)) \overset3= B\cdot(P(\nabla)\times A) - A\cdot(P(\nabla)\times B). $$
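For what it's worth, the projected identity is also easy to check symbolically. A sketch with sympy, taking $\Gamma$ to be a sphere so that $P = I - nn^T$ with $n = x/|x|$, and arbitrary polynomial test fields of my choosing:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
coords = (x, y, z)
X = sp.Matrix([x, y, z])
# tangential projection onto the sphere through X: P = I − n n^T = I − X X^T/|X|^2
P = sp.eye(3) - X * X.T / (x**2 + y**2 + z**2)

def proj_div(V):
    # P(∇)·V = Σ_i P(e_i)·∂_i V
    return sum(P.col(i).dot(sp.diff(V, v)) for i, v in enumerate(coords))

def proj_curl(V):
    # P(∇)×V = Σ_i P(e_i)×∂_i V
    out = sp.zeros(3, 1)
    for i, v in enumerate(coords):
        out += P.col(i).cross(sp.diff(V, v))
    return out

# arbitrary polynomial test fields (my choice)
A = sp.Matrix([y*z, x**2, x + z])
B = sp.Matrix([z**2, x*y, y])

lhs = proj_div(A.cross(B))
rhs = B.dot(proj_curl(A)) - A.dot(proj_curl(B))
print(sp.simplify(lhs - rhs))  # 0
```

Note that $P$ varies with the base point but, exactly as in the proof, it is never differentiated: in each term $P$ is evaluated at the same point while $\partial_i$ hits only the fields.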