I might have worded the question incorrectly in the title, but I am trying to show that $\nabla^2(\nabla\times \vec A) = \nabla \times(\nabla^2 \vec A)$.
I am not sure if there is any physical significance to this statement; I think it is just practice with index notation. I think I am close to an answer, but I am not sure what I am missing. Before I show my attempt, I want to make one thing clear:
I use commas to denote differentiation. For example, I write $\partial A_i / \partial x_j$ as $A_{i,j}$, so the divergence $\nabla \cdot \vec A$ is $A_{i,i}$.
This is my attempt:
$$\nabla^2(\nabla\times \vec A) = \nabla^2(\epsilon_{kij} A_{j,i})\,\hat e_k = (\epsilon_{kij} A_{j,i,l,l})\,\hat e_k$$
At this point I want to rewrite $A_{j,i,l,l}$ as $A_{j,l,l,i}$, because then I can finish the question. However, I am not sure how to justify this. I know there is equality of mixed second-order partials if certain conditions are met; is there a more general version I can use? Or is there a different approach that works better? For some reason I feel like both sides might equal zero, but that's just a gut feeling. Any guidance would be appreciated.
Main result
We define the vector Laplacian by $${\nabla}^2 \mathbf A= \boldsymbol{\nabla} (\boldsymbol \nabla \boldsymbol \cdot \mathbf A) - \boldsymbol \nabla \times (\boldsymbol \nabla \times \mathbf A).$$ Then, \begin{align*} \nabla^2(\boldsymbol \nabla \times \mathbf A) &= \boldsymbol \nabla\Big(\boldsymbol \nabla \boldsymbol \cdot (\boldsymbol \nabla \times \mathbf A)\Big) - \boldsymbol \nabla \times \Big(\boldsymbol \nabla \times (\boldsymbol \nabla \times \mathbf A)\Big) \\ &= - \boldsymbol \nabla \times \Big(\boldsymbol \nabla \times (\boldsymbol \nabla \times \mathbf A)\Big), \end{align*} since the divergence of a curl is $0$. Also, \begin{align*} \boldsymbol \nabla \times (\nabla^2 \mathbf A) &= \boldsymbol \nabla \times \Big( \boldsymbol \nabla(\boldsymbol \nabla \boldsymbol \cdot \mathbf A) - \boldsymbol \nabla \times (\boldsymbol \nabla \times \mathbf A)\Big) \\ &= - \boldsymbol \nabla \times \Big(\boldsymbol \nabla \times (\boldsymbol \nabla \times \mathbf A)\Big), \end{align*} since the curl of a gradient is $\mathbf 0$.
Thus the proposition is shown.
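Not a proof, but as a quick sanity check one can verify the identity symbolically with SymPy on a sample field. The field below is an arbitrary choice, and the componentwise Laplacian agrees with the definition above only because the coordinates are Cartesian:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
X = (x, y, z)

# An arbitrary smooth test field (the identity holds for any sufficiently smooth field).
A = [x**2 * y, sp.sin(y) * z, sp.exp(x) * z**2]

def curl(F):
    return [sp.diff(F[2], y) - sp.diff(F[1], z),
            sp.diff(F[0], z) - sp.diff(F[2], x),
            sp.diff(F[1], x) - sp.diff(F[0], y)]

def vec_laplacian(F):
    # Componentwise Laplacian; equivalent to the identity-based
    # definition only in Cartesian coordinates.
    return [sum(sp.diff(c, v, 2) for v in X) for c in F]

lhs = vec_laplacian(curl(A))   # Laplacian of the curl
rhs = curl(vec_laplacian(A))   # curl of the Laplacian

print([sp.simplify(l - r) for l, r in zip(lhs, rhs)])  # [0, 0, 0]
```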
Additional proofs
The divergence of the curl of $\mathbf A$ is \begin{alignat*}{2} \boldsymbol \nabla \boldsymbol \cdot (\boldsymbol \nabla \times \mathbf A) &= \partial_k \varepsilon_{ijk} \partial_i A_j \\ &= -\varepsilon_{kji} \partial_k \partial_i A_j && \quad\text{since } \varepsilon_{ijk} \text{ is antisymmetric}\\ &= -\varepsilon_{kji} \partial_i \partial_k A_j && \quad \text{since mixed partial derivatives commute}\\ &= -\varepsilon_{ijk} \partial_k \partial_i A_j && \quad \text{renaming dummy indices } i \leftrightarrow k. \end{alignat*} The expression equals its own negative, so the divergence of a curl is $0$. (In short, this holds because the Levi-Civita symbol is antisymmetric while the partial derivatives are symmetric.)
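Again purely as a sanity check, the vanishing divergence can be confirmed symbolically; the test field here is an arbitrary choice:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# An arbitrary smooth test field (any twice-differentiable field works).
A = [x * y**2, z * sp.cos(x), x + y * z]

# curl A, written out componentwise in Cartesian coordinates.
curl_A = [sp.diff(A[2], y) - sp.diff(A[1], z),
          sp.diff(A[0], z) - sp.diff(A[2], x),
          sp.diff(A[1], x) - sp.diff(A[0], y)]

# Divergence of the curl.
div_curl = sum(sp.diff(c, v) for c, v in zip(curl_A, (x, y, z)))

print(sp.simplify(div_curl))  # 0
```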
The curl of the gradient of $\phi$ is \begin{alignat*}{2} \boldsymbol \nabla \times (\boldsymbol \nabla \phi) &= \varepsilon_{ijk} \partial_i \left[\boldsymbol \nabla \phi\right]_j \mathbf e_k \\ &= \varepsilon_{ijk} \partial_i \partial_j \phi \, \mathbf e_k. \end{alignat*} The result follows exactly as above: $\varepsilon_{ijk}$ is antisymmetric while the partial derivatives are symmetric, so the expression equals its own negative and vanishes.
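The curl-of-gradient identity admits the same kind of symbolic spot check; the scalar field below is an arbitrary choice:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# An arbitrary smooth scalar field (any twice-differentiable field works).
phi = sp.exp(x) * sp.sin(y) * z**2

# Gradient of phi.
grad = [sp.diff(phi, v) for v in (x, y, z)]

# Curl of the gradient, componentwise in Cartesian coordinates.
curl_grad = [sp.diff(grad[2], y) - sp.diff(grad[1], z),
             sp.diff(grad[0], z) - sp.diff(grad[2], x),
             sp.diff(grad[1], x) - sp.diff(grad[0], y)]

print([sp.simplify(c) for c in curl_grad])  # [0, 0, 0]
```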