I'm completely new to Einstein's summation notation/index notation and had a go at simplifying the following expression: $$\nabla\times(Ax)$$ for a $3\times 3$ matrix $A$ and a vector $x$.
My attempt:
I know
$$(Ax)_i=a_{ij}x_j$$
$$(\nabla\times v)_i=\epsilon_{ijk}\partial_jv_k$$
$$\partial_ix_j=\delta_{ij}$$
$$a_{ij}\delta_{jk}=a_{ik}$$
If we define a vector $c=Ax$ then $c_k=a_{kj}x_j$
Therefore
$$(\nabla\times(Ax))_i=\epsilon_{ijk}\partial_j(a_{kl}x_l)$$
Since the entries of $A$ are constant, $\partial_j$ passes through $a_{kl}$, so
$$=\epsilon_{ijk}a_{kl}(\partial_jx_l)$$
By applying the third equation
$$=\epsilon_{ijk}(a_{kl}\delta_{jl})$$
Now apply the fourth equation
$$=\epsilon_{ijk}a_{kj}$$
That's where I got stuck. If I haven't misunderstood anything, then since $j$ and $k$ are summation indices this term can be further simplified, but I don't know what to do next apart from brute-force enumeration. If you can lend a hand, thank you!
When you say ‘they commute’, you're effectively assuming that the matrix $ A $ is constant and only the vector $ x $ is varying. I'll assume that that's what you intended.
As @Arthur said in a comment, you can't always simplify indices away. In index notation, the final answer is $ \epsilon _ { i j k } a _ { k j } $; you can't simplify it any further. (Although if you'd like the indices to come in a consistent order, you can use the antisymmetry of $ \epsilon $ to make it $ - \epsilon _ { i k j } a _ { k j } $ or $ \epsilon _ { k i j } a _ { k j } $ and then relabel the indices to $ - \epsilon _ { i j k } a _ { j k } $ or $ \epsilon _ { i j k } a _ { i k } $ (where the free index is now $ j $ rather than $ i $), in case either of those looks nicer to you.)
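(If you'd like to convince yourself numerically that the relabelled form agrees with the original, here's a quick sketch in NumPy; the matrix is just a random one I made up for the check:)

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))  # an arbitrary constant matrix

# Levi-Civita symbol as a 3x3x3 array
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

original  = np.einsum('ijk,kj->i', eps, A)   # eps_{ijk} a_{kj}
relabeled = -np.einsum('ijk,jk->i', eps, A)  # -eps_{ijk} a_{jk}
print(np.allclose(original, relabeled))      # True
```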
Or if you want to translate the answer back into vector/matrix notation, then you can look through the known operations from matrices to vectors and see if you can find one that matches this. In this case, I don't recognize it. But that's the beauty of index notation: you don't have to worry about whether anybody has thought of your operation before; you have the notation for it regardless.
ETA: And for the record, I checked your calculation, and it's correct (assuming that $ A $ is a constant matrix and $ x $ is the vector whose components are the coordinates).
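(In case you want to replicate that check numerically, here's a rough sketch in NumPy. The matrix and the evaluation point are arbitrary, and the finite-difference Jacobian is exact up to rounding precisely because $Ax$ is linear:)

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))  # an arbitrary constant matrix

# Levi-Civita symbol as a 3x3x3 array
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

# The Jacobian of v(x) = A x is A itself (dv_k/dx_j = a_{kj}),
# so the calculated answer is (curl v)_i = eps_{ijk} a_{kj}:
claimed = np.einsum('ijk,kj->i', eps, A)

# Cross-check with central differences at an arbitrary point
# (exact for a linear map, up to rounding)
x0 = rng.standard_normal(3)
h = 1e-6
J = np.empty((3, 3))
for j in range(3):
    e = np.zeros(3)
    e[j] = h
    J[:, j] = (A @ (x0 + e) - A @ (x0 - e)) / (2 * h)
numeric = np.einsum('ijk,kj->i', eps, J)

print(np.allclose(numeric, claimed))  # True
```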
ETA again: I've thought of something that's similar to this operation that takes a matrix (in $ 3 $ dimensions) with components $ a _ { i j } $ to a vector with components $ \epsilon _ { i j k } a _ { j k } $. If you have two vectors with components $ v _ i $ and $ w _ i $, then their dot product is $ v _ i w _ i $ and their cross product has components $ \epsilon _ { i j k } v _ j w _ k $. But there's also the tensor product, a matrix with components $ v _ i w _ j $. This contains all of the information in both the dot and cross products; for example, the dot product is the trace of the tensor product. And how do you get the cross product from the tensor product? It's this mystery operation! So I still don't know what you call it or any symbol for it in vector algebra, but it does come up. (Although note this technicality: the answer to your question is actually the opposite of the operation that takes a tensor product to a cross product, because of the extra minus sign.)
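(A sketch of those relationships in NumPy, with random vectors; the sign conventions are as described above:)

```python
import numpy as np

rng = np.random.default_rng(0)
v, w = rng.standard_normal(3), rng.standard_normal(3)

# Levi-Civita symbol as a 3x3x3 array
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

T = np.outer(v, w)  # tensor product, T_{ij} = v_i w_j

# the dot product is the trace of the tensor product
print(np.isclose(np.trace(T), v @ w))                                # True
# the "mystery operation" eps_{ijk} T_{jk} recovers the cross product
print(np.allclose(np.einsum('ijk,jk->i', eps, T), np.cross(v, w)))   # True
# while the question's eps_{ijk} a_{kj} gives its opposite
print(np.allclose(np.einsum('ijk,kj->i', eps, T), -np.cross(v, w)))  # True
```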