In his *Algebra*, Artin mentions:
When we permute the entries of a vector $(x_1, x_2, \dots, x_n)^t$ according to a permutation $p$, the indices are permuted in the opposite way.
For example: $$ PX = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} x_2 \\ x_3 \\ x_1 \end{bmatrix} $$ permutes the indices by $1 \rightsquigarrow 2 \rightsquigarrow 3 \rightsquigarrow 1$, which then implies that the entries are permuted as $1 \rightsquigarrow 3 \rightsquigarrow 2 \rightsquigarrow 1$; this is clear when you compare the indices of the input vector with those of the output vector. The part I'm having trouble understanding is why this inversion occurs.
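To see the inversion concretely, here is a minimal numerical sketch (the variable names `p`, `P`, and `x` are my own, with 0-based indices): position $i$ of $PX$ receives $x_{p(i)}$, so the entry $x_j$ lands at position $p^{-1}(j)$.

```python
import numpy as np

# The permutation p sends 1 -> 2 -> 3 -> 1 on indices (0-based: 0 -> 1 -> 2 -> 0).
p = [1, 2, 0]                # p[i] is the image of index i

# Permutation matrix with P[i, p[i]] = 1, matching the example above.
P = np.zeros((3, 3), dtype=int)
for i, j in enumerate(p):
    P[i, j] = 1

x = np.array([10, 20, 30])   # stand-ins for x_1, x_2, x_3

# (P x)_i = x_{p(i)}: position i receives the entry whose index is p(i) ...
print(P @ x)                 # [20 30 10]

# ... so the entry x_j ends up at position p^{-1}(j): entries move by the inverse.
p_inv = [p.index(k) for k in range(3)]
y = np.empty(3, dtype=int)
for j in range(3):
    y[p_inv[j]] = x[j]       # carry x_j to position p^{-1}(j)
print(y)                     # [20 30 10] -- same vector as P @ x
```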
When I consider the action of the permutation on the entries, I interpret it as $p(x_1) = x_2 \implies x_1 \rightsquigarrow x_2$, $\ p(x_2) = x_3 \implies x_2 \rightsquigarrow x_3$, and $\ p(x_3) = x_1 \implies x_3 \rightsquigarrow x_1$, which is the same cycle as the permutation of the indices, not its inverse.
Is there an intuitive explanation? I have a feeling it comes down to a difference in perspective: one view imagines picking up the entry at a given spot and carrying it to a new position, while the other fixes a position and fetches the entry that is to replace the one currently there.
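That "carry vs. fetch" intuition can be sketched directly (the helper names `push` and `pull` are my own): carrying entries by $p$ and fetching entries by $p$ give different results, and carrying by $p^{-1}$ agrees with fetching by $p$, which is what $PX$ computes.

```python
def push(x, p):
    """'Pick up' the entry at position i and carry it to position p(i)."""
    y = [None] * len(x)
    for i in range(len(x)):
        y[p[i]] = x[i]
    return y

def pull(x, p):
    """Fix position i and fetch the entry that replaces it, namely x_{p(i)}."""
    return [x[p[i]] for i in range(len(x))]

p = [1, 2, 0]                    # 1 -> 2 -> 3 -> 1 in 1-based terms
x = ['x1', 'x2', 'x3']

print(push(x, p))                # ['x3', 'x1', 'x2']
print(pull(x, p))                # ['x2', 'x3', 'x1']  -- what P x computes

# Carrying by the inverse permutation reproduces the 'fetch' result:
p_inv = [p.index(k) for k in range(len(p))]
print(push(x, p_inv))            # ['x2', 'x3', 'x1']  -- same as pull(x, p)
```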