Finding matrix representation of linear transformation with respect to permuted basis


I'm struggling with understanding how I can figure out exactly what the following matrix representation looks like:

Let $V$ be an $n$-dimensional vector space. Let $\mathcal{B} = (b_1,\ldots,b_n)$ be an ordered basis for $V$. Now fix a permutation $\sigma$ of the set $\{1,\ldots, n\}$. Let $\mathcal{C} = (b_{\sigma(1)},b_{\sigma(2)},\ldots,b_{\sigma(n)})$, and let $T : V \to V$ be such that for each $i$, $T(b_i) = b_{\sigma(i)}$. Give $[T]_{\mathcal{C}}^{\mathcal{B}}$.

I think my main struggle is not quite understanding where I'm stuck. How do I know exactly where the ones and zeroes will be in $[T]_{\mathcal{C}}^{\mathcal{B}}$? Any hints or advice is appreciated. Thanks in advance!


$T : V \to V$ is the linear transformation that permutes the basis vectors of $\mathcal{B}$: it sends each $b_i$ to $b_{\sigma(i)}$.

Let $\sigma$ be a permutation of $\{1,\ldots,n\}$. With respect to $\mathcal{B}$ in both the domain and the codomain, $T$ is represented by the $n\times n$ permutation matrix associated to $\sigma$, call it $T_{\sigma}$, constructed by the rule $(T_{\sigma})_{(i,j)}=1 \iff i=\sigma(j)$ and $(T_{\sigma})_{(i,j)}=0$ otherwise. Column $j$ of $T_{\sigma}$ is then the standard basis vector $e_{\sigma(j)}$, so $T_{\sigma}\, e_j = e_{\sigma(j)}$, matching $T(b_j) = b_{\sigma(j)}$.

For the mixed representation $[T]_{\mathcal{C}}^{\mathcal{B}}$, column $j$ holds the coordinates of $T(b_j)$ relative to $\mathcal{C}$. But $T(b_j) = b_{\sigma(j)} = c_j$, whose coordinate vector relative to $\mathcal{C}$ is $e_j$. Hence every column of $[T]_{\mathcal{C}}^{\mathcal{B}}$ is $e_j$, i.e. $[T]_{\mathcal{C}}^{\mathcal{B}} = I_n$, the identity matrix.
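A quick numerical check may help make the ones and zeroes concrete. The sketch below (not from the original answer; plain Python, with indices $0,\ldots,n-1$ instead of $1,\ldots,n$) builds the permutation matrix for a sample 3-cycle $\sigma$ and verifies both the rule $T_{\sigma}\, e_j = e_{\sigma(j)}$ and the claim that the mixed-basis matrix is the identity:

```python
def perm_matrix(sigma):
    """n x n matrix P with P[i][j] = 1 iff i == sigma[j], so P applied to e_j gives e_{sigma[j]}."""
    n = len(sigma)
    return [[1 if i == sigma[j] else 0 for j in range(n)] for i in range(n)]

def mat_vec(P, v):
    """Ordinary matrix-vector product over lists of lists."""
    return [sum(P[i][j] * v[j] for j in range(len(v))) for i in range(len(P))]

def e(k, n):
    """Standard basis vector e_k in R^n (0-indexed)."""
    return [1 if i == k else 0 for i in range(n)]

# Sample 3-cycle: sigma(0) = 1, sigma(1) = 2, sigma(2) = 0.
sigma = [1, 2, 0]
n = len(sigma)
P = perm_matrix(sigma)

# Check T(b_j) = b_{sigma(j)}: applying P to e_j yields e_{sigma[j]}.
for j in range(n):
    assert mat_vec(P, e(j, n)) == e(sigma[j], n)

# Mixed-basis matrix [T]_C^B: column j is the coordinate vector of
# T(b_j) = c_j relative to C, which is e_j -- so the matrix is I_n.
M = [e(j, n) for j in range(n)]  # columns e_0, ..., e_{n-1}
assert M == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```

Here `perm_matrix` realizes the answer's rule $(T_{\sigma})_{(i,j)}=1 \iff i=\sigma(j)$; the loop confirms that this is exactly the condition needed for $T_{\sigma}$ to send each basis vector to its image under $\sigma$.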