From linear transformation to alternating linear transformation


I'm reading Kenneth Hoffman's Linear Algebra, Ed2.

In $\S5.6$, "Multilinear Functions", it discusses how to obtain an alternating form from a multilinear form.

The collection of all $r$-linear functions on $V$ will be denoted by $M^r(V)$.

Definition. Let $L$ be an $r$-linear form on a $K$-module $V$. We say that $L$ is alternating if $L(\alpha_1, \dots, \alpha_r) = 0$ whenever $\alpha_i = \alpha_j$ with $i \ne j$. We denote by $\Lambda^r(V)$ the collection of all alternating $r$-linear forms on $V$.
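The standard example of an alternating form is the $2 \times 2$ determinant viewed as a bilinear form in its two rows. A minimal numeric check of the defining condition (function name `det2` is my own):

```python
# The 2x2 determinant as a bilinear form in its rows: it is alternating,
# i.e. it vanishes whenever its two arguments are equal.
def det2(a, b):
    return a[0] * b[1] - a[1] * b[0]

for a in [(1, 2), (3, -5), (0, 7)]:
    assert det2(a, a) == 0   # equal arguments give 0
```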

There is a general method for associating an alternating form with a multilinear form. If $L$ is an $r$-linear form on a module $V$ and if $\sigma$ is a permutation of $\{1, \dots, r\}$, we obtain another $r$-linear function $L_\sigma$ by defining $$L_\sigma(\alpha_1, \dots, \alpha_r) = L(\alpha_{\sigma 1}, \dots, \alpha_{\sigma r}).$$ If $L$ happens to be alternating, then $L_\sigma = (\text{sgn }\sigma)L$. Now, for each $L$ in $M^r(V)$ we define a function $\pi_r L$ in $M^r(V)$ by (5-35) $$\pi_r L = \sum_\sigma(\text{sgn } \sigma) L_\sigma,$$ that is, (5-36) $$(\pi_r L) (\alpha_1, \dots, \alpha_r) = \sum_\sigma(\text{sgn } \sigma) L(\alpha_{\sigma 1}, \dots, \alpha_{\sigma r}).$$
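Formula (5-36) can be sketched numerically. The following is a minimal implementation, assuming vectors are tuples and $L$ is any Python function of $r$ vector arguments (`sign`, `pi`, and the example matrix `A` are names I chose):

```python
from itertools import permutations

def sign(perm):
    """Sign of a permutation of (0, ..., r-1), via its inversion count."""
    return (-1) ** sum(perm[i] > perm[j]
                       for i in range(len(perm))
                       for j in range(i + 1, len(perm)))

def pi(L, r):
    """Antisymmetrize L: return pi_r L as in (5-36)."""
    def piL(*alphas):
        return sum(sign(p) * L(*(alphas[i] for i in p))
                   for p in permutations(range(r)))
    return piL

# Example: the bilinear form L(x, y) = x^T A y with A = [[1, 2], [3, 4]].
A = [[1, 2], [3, 4]]
L = lambda x, y: sum(x[i] * A[i][j] * y[j] for i in range(2) for j in range(2))
pi2L = pi(L, 2)
```

For $r = 2$ the sum has just two terms, so `pi2L(x, y)` is simply `L(x, y) - L(y, x)`.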

Lemma. $\pi_r$ is a linear transformation from $M^r(V)$ into $\Lambda^r(V)$. If $L$ is in $\Lambda^r(V)$ then $\pi_r L = r! L$.

Proof. Let $\tau$ be any permutation of $\{1, . . . , r\}$. Then $$(\pi_r L)(\alpha_{\tau 1}, \dots, \alpha_{\tau r}) = \sum_\sigma(\text{sgn } \sigma) L(\alpha_{\tau \sigma 1}, \dots, \alpha_{\tau \sigma r}) = (\text{sgn } \tau) \sum_\sigma(\text{sgn }\tau \sigma) L(\alpha_{\tau \sigma 1}, \dots, \alpha_{\tau \sigma r}).$$

As $\sigma$ runs (once) over all permutations of $\{1, . . . , r\}$, so does $\tau \sigma$. Therefore, $$(\pi_r L)(\alpha_{\tau 1}, \dots, \alpha_{\tau r}) = (\text{sgn } \tau) (\pi_r L) (\alpha_1, \dots, \alpha_r).$$

Thus $\pi_r L$ is an alternating form.

If $L$ is in $\Lambda^r(V)$, then $L(\alpha_{\sigma 1}, \dots, \alpha_{\sigma r}) = (\text{sgn } \sigma)L(\alpha_1, \dots, \alpha_r)$ for each $\sigma$, hence $\pi_r L = r!L$.
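The second half of the lemma can also be checked numerically for $r = 2$, taking the $2 \times 2$ determinant as the alternating form (all names below are my own):

```python
# A quick numeric check that pi_r L = r! L when L is already alternating,
# for r = 2 and L = D, the 2x2 determinant D(x, y) = x[0]*y[1] - x[1]*y[0].
from itertools import permutations

def antisymmetrize(L, r):
    def sign(p):
        return (-1) ** sum(p[i] > p[j]
                           for i in range(len(p))
                           for j in range(i + 1, len(p)))
    return lambda *a: sum(sign(p) * L(*(a[i] for i in p))
                          for p in permutations(range(r)))

D = lambda x, y: x[0] * y[1] - x[1] * y[0]
piD = antisymmetrize(D, 2)

x, y = (2, 5), (7, 3)
# pi_2 D should equal 2! * D = 2 D on every pair of inputs.
assert piD(x, y) == 2 * D(x, y)
```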

This all looks fine, but I'm confused by an example:

Suppose I define $n=r=2$, and define $L$ as: $$L(\alpha_1, \alpha_1) = 1, \quad L(\alpha_1, \alpha_2) = 2, \quad L(\alpha_2, \alpha_1) = 3, \quad L(\alpha_2, \alpha_2) = 4 $$ So $L$ could be represented by the matrix $\begin{pmatrix} 1 & 2 \\ 3 & 4 \\ \end{pmatrix}$.

Then there are only two $\sigma$s: $\sigma_A = (1), \quad \sigma_B = (1,2)$. $L_{\sigma_A} = L$, and $L_{\sigma_B} = \begin{pmatrix} 4 & 3 \\ 2 & 1 \\ \end{pmatrix}$ as $L_{\sigma_B}(\alpha_1, \alpha_1) = L(\alpha_{\sigma_B 1}, \alpha_{\sigma_B 1}) = L(\alpha_2, \alpha_2) = 4$, $L_{\sigma_B}(\alpha_1, \alpha_2) = L(\alpha_{\sigma_B 1}, \alpha_{\sigma_B 2}) = L(\alpha_2, \alpha_1) = 3$, $L_{\sigma_B}(\alpha_2, \alpha_1) = L(\alpha_{\sigma_B 2}, \alpha_{\sigma_B 1}) = L(\alpha_1, \alpha_2) = 2$, $L_{\sigma_B}(\alpha_2, \alpha_2) = L(\alpha_{\sigma_B 2}, \alpha_{\sigma_B 2}) = L(\alpha_1, \alpha_1) = 1$.

Then $$\pi_2 L = \sum_\sigma (\text{sgn }\sigma) L_\sigma = (\text{sgn } \sigma_A) L_{\sigma_A} + (\text{sgn } \sigma_B) L_{\sigma_B} = L - L_{\sigma_B} = \begin{pmatrix} -3 & -1 \\ 1 & 3 \\ \end{pmatrix}.$$

However, if $\pi_2 L$ is alternating, by definition it requires $$(\pi_2L)(\alpha_1, \alpha_1) = 0.$$

The two conflict!

Now I'm lost. Why do the two conflict? Did I misunderstand something?

Best answer:

In the expression $L_\sigma(\alpha_1,\ldots,\alpha_r) = L(\alpha_{\sigma1},\ldots,\alpha_{\sigma r})$, the $\alpha_1,\ldots,\alpha_r$ denote the entries of the input, not the specific basis vectors you use in your example. In particular, the right-hand side $L(\alpha_{\sigma1},\ldots,\alpha_{\sigma r})$ still uses the same inputs you originally had; only their order has been permuted. So, when $n=r=2$ you get $$L_{\sigma_B}(\alpha_1,\alpha_2) = L(\alpha_2,\alpha_1),$$ which means that to get the value of $L_{\sigma_B}$ on some pair of inputs, all you do is evaluate $L$ on the same inputs with their order switched.

In your example, the $(1,1)$-entry in the corresponding matrix for $L$ comes from evaluating $L$ on a pair $(v,v)$, where $v$ is your first basis vector (do NOT call this $\alpha_1$), so that in the notation above $\alpha_1=v$ and $\alpha_2=v$, and thus $$L_{\sigma_B}(v,v) = L(v,v) = 1$$ is the $(1,1)$-entry in the matrix for $L_{\sigma_B}$. Similarly, the $(2,2)$-entry comes from evaluating at $(w,w)$, where $w$ (not $\alpha_2$) is your second basis vector, and switching the order of these two inputs still gives $(w,w)$.
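The correction above can be sketched numerically. Under the assumption that the question's $L$ is the bilinear form $L(x,y) = x^{T}\!A\,y$ with $A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$ (my encoding; the names below are mine too), $L_{\sigma_B}$ swaps the inputs, and the matrix of $\pi_2 L$ comes out as $A - A^{T}$, with zeros on the diagonal as the alternating condition demands:

```python
# L_{sigma_B} permutes the INPUTS of L, not the basis-vector labels.
A = [[1, 2], [3, 4]]

def L(x, y):
    return sum(x[i] * A[i][j] * y[j] for i in range(2) for j in range(2))

def L_sigmaB(x, y):
    return L(y, x)          # same inputs, order switched

def pi2L(x, y):
    return L(x, y) - L_sigmaB(x, y)

v, w = (1, 0), (0, 1)       # first and second basis vectors
# Diagonal entries come from equal inputs (v, v) and (w, w), so they vanish.
matrix = [[pi2L(a, b) for b in (v, w)] for a in (v, w)]
# matrix is A - A^T = [[0, -1], [1, 0]], not [[-3, -1], [1, 3]]
```

The $\begin{pmatrix} -3 & -1 \\ 1 & 3 \end{pmatrix}$ in the question arises precisely from treating $\alpha_1, \alpha_2$ as fixed basis vectors rather than as input slots.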