Determinant of a tuple of vectors: is this a thing? If so, where can I learn more?


Let $k \leq n$ denote a pair of fixed but arbitrary natural numbers.

Definition 0. Write $\varphi$ for the unique $\mathbb{R}$-linear function $$\Lambda^k\mathbb{R}^n \rightarrow \mathbb{R}$$ such that if $e$ is an element of the standard basis for $\Lambda^k\mathbb{R}^n,$ then $\varphi(e) = 1.$

Definition 1. Given a sequence of elements of $\mathbb{R}^n$ with $k$-many terms, call it $x$, define that the determinant of $x$ is given as follows: $$\mathrm{det}(x) = \varphi(x_0 \wedge \ldots \wedge x_{k-1})$$
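Concretely, since $\varphi$ sends every standard basis $k$-vector to $1$, expanding $x_0 \wedge \ldots \wedge x_{k-1}$ in that basis shows that $\mathrm{det}(x)$ is the sum of all $k \times k$ minors of the $n \times k$ matrix whose columns are the $x_i$. A minimal numeric sketch (the function name is my own):

```python
from itertools import combinations
import numpy as np

def det_tuple(vectors):
    """Sum of all k-by-k minors of the n-by-k matrix whose columns
    are the given vectors; this equals phi(x_0 ^ ... ^ x_{k-1})
    with phi as in Definition 0."""
    A = np.column_stack(vectors)        # shape (n, k)
    n, k = A.shape
    return sum(np.linalg.det(A[list(rows), :])
               for rows in combinations(range(n), k))

# When k = n this recovers the ordinary determinant:
x = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
print(det_tuple(x))   # -2.0
```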

I imagine that this has the following geometric interpretation: $\mathrm{det}(x)$ is (probably) the signed $k$-area of the $k$-dimensional parallelepiped generated by the terms of $x$. If this actually works as expected, then it gives us a way to find the determinant of a non-square matrix by regarding the columns of that matrix as vectors. Note, however, that this generalized determinant cannot possibly satisfy the requirement that for all matrices $A$ and $B$ such that the composite $AB$ is well-defined, we have $\mathrm{det}(AB) = \mathrm{det}(A)\mathrm{det}(B).$ See here for a counterexample.
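The failure of multiplicativity is easy to witness numerically with the sum-of-maximal-minors formula above (a sketch with a counterexample of my own choosing, not the one linked):

```python
from itertools import combinations
import numpy as np

def gdet(A):
    """Generalized determinant of an n-by-k matrix (k <= n):
    the sum of all its k-by-k (maximal) minors, as in Definition 1."""
    A = np.asarray(A, dtype=float)
    n, k = A.shape
    return sum(np.linalg.det(A[list(rows), :])
               for rows in combinations(range(n), k))

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])            # square, det(A) = -1
B = np.array([[1.0],
              [0.0]])                 # gdet(B) = 1 + 0 = 1
print(gdet(A @ B), gdet(A) * gdet(B))   # 1.0 vs -1.0
```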

Anyway:

Question. Is this a thing? If so, where can I learn more?

Best answer:

Note that if $k = 1$ then your linear functional is $\varphi \colon \mathbb{R}^n \rightarrow \mathbb{R}$ given by $\varphi(x^1, \ldots, x^n) = x^1 + \ldots + x^n$, and it doesn't give the one-dimensional signed area (the length, in this case) of $(x^1, \ldots, x^n)$. In general, you can't expect to describe the "signed length" of a vector $x \in \mathbb{R}^n$ by a linear functional $\varphi \colon \mathbb{R}^n \rightarrow \mathbb{R}$, as any such functional has a kernel of dimension $\geq n - 1$.

However, there is a construction that generalizes the (absolute value of the) determinant in some sense and yields the unsigned $k$-area of the $k$-dimensional parallelepiped generated by the vectors $v_1, \ldots, v_k$. Let $V$ be a finite-dimensional vector space and endow $V$ with an inner product $\left< \cdot, \cdot \right>$ so that you can talk about lengths of vectors in $V$. The inner product $\left< \cdot, \cdot \right>$ extends naturally to an inner product on $\Lambda^k(V)$ defined on simple $k$-vectors by

$$ \left< v_1 \wedge \dots \wedge v_k, w_1 \wedge \dots \wedge w_k \right> := \det \left( \left< v_i, w_j \right> \right)_{i,j=1}^k $$

and extended bilinearly. The matrix $G(v_1, \dots, v_k) = ( \left< v_i, v_j \right>)_{i,j=1}^k$ is called the Gram matrix of $(v_1, \dots, v_k)$, and the norm $||v_1 \wedge \dots \wedge v_k|| = \sqrt{\det G(v_1, \dots, v_k)}$ gives the (unsigned) $k$-area of the $k$-dimensional parallelepiped generated by the vectors $v_1, \dots, v_k$. It is zero if and only if the vectors $v_1, \dots, v_k$ are linearly dependent (in which case $v_1 \wedge \dots \wedge v_k = 0$).
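For example, in $\mathbb{R}^3$ with $k = 2$ the Gram-determinant formula can be checked against the familiar cross-product formula for the area of a parallelogram (a quick sketch; the vectors are arbitrary):

```python
import numpy as np

v1 = np.array([1.0, 2.0, 0.0])
v2 = np.array([0.0, 1.0, 3.0])

# Gram matrix G(v1, v2) = (<v_i, v_j>), built entrywise
G = np.array([[v1 @ v1, v1 @ v2],
              [v2 @ v1, v2 @ v2]])
area = np.sqrt(np.linalg.det(G))

# In R^3 the area of the parallelogram is also ||v1 x v2||
print(area, np.linalg.norm(np.cross(v1, v2)))
```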

If $V = \mathbb{R}^n$ with the standard inner product and we treat the vectors $v_1, \ldots, v_k$ as the columns of a matrix $A \in M_{n \times k}(\mathbb{R})$, then $G(v_1, \ldots, v_k) = A^T A$ and $||v_1 \wedge \dots \wedge v_k||^2 = \det(A^T A)$. In particular, if $k = n$ then $||v_1 \wedge \dots \wedge v_n||^2 = \det(A^T A) = \det(A)^2$, so $||v_1 \wedge \dots \wedge v_n|| = |\det(A)|$.
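The square case $k = n$, where $\sqrt{\det(A^T A)} = |\det(A)|$, is easy to sanity-check on a random matrix (a minimal sketch; the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))      # square case: k = n = 4

lhs = np.sqrt(np.linalg.det(A.T @ A))   # Gram-determinant norm
rhs = abs(np.linalg.det(A))             # |det(A)|
print(np.isclose(lhs, rhs))             # True
```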

A construction of a different nature generalizing the determinant (including signs) is obtained from the $k$-th exterior power of a linear map $T \colon V \rightarrow W$, which is a linear map $\Lambda^k(T) \colon \Lambda^k(V) \rightarrow \Lambda^k(W)$. If $V = W$ and $k = n = \dim V$, then $\Lambda^n(V)$ is one-dimensional, so $\Lambda^n(T)$ is multiplication by a single scalar, and that scalar is precisely the determinant of $T$. If you apply this to a non-square matrix (interpreted as a linear map) $A \in M_{l \times n}(\mathbb{R})$, then the components of $\Lambda^k(A)$ with respect to the bases induced on $\Lambda^k(\mathbb{R}^n)$ and $\Lambda^k(\mathbb{R}^l)$ by the standard bases are the $k \times k$ minors of $A$. You can learn more about this from the lecture notes of Paul Garrett here.
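The matrix of $\Lambda^k(A)$ in the induced bases can be tabulated directly as the $k \times k$ minors (a sketch with a hypothetical helper name; rows and columns are indexed by increasing index sets in lexicographic order):

```python
from itertools import combinations
import numpy as np

def exterior_power(A, k):
    """Matrix of Lambda^k(A) in the induced bases: the entry at
    (rows, cols) is the minor det(A[rows, cols]), with increasing
    index sets ordered lexicographically."""
    A = np.asarray(A, dtype=float)
    l, n = A.shape
    row_sets = list(combinations(range(l), k))
    col_sets = list(combinations(range(n), k))
    return np.array([[np.linalg.det(A[np.ix_(r, c)])
                      for c in col_sets] for r in row_sets])

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
print(exterior_power(A, 2))   # the three 2x2 minors of A
```

Unlike the functional of Definition 1, this construction is functorial: $\Lambda^k(AB) = \Lambda^k(A)\Lambda^k(B)$ whenever the product is defined, which is one way to state the Cauchy–Binet formula.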