Let $V$ be a vector space of dimension $n$ over a field $\Bbb{F}$. Let $Alt_k(V)$ be the space of alternating $k$-multilinear maps $V^k \rightarrow \Bbb{F}$.
My question boils down to this, I think (although I will provide more context later): is it possible to prove that for any $T \in Alt_n(V) \setminus \{ 0 \}$, if $\{x_1, \ldots, x_n \}$ is linearly independent, then $T(x_1, \ldots, x_n) \neq 0$, without recourse to the property that the usual determinant satisfies $\det(A) \neq 0$ if and only if $A$ is invertible?
If we can make recourse to this property, the proof goes as follows. Since $T \neq 0$, there exist $v_1, \ldots, v_n$ with $T(v_1, \ldots, v_n) \neq 0$. Then $B = \{ v_1, \ldots, v_n \}$ is linearly independent (if it were dependent, multilinearity and the alternating property would force $T(v_1, \ldots, v_n) = 0$), so it is a basis. Let $B' = \{ x_1, \ldots, x_n \}$ be another basis and $P$ the change of basis matrix from $B$ to $B'$, i.e. $x_i = \sum_{j=1}^n p_{j,i} v_j$. Then: \begin{align*} T(x_1, \ldots, x_n) &= \sum_{j_1, \ldots, j_n = 1}^n p_{j_1, 1} p_{j_2, 2} \cdots p_{j_n, n} T(v_{j_1}, v_{j_2}, \ldots, v_{j_n}) \\ &= \sum_{\sigma \in S_n} p_{\sigma(1), 1} \cdots p_{\sigma(n), n} \operatorname{sgn}(\sigma) T(v_1, \ldots, v_n) \\ &= \det(P) T(v_1, \ldots, v_n) \neq 0 \end{align*}
The reason I would prefer not to use the usual definition of the determinant is that I would like to define the determinant in a basis-independent way, but as far as I can see I need the above property of alternating $n$-multilinear maps. It goes as follows:
I am defining the determinant of a linear map $f: V \rightarrow V$ in the following way. Knowing that $\dim Alt_n(V) = 1$, we must have that $f^* T = c(f,T) T$ for any $T \in Alt_n(V)$, where $c(f,T)$ is a constant and $f^*T(x_1, \ldots, x_n) = T(f(x_1), \ldots, f(x_n))$ by definition.
Now, using the claim that $c(f,T)$ is in fact independent of the alternate map $T \in Alt_n(V)$, I can define the determinant as $\det(f) = c(f,T)$.
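At least in one simple case the claim is easy to verify directly (I include this just as a sanity check on the definition): for the scaling map $f = \lambda \, \mathrm{id}_V$, multilinearity gives \begin{align*} f^*T(x_1, \ldots, x_n) = T(\lambda x_1, \ldots, \lambda x_n) = \lambda^n \, T(x_1, \ldots, x_n), \end{align*} so $c(f,T) = \lambda^n$ for every $T \in Alt_n(V)$, matching the familiar $\det(\lambda I) = \lambda^n$.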
The problem is that I can't see how to prove this claim in general without recourse to a basis, the "usual" determinant, and the property that a matrix $A$ is invertible if and only if $\det A \neq 0$. If I assume these, the proof is easy. If $f$ is not invertible, then $f$ takes any basis to a linearly dependent set, so $f^* T = 0$ for any $T \in Alt_n(V)$. Now, assuming $f$ is invertible and taking $B = \{ x_1, \ldots, x_n \}$ a basis of $V$ and $A$ the matrix of $f$ in this basis, we have
\begin{align*} f^*T(x_1, \ldots, x_n) &= \sum_{j_1, \ldots, j_n} a_{j_1, 1} \cdots a_{j_n, n} T(x_{j_1}, \ldots, x_{j_n}) \\ &= \sum_{\sigma \in S_n} a_{\sigma(1), 1} \cdots a_{\sigma(n),n} \operatorname{sgn}(\sigma) T(x_1, \ldots, x_n) = \det(A) T(x_1, \ldots, x_n) \end{align*}
The problem is that, to finish this argument, I need to know that $T(x_1, \ldots, x_n) \neq 0$ if $\{x_1, \ldots, x_n \}$ is a basis.
But this feels like cheating, because we are already using the "usual" determinant and the fact that it does not depend on the basis in which we express the matrix.
To summarise, my questions are:
- Is it possible to prove that a nonzero $T \in Alt_n(V)$ vanishes on a tuple of arguments if and only if those arguments are linearly dependent, without a choice of basis and without assuming that the "usual" matrix determinant has the property that $\det(A) \neq 0$ iff $A$ is invertible?
- If the answer to the first question is "no", is it possible to prove that $f^*: Alt_n(V) \rightarrow Alt_n(V)$ has the form $f^*T = c(f) T$ without recourse to the property in the first question?
Let $V$ be an $n$-dimensional vector space and $\beta=\{x_1,\dots, x_n\}$ linearly independent (hence a basis). We’ll show that if $T(x_1,\dots, x_n)=0$, then $T=0$. Consider any vectors $v_1,\dots, v_n\in V$. Then, we can write them as $v_j=\sum_{i=1}^nc_{ij}x_i$. Then, \begin{align} T(v_1,\dots, v_n)&=\det (c_{ij})\,T(x_1,\dots, x_n)=0, \end{align} which shows $T=0$ identically, and thus proves the contrapositive of what you asked for.

Now, this may seem like cheating, especially with my second equal sign above, but really the fact that the prefactor turns out to be the determinant is irrelevant here. What is important is that $T$ is alternating, so \begin{align} T(v_1,\dots, v_n)&=T\left(\sum_{i_1=1}^nc_{i_1,1}x_{i_1},\dots, \sum_{i_n=1}^nc_{i_n,n}x_{i_n}\right)=\sum_{i_1,\dots, i_n=1}^nc_{i_1,1}\cdots c_{i_n,n} T(x_{i_1},\dots, x_{i_n}). \end{align} In this summation, due to the alternating nature of $T$, the only potentially non-zero terms are those for which the indices $(i_1,\dots, i_n)$ are all distinct, i.e. $(i_1,\dots,i_n)=(\sigma(1),\dots,\sigma(n))$ for some permutation $\sigma\in S_n$; permuting the arguments back into order turns each such term into a scalar multiple of $T(x_1,\dots, x_n)$. But this is $0$, so $T(v_1,\dots, v_n)$ itself is $0$. Note carefully that in this manner, one actually ‘derives’ the (what will turn out to be) determinant formula; I am not assuming it. The only thing used above is that $T$ is alternating (this is important; without it, the claim is false).
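To make the collapsing step concrete, here is the case $n=2$ written out in full (purely an illustration of the argument above): \begin{align} T(v_1,v_2)&=\sum_{i,j=1}^2 c_{i1}c_{j2}\,T(x_i,x_j)\\ &=c_{11}c_{22}\,T(x_1,x_2)+c_{21}c_{12}\,T(x_2,x_1)\\ &=(c_{11}c_{22}-c_{21}c_{12})\,T(x_1,x_2), \end{align} where the terms $T(x_1,x_1)$ and $T(x_2,x_2)$ vanish because $T$ is alternating, and $T(x_2,x_1)=-T(x_1,x_2)$. The prefactor is of course the $2\times 2$ determinant, but it was derived here, not assumed.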
So, what was shown above is that once a basis $\{x_1,\dots, x_n\}$ is fixed for $V$, then the (obviously linear) mapping $T\mapsto T(x_1,\dots, x_n)$ is injective from $\text{Alt}_n(V)\to\Bbb{F}$, so as a soft corollary we find that $\text{Alt}_n(V)$ has dimension at most $1$ (with extra work you know the space is actually $1$-dimensional, so this mapping is an isomorphism).
Anyway, all of that is irrelevant for actually defining determinants; all you really need to know is that if $\dim V=n$ then $\text{Alt}_n(V)$ is $1$-dimensional, and also the following easy lemma: if $X$ is a $1$-dimensional vector space over $\Bbb{F}$ and $\lambda:X\to X$ is linear, then there is a unique constant $c\in\Bbb{F}$ such that $\lambda=c\cdot\text{id}_X$.
This is saying that $\text{id}_X$ forms a basis for $\text{Hom}(X,X)$ (as expected since it is a non-zero element of a 1-dimensional vector space, but nevertheless, here’s a proof below).
To prove this, fix a non-zero vector $x\in X$. Then, $\lambda(x)\in X$ is some vector, so by 1-dimensionality of $X$, we can find some constant $c\in\Bbb{F}$ such that $\lambda(x)= c\cdot x$. For any vector $y\in X$, we can find some $a\in \Bbb{F}$ such that $y=ax$, so $\lambda(y)=\lambda(ax)=a\lambda(x)=acx=cy$, so $\lambda=c\cdot\text{id}_X$ (this shows existence of the constant). To prove uniqueness of the constant, note that if $c_1,c_2$ are two constants such that $\lambda=c_1\text{id}_X=c_2\text{id}_X$, then evaluating on any non-zero vector $x\in X$, we get $c_1x=c_2x$, so by non-zeroness of $x$, we get $c_1=c_2$. This proves uniqueness of the constant, so completes the lemma’s proof.
Finally, we put everything together. Given a linear map $f:V\to V$, we get a corresponding linear map $f^*:\text{Alt}_n(V)\to\text{Alt}_n(V)$, $T\mapsto f^*T$. This is a linear map between $1$-dimensional vector spaces, so by the lemma, there is a unique constant $c_f\in\Bbb{F}$ such that $f^*=c_f\cdot\text{id}_{\text{Alt}_n(V)}$. This unique constant $c_f$ is defined to be $\det f$.
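As a quick illustration of how this definition connects back to your original question: if $f$ is not injective, pick a non-zero $x_1\in\ker f$ and extend it to a basis $\{x_1,\dots,x_n\}$ of $V$. Then for any $T\in\text{Alt}_n(V)$, \begin{align} f^*T(x_1,\dots,x_n)=T(f(x_1),f(x_2),\dots,f(x_n))=T(0,f(x_2),\dots,f(x_n))=0 \end{align} by multilinearity, and since evaluation on a basis is injective on $\text{Alt}_n(V)$ (the first part of this answer), $f^*T=0$, i.e. $\det f=0$. No matrices were needed.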
With this definition, we can easily prove that $\det (f\circ g)=\det f\cdot \det g$, because for any linear $f,g:V\to V$ and $T\in \text{Alt}_n(V)$, we have \begin{align} (f\circ g)^*T=g^*(f^*T)=\det g\cdot (f^*T)=\det g\cdot (\det f\cdot T)=(\det f\cdot \det g)\cdot T. \end{align} The first equal sign comes from unwinding the definition of pullbacks, while the other equal signs are by the defining property of $\det f$ and $\det g$. Since this is true for all $T$, the number $\det f\cdot \det g$ satisfies the equality that $\det(f\circ g)$ is supposed to satisfy, so by uniqueness, they have to be equal.
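Two immediate consequences, just to illustrate how far this definition carries you: since $\text{id}_V^*T=T$ for all $T$, uniqueness gives $\det(\text{id}_V)=1$; and if $f$ is invertible, then by multiplicativity \begin{align} 1=\det(\text{id}_V)=\det(f\circ f^{-1})=\det f\cdot\det(f^{-1}), \end{align} so $\det f\neq 0$ and $\det(f^{-1})=(\det f)^{-1}$.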
Even more generally, if you had two vector spaces $V,W$ of the same dimension $n$, then a linear map $f:V\to W$ induces a linear map on the top exterior powers $\bigwedge^n(V)\to\bigwedge^n(W)$, and this map is sometimes called $\det f$ (i.e. in the fully general case, determinants are not scalars in the field, but rather linear maps between different $1$-dimensional vector spaces). In the special case that $V=W$, one can apply the lemma above to write this map as some scalar multiple of the identity, but without the assumption $V=W$, you cannot identify the linear map with a scalar. This is why you’ll often hear that you can’t give a basis-independent definition for determinants (as scalars) of linear maps $V\to W$, but only when $V=W$.
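Spelling out the induced map, which the remark above leaves implicit: on elementary wedges it is given by \begin{align} v_1\wedge\cdots\wedge v_n\;\longmapsto\; f(v_1)\wedge\cdots\wedge f(v_n), \end{align} extended linearly to all of $\bigwedge^n(V)$.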
(One final small remark: the map $\bigwedge^n(V)\to\bigwedge^n(V)$ introduced here is actually the dual of the map $f^*:\text{Alt}_n(V)\to\text{Alt}_n(V)$ introduced earlier, since $\text{Alt}_n(V)$ is precisely $\bigwedge^n(V^*)$; but a posteriori the determinant of a map and of its transpose/dual coincide, so both descriptions are equivalent.)