Determinant of linear transformation


Given a linear transformation $T:V\rightarrow V$ on a finite-dimensional vector space $V$, we define its determinant as $\det([T]_{\mathcal{B}})$, where $[T]_{\mathcal{B}}$ is the (square) matrix representing $T$ with respect to a basis $\mathcal{B}$. It is proven that this does not depend on the particular choice of the basis $\mathcal{B}$.

My question is:

Is there a similar definition of determinant for a linear transformation $T:V\rightarrow W$, where $V,W$ are finite-dimensional vector spaces with the same dimension?


There are 5 best solutions below


You can define it either

a. with respect to two fixed bases $B_1$ of $V$ and $B_2$ of $W$ or

b. with respect to an isomorphism $\varphi : V\to W$.

In the latter case, if $B=\{v_1,\ldots,v_n\}$ is a basis of $V$, then
$\varphi(B)=\{\varphi v_1,\ldots,\varphi v_n\}$ is a basis of $W$, and the determinant is independent of the choice of $B$, provided that each $Tv_i$ is expressed in terms of $\varphi(B)$.
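As a concrete illustration (a minimal NumPy sketch; the map `T`, the isomorphism `Phi`, and the bases are made-up coordinate-space stand-ins for the abstract $V$ and $W$), the determinant relative to $\varphi$ can be computed from any basis $B$ and comes out the same:

```python
import numpy as np

# Hypothetical demo: coordinate spaces play the roles of V and W,
# linked only by a chosen isomorphism phi.
T   = np.array([[2.0, 1.0], [0.0, 3.0]])   # the linear map V -> W
Phi = np.array([[1.0, 1.0], [0.0, 1.0]])   # a chosen isomorphism phi : V -> W

def det_wrt_phi(T, Phi, B):
    """Determinant of the matrix of T with respect to a basis B of V
    (given as columns) and the basis phi(B) of W."""
    M = np.linalg.solve(Phi @ B, T @ B)    # coordinates of T(b_i) in phi(B)
    return np.linalg.det(M)

B1 = np.eye(2)
B2 = np.array([[1.0, 2.0], [3.0, 4.0]])    # a different basis of V

print(det_wrt_phi(T, Phi, B1))             # ≈ 6.0
print(det_wrt_phi(T, Phi, B2))             # ≈ 6.0 -- same for any basis B
```

Both calls return $\det(\Phi^{-1}T)$, which is exactly why the choice of $B$ drops out while the choice of $\varphi$ does not.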


Yes, there is; this is usually covered early in a standard linear algebra course.

Let $T: V \to W$ and let $\mathcal{B}_1$ and $\mathcal{B}_2$ be bases of $V$ and $W$ respectively. Define $\det(T)=\det\big([T]_{\mathcal{B}_1}^{\mathcal{B}_2}\big)$: write the image of each element of $\mathcal{B}_1$ in terms of $\mathcal{B}_2$, and take the resulting coordinate vectors as the columns of the matrix $[T]_{\mathcal{B}_1}^{\mathcal{B}_2}$, just as you do for $T:V\to V$.

For more details, see Section $2.2$ of Linear Algebra by Friedberg, Insel and Spence.
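A sketch of the construction in NumPy (the map `T` and the bases `B1`, `B2` are invented for illustration): the columns of $[T]_{\mathcal{B}_1}^{\mathcal{B}_2}$ are the $\mathcal{B}_2$-coordinates of the images of the $\mathcal{B}_1$ vectors.

```python
import numpy as np

# Hypothetical example: T : R^3 -> R^3 given in standard coordinates.
T = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

B1 = np.eye(3)                    # basis of V (as columns)
B2 = np.array([[1.0, 0.0, 0.0],
               [1.0, 1.0, 0.0],
               [1.0, 1.0, 1.0]])  # basis of W (as columns)

# Column i of M holds the B2-coordinates of T applied to the i-th
# vector of B1 -- the construction described above.
M = np.linalg.solve(B2, T @ B1)
print(np.linalg.det(M))           # ≈ 3.0 here, but the value changes with B2
```

Note that, as the next answer points out, the value you get genuinely depends on the two bases chosen.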


I originally wrote this as a comment, but now I think it should maybe be an answer, so here goes.

I would argue that no (reasonable) such definition is possible. Admittedly, this is a bold claim, and maybe somebody could produce a definition I would be happy with. But my reason for the claim is that if you do the "natural" thing, i.e. write down a matrix for $T$ with respect to a basis $\mathcal{B}_1$ of $V$ and a basis $\mathcal{B}_2$ of $W$ and then take its determinant, then the answer depends on these choices. Thus what you have defined is not a property of the map $T$.

If you fix an isomorphism $\varphi\colon V\to W$, then you could take the determinant of $(T,\varphi)$ by picking a basis $\mathcal{B}$ for $V$ and taking the determinant of the matrix of $T$ with respect to $\mathcal{B}$ and $\varphi(\mathcal{B})$, as Yiorgos suggests. This doesn't depend on $\mathcal{B}$, for the same reason as in the $V\to V$ case, but it does depend on $\varphi$. In fact, this is essentially what you do in the $V=W$ case, except that there is then a canonical choice of $\varphi$, namely the identity map on $V$. For two non-equal vector spaces of the same dimension, there is no such preferred isomorphism.
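The dependence on the choices is easy to see numerically (a small NumPy sketch with made-up bases): even for the identity map, rescaling one basis vector of $W$ changes the "determinant".

```python
import numpy as np

T = np.eye(2)   # the identity map, in standard coordinates

def det_in_bases(T, B_V, B_W):
    # determinant of the matrix of T w.r.t. basis B_V of V and B_W of W
    # (bases are given as matrix columns)
    return np.linalg.det(np.linalg.solve(B_W, T @ B_V))

B = np.eye(2)
C = np.array([[2.0, 0.0], [0.0, 1.0]])  # rescale one basis vector of W

print(det_in_bases(T, B, B))  # ≈ 1.0
print(det_in_bases(T, B, C))  # ≈ 0.5 -- same map, different "determinant"
```

So the number obtained is a property of the triple (map, basis of $V$, basis of $W$), not of the map alone.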


Just sharing some thoughts. The determinant of a map $T: V\rightarrow V$ is the (signed) volume of the parallelepiped that is the image of the unit $n$-cube. (Assume everything here is finite-dimensional and takes place in Euclidean vector spaces, up to isomorphism.) Now consider a map $T: V\rightarrow W$ with $\dim{V}<\dim{W}$: the image of a unit cube in $V$ is a $\dim{V}$-dimensional "sub-"parallelepiped in $W$ and thus has volume $0$.

For $\dim{V}>\dim{W}$, the images of the $\dim{V}$ basis vectors all land in $W$ and so must be linearly dependent. If exactly $\dim{W}$ of these images are linearly independent and all the others are $0$, then there is a positive volume. Otherwise the volume is either zero (the images do not span $W$) or cannot be defined (the images span $W$, but some nonzero image is a linear combination of the others; think of $\{(0, 1), (1, 0), (1, 1)\}$ in $\mathbb{R}^2$), because no parallelepiped is determined by such a set. I would say that no determinant can be defined in this case: the images of the basis vectors of $V$ must be linearly dependent, and even though in the first case there is possibly a "volume", one cannot simply ignore the basis vectors mapped to zero.

So my conclusion: talking about the "determinant" of such a map may not be very interesting, because it is either zero or undefined.

With $\dim{V}=\dim{W}$, I propose that one can write the matrix representation with respect to two orthonormal bases and take the determinant of this matrix. (This pins the determinant down only up to sign, since a change of orthonormal basis is an orthogonal map with determinant $\pm 1$.)
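A quick numerical check of this proposal (a NumPy sketch; the map `T` is invented, and random orthonormal bases are drawn via QR factorization): the absolute value of the resulting determinant does not depend on which orthonormal bases are chosen.

```python
import numpy as np

rng = np.random.default_rng(0)
T = np.array([[2.0, 1.0], [1.0, 3.0]])  # det T = 5 in standard coordinates

def random_orthonormal(n):
    # QR of a random matrix yields an orthogonal Q, i.e. an orthonormal basis
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return Q

for _ in range(3):
    Q_V, Q_W = random_orthonormal(2), random_orthonormal(2)
    d = np.linalg.det(np.linalg.solve(Q_W, T @ Q_V))
    print(abs(d))  # ≈ 5.0 every time; the sign, however, can flip
```

Only $|\det|$ survives: the sign depends on the orientations of the two bases.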


$ \newcommand\Ext{{\textstyle\bigwedge}} \newcommand\MVects[1]{\mathop{\textstyle\bigwedge^{\!#1}}} $Messing with bases as in the other answers is bound to be unsatisfactory; you can discover that certain pairs of bases reproduce the same definition of determinant as other pairs, but some do not. Why?

This is very clear if we use the exterior algebras $\Ext V$ and $\Ext W$. Algebraically, the exterior algebra of $V$ is the associative algebra generated by $V$ subject only to the relations $v\wedge v = 0$ for all $v \in V$ (where the product is traditionally notated with $\wedge$). Geometrically, it is intimately related to the subspaces of $V$; in particular, we can naturally identify $X = v_1\wedge\dotsb\wedge v_k$ with the span of those vectors when they are linearly independent, and if $X = 0$ then they are linearly dependent. Such a product of vectors is called a $k$-blade, and sums of $k$-blades are called $k$-vectors, the set of which will be denoted $\MVects kV$. The entire algebra is naturally graded on $k$-vectors, so that $$ \Ext V = \MVects 0V \oplus \MVects 1V \oplus \dotsb \oplus \MVects nV $$ where $n$ is the dimension of $V$. Here $\MVects 0V$ is simply the field of scalars of $V$ and $\MVects 1V = V$.

A key property (in fact a defining property) of the exterior algebra is that every compatible linear transformation from $V$ extends uniquely to an algebra homomorphism from $\Ext V$. More precisely, if $A$ is an associative algebra and $f : V \to A$ is linear and satisfies $f(v)^2 = 0$ for all $v$, then $f$ extends uniquely to a homomorphism $\Ext V \to A$. A particularly important case is $A = \Ext V$; since $V \subset \Ext V$, every linear transformation $f : V \to V$ extends to an endomorphism of $\Ext V$ called the outermorphism of $f$. We will use the same symbol for a transformation and its outermorphism. The action of an outermorphism on a blade is intimately related to the action of the underlying linear transformation on the corresponding subspace; indeed $$ f(v_1\wedge\dotsb\wedge v_k) = f(v_1)\wedge\dotsb\wedge f(v_k) $$ so it is just like applying $f$ to each vector of the corresponding subspace.

Now notice that $\MVects nV$ is necessarily a one-dimensional space, corresponding to the fact that $V$ has exactly one $n$-dimensional subspace (itself). Elements of $\MVects nV$ are often called pseudoscalars, and of course all pseudoscalars are blades. What this means, though, is that $f(I)$ must be a multiple of $I$ for any pseudoscalar $I$, and that scale factor is the same for every $I$. This is the determinant of $f$. In symbols, $$ f(I) = (\det f)I\quad\forall I\in\MVects nV. $$ This corresponds directly to the conception of the determinant as the factor by which $f$ scales volumes.


It is now almost trivial to extend the above to linear transformations $f : V \to W$. Because $V$ and $W$ are different spaces, we cannot use the same $I$ on both sides of the equation above, and there is no unique "determinant". Instead, we must independently choose $I \in \MVects nV$ and $J \in \MVects nW$. Then we define the corresponding $(I,J)$-determinant as the unique scalar such that $$ f(I) = (\det{}_{I,J}f)J. $$ This shows us that all such determinants are simply fixed multiples of each other, something which is not immediately obvious from the formulation in terms of bases. It also automatically tells us everything about the basis formulation:

  • Let $B_1, B_2$ be ordered bases of $V$ and $\varphi_V$ be the linear map taking $B_1$ to $B_2$ and preserving order. Similarly, let $C_1, C_2$ be ordered bases of $W$ and define $\varphi_W$ analogously. Then the pair $(B_1, C_1)$ induces the same $V\to W$ determinant as $(B_2, C_2)$ if and only if $\det\varphi_V = \det\varphi_W$.

The proof follows simply by producing pseudoscalars from each basis by wedging their vectors in order.
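This can be made computational (a NumPy sketch; the map `F` and the pseudoscalar choices are invented for illustration). In coordinates, the pseudoscalar $v_1\wedge\dotsb\wedge v_n$ has coefficient $\det[v_1\ \dots\ v_n]$ relative to $e_1\wedge\dotsb\wedge e_n$, which gives a direct formula for $\det_{I,J}f$ and lets us check the criterion above:

```python
import numpy as np

def blade_coeff(vectors):
    # coefficient of v1 ^ ... ^ vn relative to e1 ^ ... ^ en,
    # where the blade's vectors are the columns of `vectors`
    return np.linalg.det(vectors)

def det_IJ(F, I_vecs, J_vecs):
    # f(I) = (det_{I,J} f) J  =>  det_{I,J} f = coeff(f(I)) / coeff(J)
    return blade_coeff(F @ I_vecs) / blade_coeff(J_vecs)

F  = np.array([[2.0, 0.0], [1.0, 1.0]])
B1 = np.eye(2)
C1 = np.eye(2)
# Scale both pseudoscalars by the same factor (det phi_V = det phi_W = 4):
B2 = 2.0 * B1
C2 = np.array([[4.0, 0.0], [0.0, 1.0]])

print(det_IJ(F, B1, C1))  # ≈ 2.0
print(det_IJ(F, B2, C2))  # ≈ 2.0 -- unchanged, as the criterion predicts
```

Rescaling only one of the two pseudoscalars would instead multiply the answer by a fixed nonzero factor, in line with all $(I,J)$-determinants being multiples of one another.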