Let $E$ be a vector space of dimension $n$ over a field $\Gamma$, and consider the space $L(E;E)$ of linear transformations of $E$.
Assume that $F$ is a function from $L(E;E)$ to $\Gamma$ satisfying $F(\phi \circ \psi) = F(\phi)F(\psi)$ for all $\phi, \psi \in L(E;E)$.
Prove that $F$ can be written in the form $F(\phi) = f(\det \phi)$ where $f: \Gamma \to \Gamma $ is a mapping such that $f(\lambda \mu) = f(\lambda)f(\mu)$.
The suggestion is: let $\{e_{i}\}$, $i = 1, \dots, n$, be a basis for $E$ and, for $\lambda \in \Gamma$, define the transformations $\psi_{ij}$ and $\varphi_{i}$ by:
$\psi_{ij}(e_{v}) = e_{v}$ if $v \neq i$, and $\psi_{ij}(e_{i}) = e_{i} + \lambda e_{j}$, where $i, j = 1, \dots, n$ with $i \neq j$.
And $\varphi_{i}(e_{v}) = e_{v}$ if $v \neq i$, and $\varphi_{i}(e_{i}) = \lambda e_{i}$, where $i = 1, \dots, n$.
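These two families are elementary matrices in the given basis. A quick computational sanity check (a sketch in sympy; the names `psi` and `phi`, the size `n = 3`, and the scalar `lam = 2` are my own choices) shows that each $\psi_{ij}$ has determinant $1$, each $\varphi_i$ has determinant $\lambda$, and $\psi_{ij}$ built with $-\lambda$ inverts $\psi_{ij}$ built with $\lambda$ (so $\psi_{ij}$ is its own inverse only in characteristic $2$):

```python
# Sanity check of the suggested transformations (a sketch; psi, phi,
# n = 3 and lam = 2 are my own choices, not part of the problem).
from sympy import Matrix, eye

n, lam = 3, 2

def psi(i, j, lam, n):
    # psi_ij: sends e_i to e_i + lam*e_j, fixes every other basis vector.
    M = eye(n)
    M[j, i] = lam  # column i picks up lam in row j
    return M

def phi(i, lam, n):
    # phi_i: sends e_i to lam*e_i, fixes every other basis vector.
    M = eye(n)
    M[i, i] = lam
    return M

P = psi(0, 1, lam, n)
D = phi(0, lam, n)
print(P.det(), D.det())                 # 1 2 : transvections have det 1, dilations det lam
print(psi(0, 1, -lam, n) * P == eye(n)) # True: -lam gives the inverse transvection
```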
The idea is to first prove that $F(\psi_{ij}) = 1$ and that $F(\varphi_{i})$ is independent of $i$, but I don't know how. I tried working with the basis, but it came to nothing. I also tried to prove that $\psi_{ij}$ is its own inverse and then apply the multiplicative property of $F$, and I tried using the representation matrix, but this didn't help either.
Any suggestions, please? Thanks for your help! Greetings from Colombia
This is just some of my thoughts.
Since the dimension is finite, we can represent $\phi$ and $\psi$ as matrices $A$ and $B$ and consider $F$ as a matrix function mapping from $\Gamma^{n^2}$ to $\Gamma$. Let $I$ be the identity matrix and $O$ the zero matrix.
Since $F(I) = F(I \cdot I) = F(I)^2$, we have $F(I) \in \{0, 1\}$; and if $F(I) = 0$, then $F(A) = F(AI) = F(A)F(I) = 0$ for every $A$. So in the nontrivial case $F(I) = 1$.
Since $F(O) = F(OA) = F(O)F(A)$ for any $A$, the nontrivial case is $F(O) = 0$ (otherwise $F(A) = 1$ for every $A$, and $F$ is a constant function).
Consider an invertible matrix $M$ and assume $F(M) \neq 0$. We then have $1 = F(I) = F(MM^{-1}) = F(M)F(M^{-1})$, and hence $F(M^{-1}) = 1/F(M)$.
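As a quick numerical illustration of $F(M^{-1}) = 1/F(M)$, take $F = \det$ as a stand-in multiplicative $F$ (the sample matrix is my own choice):

```python
# Check F(M) * F(M^{-1}) = 1 with F = det as an example multiplicative F.
from sympy import Matrix

F = lambda A: A.det()
M = Matrix([[2, 1], [1, 2]])   # invertible, det(M) = 3
print(F(M) * F(M.inv()))       # -> 1
```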
Now, consider the Jordan decomposition $M = PJP^{-1}$ of a matrix $M$ (this assumes the Jordan form exists over $\Gamma$, e.g. that $\Gamma$ is algebraically closed) and assume $F(P) \neq 0$. We have $F(M) = F(P)F(J)F(P^{-1}) = F(J)$, so the function $F$ only depends on the Jordan form of the matrix.
Let $D_J$ be the diagonal matrix consisting of the diagonal elements of $J$, and write $J = D_J + N$, where $N$ is nilpotent with $N^n = O$. Using this nilpotency one gets $F(J)^k = F(J^k) = F(D_J^k) = F(D_J)^k$ for $k > n$, and hence $F$ only depends on the diagonal elements of the Jordan form.
Consider a diagonal matrix $D = \operatorname{diag}(d_1, \dots, d_n)$. Let $D_1 = \operatorname{diag}(d_1, 1, \dots, 1)$, $D_2 = \operatorname{diag}(1, d_2, 1, \dots, 1)$, and so on. Note that $D = D_1 D_2 \cdots D_n$, and hence $F(D) = F(D_1)F(D_2)\cdots F(D_n)$. Each $D_i$ is similar via a permutation matrix to $\operatorname{diag}(d_i, 1, \dots, 1)$, so $F(D_i)$ depends only on $d_i$; writing $f(\lambda) := F(\operatorname{diag}(\lambda, 1, \dots, 1))$ and using $\operatorname{diag}(\lambda, 1, \dots, 1)\operatorname{diag}(\mu, 1, \dots, 1) = \operatorname{diag}(\lambda\mu, 1, \dots, 1)$, we get $F(D) = f(d_1)\cdots f(d_n) = f(d_1 d_2 \cdots d_n) = f(\det(D))$.
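The factorization $D = D_1 D_2 \cdots D_n$ can be checked with a concrete multiplicative choice, say $F(A) = \det(A)^2$, so that $f(x) = x^2$ (both choices and all names below are my own):

```python
# Illustrate D = D_1 D_2 ... D_n and F(D) = f(det D) with F(A) = det(A)**2.
from sympy import diag, eye

F = lambda A: A.det()**2
d = [2, 3, 5]
n = len(d)
D = diag(*d)

factors = []
for i in range(n):
    Di = eye(n)
    Di[i, i] = d[i]      # d_i in position i, ones elsewhere
    factors.append(Di)

prod = factors[0]
for Di in factors[1:]:
    prod = prod * Di

print(prod == D)                                              # True
print(F(D) == F(factors[0]) * F(factors[1]) * F(factors[2]))  # True
print(F(D) == D.det()**2)                                     # F(D) = f(det D)
```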
For any matrix $M = PJP^{-1}$ we have $F(M) = F(J) = F(D_J) = f(\det(D_J))$. We also know that $\det(M) = \det(J) = \det(D_J)$. Hence $F(M) = f(\det(M))$.
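This step can be sanity-checked through sympy's Jordan form, which returns $P, J$ with $M = PJP^{-1}$ (again with the sample choice $F(A) = \det(A)^2$ and an arbitrary matrix of my own):

```python
# Check F(M) = F(J) and det(M) = det(J) via the Jordan decomposition.
from sympy import Matrix

F = lambda A: A.det()**2
M = Matrix([[5, 4], [1, 2]])   # eigenvalues 6 and 1
P, J = M.jordan_form()         # M == P * J * P**(-1)

print(M == P * J * P.inv())    # True
print(F(M) == F(J))            # True
print(M.det() == J.det())      # True
```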
For the multiplicativity of $f$: since $\det(\operatorname{diag}(\lambda, 1, \dots, 1)) = \lambda$, we have $f(\lambda) = F(\operatorname{diag}(\lambda, 1, \dots, 1))$, and therefore $f(\lambda\mu) = F(\operatorname{diag}(\lambda\mu, 1, \dots, 1)) = F(\operatorname{diag}(\lambda, 1, \dots, 1))\,F(\operatorname{diag}(\mu, 1, \dots, 1)) = f(\lambda)f(\mu)$.
There is an issue when we assume $F(P) \neq 0$, but in fact the assumption cannot fail: if $F(P) = 0$ for some invertible $P$, then $1 = F(I) = F(PP^{-1}) = F(P)F(P^{-1}) = 0$, which is impossible in a field. Hence $F(P) \neq 0$ for every invertible $P$.