If a linear map has the same matrix representation in every basis, show that $\phi = \lambda i$.


In the book Linear Algebra by Werner Greub, page 95, question 2:

Assume that $\phi$ is a linear transformation $E\to E$ having the same matrix relative to every basis $x_v$. Prove that $\phi = \lambda i$, where $\lambda$ is a scalar and $i$ is the identity map.

Let $A$ be the matrix representation of $\phi$ with respect to the basis $x_v$, let $B$ be its representation with respect to the basis $y_v$, and let $C$ be the basis transformation $x_v \to y_v$. I have derived that

$$AC = CA = CB = BC,$$ but after that I am stuck.

Actually, I don't know a method for showing the result, so I tried things to get a feeling for what is going on, but, as I said, it didn't go anywhere.

So how can we show this result? I would appreciate a hint, but if you give the answer directly, that is OK too.

Edit:

We are working with a given $\phi$ such that its matrix representation $M(\phi; x_v, x_u)$ is the same for every basis $x_v$.

There are 4 best solutions below


What happens if you have two basis $(e_1,e_2,\ldots,e_n)$ and $(-e_1,e_2,\ldots,e_n)$?
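To see where this hint leads, here is a small numerical sketch (my own illustration, not part of the original answer): conjugating by the change of basis $(e_1,e_2) \to (-e_1,e_2)$ flips the sign of the off-diagonal entries, so only matrices whose off-diagonal entries vanish in that position can look the same in both bases.

```python
import numpy as np

# Change-of-basis matrix for (e1, e2) -> (-e1, e2); note C is its own inverse.
C = np.diag([-1.0, 1.0])

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])        # a non-scalar matrix
B = np.linalg.inv(C) @ A @ C      # matrix of the same map in the new basis
print(np.allclose(A, B))          # False: the off-diagonal entries change sign

S = 5.0 * np.eye(2)               # a scalar matrix lambda * I
print(np.allclose(S, np.linalg.inv(C) @ S @ C))  # True: scalar matrices survive
```

Combining such sign flips with permutations of the basis vectors (as in the answers below) pins the matrix down to a scalar one.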


The following proof is valid over any field $K$ that has at least $3$ elements.

Assume that $\phi$ is not a scalar map. Then there is $x$ such that $x$ and $\phi(x)$ are linearly independent, so for every $\alpha\in K\setminus \{0\}$

$x,\alpha\phi(x),e_3,\cdots,e_n$ is a basis of $E$ (for a suitable completion $e_3,\dots,e_n$).

In such a basis, the first column of the matrix of $\phi$ is $[0,\dfrac{1}{\alpha},0,\cdots,0]^T$. Since $K$ has at least $3$ elements, two distinct nonzero choices of $\alpha$ yield different matrices, a contradiction.

EDIT 1. A solution valid over any field $K$.

Let $A=[a_{i,j}]$ be a representative of $\phi$ and let $(E_{i,j})$ be the canonical basis of $M_n(K)$. As Pierre-Yves Gaillard wrote, for every $P\in GL_n(K)$, $P^{-1}AP=A$, that is $PA=AP$.

Method 1. In particular, for every $k\not= l$, $A(I_n+E_{k,l})=(I_n+E_{k,l})A$, which implies that for every $k\not= l$, $a_{l,k}=0$ and $a_{k,k}=a_{l,l}$. Finally, $A$ is a scalar matrix.
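As a sanity check of Method 1 (my own sketch, not part of the original answer), the commutation condition $AE_{k,l}=E_{k,l}A$ for all $k\neq l$ can be tested numerically; a scalar matrix passes and a non-scalar diagonal matrix already fails:

```python
import numpy as np

def commutes_with_all_transvections(A):
    """Check A(I + E_kl) == (I + E_kl)A for all k != l,
    which reduces to A @ E_kl == E_kl @ A."""
    n = A.shape[0]
    for k in range(n):
        for l in range(n):
            if k == l:
                continue
            E = np.zeros((n, n))
            E[k, l] = 1.0
            if not np.allclose(A @ E, E @ A):
                return False
    return True

print(commutes_with_all_transvections(3.0 * np.eye(4)))                # True
print(commutes_with_all_transvections(np.diag([1.0, 2.0, 2.0, 2.0])))  # False
```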

EDIT 2. Method 2. We can also use the fact that (over any field) any matrix is the sum of two invertible matrices; cf. user1551's answer in

Real square matrix as a sum of two invertible matrices
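One standard way to get such a decomposition (my own illustration, shown here over $\mathbb{R}$; the linked answer covers general fields) is to write $M = (M - tI) + tI$ with $t$ nonzero and not an eigenvalue of $M$, so that both summands are invertible:

```python
import numpy as np

M = np.array([[0.0, 1.0],
              [0.0, 0.0]])        # a singular (even nilpotent) matrix

# t = 1 is nonzero and not an eigenvalue of M (its only eigenvalue is 0),
# so both M - t*I and t*I are invertible.
t = 1.0
P, Q = M - t * np.eye(2), t * np.eye(2)
print(np.linalg.det(P) != 0, np.linalg.det(Q) != 0)  # True True
print(np.allclose(P + Q, M))                         # True
```

Since $A$ commutes with every invertible matrix, it then commutes with every matrix, hence is scalar.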


Someone gave this answer when the question was first asked, but for some reason they deleted it. Since I used their hint, I'm giving it again.

Hint:

Consider the basis $x_v$ and its permutations.

Note that we are dealing with ordered bases.

Solution:

For arbitrary $\sigma$ and $v$, permute the basis so that $x_\sigma$ and $x_v$ each in turn become the first element of the ordered basis. Then

$$\phi (x_\sigma) = \sum_\lambda \beta_\sigma^\lambda x_\lambda$$

$$\phi (x_v) = \sum_u \alpha_v^u x_u,$$ so $\alpha_v^u = \beta_\sigma^u \quad \forall u$. Since $v$ and $\sigma$ were arbitrary, the matrix representation is $M(\phi) = (\gamma_{i,j})$, where $\gamma_{i,j} = \alpha$ for all $i,j$. Thus, this directly implies that $\phi(x) = \alpha x \quad \forall x$.

EDIT (By loup blanc). The above proof does not hold when $n=2$ and when a representative of $\phi$ is $A=\begin{pmatrix}0&1\\1&0\end{pmatrix}$. Indeed, there is only one non-trivial permutation of the elements of the basis, $\sigma=(1,2)$, and its associated permutation matrix is $A$ itself! Under this permutation, $A$ is transformed into $A^{-1}AA=A$; thus, there is no contradiction.
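This counterexample is easy to verify numerically (my own check, not part of the original edit): conjugating $A$ by the swap permutation matrix, which happens to equal $A$, leaves $A$ unchanged.

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
P = A                      # the permutation matrix of sigma = (1,2) is A itself
B = np.linalg.inv(P) @ A @ P
print(np.allclose(A, B))   # True: the only non-trivial permutation leaves A fixed
```

So permutations alone cannot rule out this $A$; an extra basis change (such as a sign flip) is needed.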


Let $V$ be a finite dimensional vector space over a field $K$, and let $a$ be an endomorphism of $V$ commuting with all automorphisms.

It suffices to show that $a$ is scalar.

Let $C$ be the set of endomorphisms of $V$ commuting with $a$.

Clearly $C$ is a linear subspace of $\operatorname{End}_K(V)$ containing the automorphisms.

If $b$ is a nilpotent endomorphism, then $\operatorname{id}_V+b$ is an automorphism, and $b=(\operatorname{id}_V+b)-\operatorname{id}_V$ is in $C$.

This implies successively that $C$ contains all the nilpotent endomorphisms, that $a$ preserves the kernel and the image of each nilpotent endomorphism, that $a$ preserves each linear subspace, and that any nonzero vector is an eigenvector (in particular $a$ is diagonalizable). Since the sum of two eigenvectors corresponding to different eigenvalues cannot be an eigenvector, we see that $a$ has exactly one eigenvalue.