An isogonal operator is the product of an orthogonal map and a homothety


Exercise $31$ in chapter $8$ of Shilov's Linear Algebra book states that if $A\colon \Bbb R^n \to \Bbb R^n$ is an isogonal operator (that is, $\langle x,y\rangle = 0 \implies \langle Ax,Ay\rangle = 0$, usual inner product), then $A$ is the product of an orthogonal map with a homothety.

He gives the following hint at the end: $A$ transforms the standard orthonormal basis $\{e_1,\cdots,e_n\}$ into an orthogonal basis $\{f_1'=\alpha_1f_1,\cdots,f_n'=\alpha_nf_n\}$, where each $f_i$ is a unit vector. Let $Q$ take $f_i$ into $e_i$, so that $Q$ is orthogonal. Thus $QA$ is diagonal and isogonal. If $\alpha_i \neq \alpha_j$, construct a pair of orthogonal vectors which are carried into non-orthogonal vectors by $QA$.
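As a sanity check on this hint, here is a small NumPy sketch (the concrete $A = \lambda O$ and all variable names are my own illustrative choices, not Shilov's) that builds the unit vectors $f_i$, the scalars $\alpha_i$, and the map $Q$, and confirms that $QA$ comes out diagonal:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a concrete isogonal A = lambda * O (O orthogonal, from a QR factorization).
lam = 2.5
O, _ = np.linalg.qr(rng.standard_normal((3, 3)))
A = lam * O

# Columns of A are the images f_i' = A e_i; here they are pairwise orthogonal.
alphas = np.linalg.norm(A, axis=0)  # the alpha_i
F = A / alphas                      # columns are the unit vectors f_i

# Q takes f_i to e_i, so Q = F^T (F is orthogonal: its columns are orthonormal).
Q = F.T
QA = Q @ A

assert np.allclose(QA, np.diag(alphas))  # QA is diagonal with entries alpha_i
```

In this toy example every $\alpha_i$ equals $\lambda$, which is exactly the statement the exercise asks us to prove in general.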

I have a few problems with that sketch.

  • He says that a homothety is of the form $Tx = \lambda x$ for all $x$ but does not exclude the possibility that $\lambda = 0$. If $A = 0$ the problem is trivial, but I could not prove that a non-zero isogonal $A$ must be injective. This means I don't really know that the $\{f_i\}$ are independent; in other words, I don't know why each $\alpha_i$ is non-zero.

  • When he says that $\alpha_i \neq \alpha_j$ enables us to construct a "bad" pair, the obvious choice is to take the diagonals: clearly $\langle e_i+e_j,e_i-e_j\rangle=0$, but $\langle QA(e_i+e_j),QA(e_i-e_j)\rangle = \alpha_i^2-\alpha_j^2$ could still be zero if $\alpha_i = -\alpha_j$.
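A quick numerical illustration of this concern, using a hypothetical diagonal operator with $\alpha_1 = -\alpha_2$ (the matrix is my own example): the diagonal pair detects $|\alpha_i| \neq |\alpha_j|$ but is blind to a pure sign flip.

```python
import numpy as np

D = np.diag([2.0, -2.0, 3.0])  # alpha = (2, -2, 3)
e = np.eye(3)

# alpha_1 = -alpha_2: the pair (e1+e2, e1-e2) detects nothing...
pair_12 = (D @ (e[0] + e[1])) @ (D @ (e[0] - e[1]))
# ...but |alpha_1| != |alpha_3| is detected by (e1+e3, e1-e3):
pair_13 = (D @ (e[0] + e[2])) @ (D @ (e[0] - e[2]))

print(pair_12)  # 0.0  = alpha_1^2 - alpha_2^2
print(pair_13)  # -5.0 = alpha_1^2 - alpha_3^2
```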

Lastly, I was curious to see whether this result holds for a pseudo-euclidean product on $\Bbb R^n_\nu$, say. I suspect lightlike vectors would screw everything up, but perhaps that discussion is better suited to another question.

Can someone help me dot the i's and cross the t's here? Thanks.


BEST ANSWER

Let me suggest an alternative derivation which, at the end, boils down to what has been suggested. If $A$ is the product of a homothety with factor $\lambda$ and an orthogonal operator $O$, we can write $A = \lambda O$. Then

$$ A^TA = (\lambda O)^T(\lambda O) = \lambda^2 O^T O = \lambda^2 I. $$

This suggests that given an isogonal operator $A$, by looking at $A^TA$, we should be able to extract the homothety factor $\lambda$ and then the orthogonal matrix $O$ so let us try to do that. Define $B = A^TA$ and let $x, y \in \mathbb{R}^n$ with $x \perp y$. Then $\left< x, y \right> = 0$ and so

$$ \left< Bx, y \right> = \left< A^TAx, y \right> = \left< Ax, Ay \right> = 0 $$

which shows that $Bx \in \left( \operatorname{span} \{ x \}^{\perp} \right)^{\perp} = \operatorname{span} \{ x \}$. This means that each vector $x \in \mathbb{R}^n$ is an eigenvector of $B$ (right now, possibly with a different eigenvalue). Now, if $x_1, x_2$ are linearly independent, write $Bx_i = \lambda_i x_i$. Then

$$ B(x_1 + x_2) = \lambda_1 x_1 + \lambda_2 x_2 = \mu (x_1 + x_2) $$

and since the $x_i$ are linearly independent, we see that we must have $\lambda_1 = \lambda_2 = \mu$ which shows that in fact, $B = \mu I$ for some $\mu \in \mathbb{R}$.

Now, note that $B = A^T A$ is a positive semi-definite operator and so $\mu \geq 0$. Set $\lambda = \sqrt{\mu}$. If $\lambda = 0$ then $A^T A = 0$ and so $A = 0$. If $\lambda > 0$, we can define $O = \frac{A}{\lambda}$ and then

$$ O^T O = \frac{1}{\lambda^2} A^T A = \frac{1}{\mu} \mu I = I$$

showing that $O$ is orthogonal and $A = \lambda O$.
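The recipe above is directly computable. Here is a short NumPy sketch (the concrete $\lambda$, $O$, and dimension are my own illustrative choices) that extracts the homothety factor and the orthogonal part from $B = A^T A$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Build a concrete A = lambda * O with O orthogonal.
lam = 0.7
O, _ = np.linalg.qr(rng.standard_normal((4, 4)))
A = lam * O

# B = A^T A should be mu * I with mu = lambda^2.
B = A.T @ A
mu = B[0, 0]
assert np.allclose(B, mu * np.eye(4))

# Recover the homothety factor and the orthogonal part.
lam_rec = np.sqrt(mu)
O_rec = A / lam_rec
assert np.allclose(O_rec.T @ O_rec, np.eye(4))  # O_rec is orthogonal
assert np.isclose(lam_rec, lam)
```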


Regarding the more general question, let $g \colon \mathbb{R}^n \times \mathbb{R}^n \rightarrow \mathbb{R}$ be a symmetric bilinear form and let $A \colon \mathbb{R}^n \rightarrow \mathbb{R}^n$ be an operator such that if $g(x,y) = 0$ then $g(Ax, Ay) = 0$. Note that if $g$ is indefinite (and in particular $n \geq 2$), we can always find a one-dimensional subspace $V \subseteq \mathbb{R}^n$ such that $g(x,x) = 0$ for all $x \in V$. But then any linear map $A$ whose image lies in $V$ satisfies the property above trivially, yet it is not of the form $\lambda O$ for a $g$-isometry $O$, because $\lambda O$ is either $0$ or has full rank.
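To make this counterexample concrete, here is a NumPy check (signature $(-,+)$ on $\mathbb{R}^2$ and the lightlike line $\operatorname{span}\{(1,1)\}$ are my own illustrative choices): a rank-one map onto a lightlike line vacuously preserves $g$-orthogonality.

```python
import numpy as np

# Pseudo-euclidean product on R^2 with signature (-, +): g(x, y) = -x0*y0 + x1*y1.
G = np.diag([-1.0, 1.0])
def g(x, y):
    return x @ G @ y

# A projects everything onto the lightlike line spanned by v = (1, 1):
v = np.array([1.0, 1.0])               # g(v, v) = 0
A = np.outer(v, np.array([1.0, 0.0]))  # A x = x0 * v, so A has rank one

# g(Ax, Ay) = x0 * y0 * g(v, v) = 0 for ALL x, y, so A trivially preserves
# g-orthogonality, yet A is neither 0 nor a multiple of a g-isometry.
x, y = np.array([3.0, 1.0]), np.array([-2.0, 5.0])
assert np.isclose(g(A @ x, A @ y), 0.0)
assert np.linalg.matrix_rank(A) == 1
```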

ANSWER

Just for kicks:

  • if $A \neq 0$, then $A$ is injective. If not, take $v_1 \neq 0$ (which we may assume has length $1$) such that $Av_1 = 0$, and complete $\{v_1\}$ to an orthonormal basis $\{v_1,\cdots,v_n\}$. Since $A \neq 0$, there is $i$ such that $Av_i \neq 0$. Well, $\langle v_1+v_i,v_1-v_i\rangle = 0$ but $\langle A(v_1+v_i),A(v_1-v_i)\rangle = -\|Av_i\|^2 \neq 0$, contradicting that $A$ is isogonal.

  • We can actually assume that $\alpha_i > 0$ for all $i$: otherwise we replace $f_i$ with $-f_i$, and if $\{f_1,\cdots,f_n\}$ is orthonormal, so is $\{\pm f_1,\cdots,\pm f_n\}$ for any choice of signs. I should have seen that.
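The injectivity argument in the first bullet can be checked numerically; here is a sketch with a hypothetical non-injective, non-zero $A$ (the diagonal matrix is my own example):

```python
import numpy as np

# A non-injective, non-zero A: kills e1, keeps e2 and e3.
A = np.diag([0.0, 1.0, 2.0])
e = np.eye(3)

v1, vi = e[0], e[1]   # A v1 = 0 while A vi != 0
x, y = v1 + vi, v1 - vi

assert np.isclose(x @ y, 0.0)  # an orthogonal pair...
val = (A @ x) @ (A @ y)        # ...carried to <A vi, -A vi> = -|A vi|^2 != 0
assert np.isclose(val, -np.linalg.norm(A @ vi) ** 2)
```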

And if we throw in the extra hypothesis that $A$ is injective, the result is true in $\Bbb R^n_\nu$ as well: the same proof works, we only have to pay more attention to the part where we prove that $\alpha_i = \alpha_j$ for all $i$ and $j$, breaking it into three cases: both $e_i$ and $e_j$ spacelike, both timelike, or one spacelike and the other timelike.