When the characteristic of the field is not two, we can always find an orthogonal basis.


Let $ V $ be an $n$-dimensional vector space over the finite field $\mathbb F_q$, with $ \operatorname{Char}\mathbb F_q\ne 2 $. Show that every symmetric bilinear form $B(\cdot,\cdot)$ on $V$ admits an orthogonal basis, where a basis $C=\{e_1,\dots,e_n\}$ is called orthogonal with respect to $B$ if $$B(e_i,e_j)=0\quad\forall\, i\ne j.$$

The above is stated in the Wikipedia article on symmetric bilinear forms, but without proof. I know that the statement is obvious when the field is $\mathbb R$, where we can apply the Gram-Schmidt process.

My questions are:

Why is this also true for $ \mathbb F_q $ when $\operatorname{Char}\mathbb F_q\ne 2$? Does the Gram-Schmidt process fail in this case, and why? And why do we need $ \operatorname{Char}\mathbb F_q\ne 2 $? Is there any counterexample when $ \operatorname{Char}\mathbb F_q=2 $?


Edit: Thanks to the answer below, I realize we cannot apply the Gram-Schmidt process to $ \mathbb R^n $ in general unless we additionally assume that $ B $ is positive definite, in which case we get a Euclidean space.

Best answer:

Gram-Schmidt doesn't always work here, not even in $\mathbb R^n$.

Presumably you mean to start with a basis $a_1, a_2, \dots, a_n$ and orthogonalize it. Then we set $e_1 = a_1$, and $$e_2 = a_2 - \frac{B(e_1, a_2)}{B(e_1, e_1)} e_1, $$and... this step can already go wrong. We didn't assume $B$ was positive definite, so it's plausible that $B(e_1, e_1) = B(a_1, a_1) = 0$.
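To see this failure concretely, here is a small sketch (the form and the variable names are illustrative, not from the post) where the very first Gram-Schmidt coefficient would divide by zero:

```python
# An indefinite symmetric form on R^2: B(v, w) = v1*w2 + v2*w1,
# i.e. Gram matrix [[0, 1], [1, 0]].
def B(v, w):
    return v[0] * w[1] + v[1] * w[0]

a1, a2 = (1, 0), (0, 1)   # a perfectly good basis of R^2
e1 = a1
# Gram-Schmidt wants  a2 - B(e1, a2) / B(e1, e1) * e1,  but:
print(B(e1, e1))          # prints 0 -> the projection coefficient is undefined
```

So the process breaks down on the standard basis even though $B$ is nondegenerate.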

The inductive argument instead starts by avoiding this case. Let $v$ be an element of $V$ such that $B(v,v) \ne 0$ and let $W$ be the orthogonal complement of $\text{span}(v)$: the set of all $w \in V$ such that $B(v,w) = 0$. Then $W$ is the kernel of the nonzero linear functional $w \mapsto B(v,w)$, so $\dim W = n-1$; and since $B(v,v) \ne 0$ forces $v \notin W$, we can decompose $V = \text{span}\{v\} \oplus W$. Now get an orthogonal basis for $W$ by induction and adjoin $v$.
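As a sketch of this induction, here is a brute-force implementation over $\mathbb F_p$ for an odd prime $p$ (the Gram-matrix representation and all helper names are my own, not from the post):

```python
def orthogonal_basis(M, p):
    """Return a basis of F_p^n orthogonal w.r.t. the symmetric bilinear
    form with Gram matrix M (entries mod p, p an odd prime), following
    the inductive argument above.  Illustrative sketch, not library code."""
    n = len(M)

    def B(v, w):
        return sum(v[i] * M[i][j] * w[j] for i in range(n) for j in range(n)) % p

    basis = []
    vecs = [[int(i == j) for j in range(n)] for i in range(n)]  # basis of V
    while vecs:
        # Step 1: find v in the current subspace with B(v, v) != 0.
        v = next((x for x in vecs if B(x, x) != 0), None)
        if v is None:
            # All B(x, x) = 0, so use the polarization identity:
            # B(x + y, x + y) = 2 B(x, y) != 0 whenever B(x, y) != 0 (char != 2).
            pair = next(((i, j) for i in range(len(vecs)) for j in range(len(vecs))
                         if B(vecs[i], vecs[j]) != 0), None)
            if pair is None:
                basis.extend(vecs)   # B vanishes here; any basis is orthogonal
                break
            i, j = pair
            vecs[i] = [(a + b) % p for a, b in zip(vecs[i], vecs[j])]
            v = vecs[i]              # replacing x by x + y keeps a basis
        # Step 2: project the remaining vectors into the complement W of v.
        basis.append(v)
        inv = pow(B(v, v), -1, p)    # modular inverse (Python 3.8+)
        vecs = [[(x[k] - B(v, x) * inv * v[k]) % p for k in range(n)]
                for x in vecs if x is not v]
    return basis

e1, e2 = orthogonal_basis([[0, 1], [1, 0]], 3)  # hyperbolic plane over F_3
```

Each pass peels off one anisotropic vector and projects the rest of the basis into its orthogonal complement, exactly mirroring the $V = \text{span}\{v\} \oplus W$ decomposition.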

It is the first step, finding $v$ such that $B(v,v) \ne 0$, that fails in characteristic $2$. For example, suppose that we are working in $\mathbb F_2^2$ and $$ B(v,w) = v_1w_2 + v_2 w_1 = \begin{bmatrix}v_1 & v_2\end{bmatrix} \begin{bmatrix}0 & 1 \\ 1 & 0\end{bmatrix} \begin{bmatrix}w_1 \\ w_2\end{bmatrix}. $$ Then $B(v,v) = 2v_1v_2 = 0$ for all $v \in \mathbb F_2^2$ (well, all four of them) and so the proof doesn't work. Moreover, there is no orthogonal basis here: each nonzero $v \in \mathbb F_2^2$ is orthogonal only to $0$ and to itself. (More generally, $B$ defined in the same way is a counterexample over any field of characteristic $2$.)
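The space is small enough to check this counterexample exhaustively; a quick sketch (names mine):

```python
from itertools import product

def B(v, w):                      # the form above, now over F_2
    return (v[0] * w[1] + v[1] * w[0]) % 2

vectors = list(product([0, 1], repeat=2))

# Every vector is isotropic: B(v, v) = 2*v1*v2 = 0 mod 2.
assert all(B(v, v) == 0 for v in vectors)

# No orthogonal basis: any two distinct nonzero vectors pair to 1,
# so two linearly independent vectors are never orthogonal.
nonzero = [v for v in vectors if v != (0, 0)]
assert all(B(v, w) == 1 for v in nonzero for w in nonzero if v != w)
print("no orthogonal basis over F_2")
```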

In any field of characteristic other than $2$, we can use the polarization identity $$ B(x+y,x+y) - B(x,x) - B(y,y) = 2B(x,y) $$ to find a $v$ with $B(v,v) \ne 0$. (The identity follows from bilinearity and symmetry: expand $B(x+y,x+y) = B(x,x) + 2B(x,y) + B(y,y)$.) Start with $x,y$ such that $B(x,y) \ne 0$; then $2B(x,y) \ne 0$, and so at least one of the three terms on the left-hand side is nonzero. So we can take $v=x$, $v=y$, or $v=x+y$ to get $B(v,v) \ne 0$.
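This search can be written as a three-line sketch (the function name and the $\mathbb F_5$ example are my own choices for illustration):

```python
def find_anisotropic(B, x, y, add):
    """Given B(x, y) != 0 in characteristic != 2, return v with B(v, v) != 0,
    using B(x+y, x+y) - B(x, x) - B(y, y) = 2 B(x, y)."""
    for v in (x, y, add(x, y)):
        if B(v, v) != 0:
            return v
    raise ValueError("impossible when B(x, y) != 0 and char != 2")

# Example over F_5 with Gram matrix [[0, 1], [1, 0]]:
p = 5
B = lambda v, w: (v[0] * w[1] + v[1] * w[0]) % p
add = lambda v, w: tuple((a + b) % p for a, b in zip(v, w))
v = find_anisotropic(B, (1, 0), (0, 1), add)
print(v, B(v, v))   # prints (1, 1) 2
```

Here both standard basis vectors are isotropic, so the identity hands us $v = x + y$ with $B(v,v) = 2 \ne 0$ in $\mathbb F_5$.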

Or, if there are no $x,y$ such that $B(x,y) \ne 0$, then $B$ is trivial and any basis is orthogonal.