Orthogonal complement of S in a finite field $\mathbb F_q$

For the following set $S$ and corresponding finite field $\mathbb F_q$, find the $\mathbb F_q$-linear span $\left<S\right>$ and its orthogonal complement $\left<S\right>^\perp$.

$$S = \{(1,0,1), (1,1,1), (0,1,0)\}, q = 2$$

I thought that the dot product with a vector in the perpendicular space had to yield zero, but the only non-trivial solution I can find -- $(1,0,1)$ -- is already in $S$. Am I misunderstanding the definition of $\left<S\right>^\perp$?

In a vector space $V=F^n$ over a field $F$, it is perfectly possible for a vector to be "orthogonal to itself" in the sense that the usual symmetric bilinear form (= an abstraction of a dot product) $$ B:V\times V\to F, B((x_1,x_2,\ldots,x_n),(y_1,y_2,\ldots,y_n))=\sum_ix_iy_i $$ vanishes, i.e. $B(x,x)=0$ even though $x\neq0$. The case of real numbers, when $F\subset\mathbb{R}$, is the exception rather than the rule! An important exception, make no mistake, but an exception nevertheless :-)

Over a field of positive characteristic this happens all the time; over $\mathbb{F}_2$, every vector with an even number of nonzero coordinates is orthogonal to itself. As you discovered, $$ B((1,0,1),(1,0,1))=1+0+1=0 $$ in $V=\mathbb{F}_2^3$.
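A quick sanity check of that computation, with the dot product reduced mod $2$ (this is just the bilinear form $B$ above written out in Python):

```python
# Dot product over F_2: ordinary dot product, reduced mod 2.
def dot(x, y):
    return sum(a * b for a, b in zip(x, y)) % 2

v = (1, 0, 1)
print(dot(v, v))  # 1 + 0 + 1 = 2, which is 0 mod 2: v is orthogonal to itself
```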

Some rules familiar from linear algebra still work; some need modifications. As no nonzero vector is "orthogonal to the whole space" with respect to this bilinear form, we still get the result that if $U\subset V$ is a $k$-dimensional subspace, then $$ U^\perp=\{x\in V\mid B(x,u)=0\ \text{for all $u\in U$}\} $$ is of dimension $(n-k)$ (as in the real case). However, results like $V=U\oplus U^{\perp}$ that do hold in a vector space over the reals are no longer true.

For this reason I am hesitant to call $U^\perp$ "the orthogonal complement of $U$", as to me a complementary subspace $U'$ of $U$ should imply that $U\oplus U'=V$. It is, of course, possible that some authors feel differently. In coding theory, when $U\subseteq \mathbb{F}_2^n$ is a binary linear code, the space $U^\perp$ is often called the dual code. To people with a different background this may also be confusing, because the dual of a vector space more often refers to the space of linear functionals.
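Since $\mathbb{F}_2^3$ has only eight vectors, we can work out the question's example by brute force: enumerate the span $U=\left<S\right>$ and its perp, and watch both claims above in action ($\dim U^\perp = n - \dim U$, yet $U \oplus U^\perp \neq V$ because here $U^\perp$ sits inside $U$):

```python
from itertools import product

def dot(x, y):
    return sum(a * b for a, b in zip(x, y)) % 2

n = 3
V = list(product((0, 1), repeat=n))          # all 8 vectors of F_2^3
S = [(1, 0, 1), (1, 1, 1), (0, 1, 0)]

# Span of S: every F_2-linear combination of the vectors in S.
U = {tuple(sum(c * s[i] for c, s in zip(coeffs, S)) % 2 for i in range(n))
     for coeffs in product((0, 1), repeat=len(S))}

# U-perp: vectors orthogonal to every element of U.
U_perp = {x for x in V if all(dot(x, u) == 0 for u in U)}

print(sorted(U))       # 4 vectors, so dim U = 2  (note (1,1,1) = (1,0,1)+(0,1,0))
print(sorted(U_perp))  # {(0,0,0), (1,0,1)}: dim 1 = 3 - 2, but (1,0,1) also lies in U
```

So $\left<S\right>^\perp$ is the span of $(1,0,1)$, and the fact that it is contained in $\left<S\right>$ itself is exactly why no "complement" intuition from the real case applies.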

I will mention an extreme example. Some relatively famous error-correcting codes are self-dual. This means that the code $C\subseteq \mathbb{F}_2^n$ is equal to its own dual, $C^\perp=C$. In that case every codeword is "orthogonal" to every other codeword, and every non-codeword is non-orthogonal to at least one codeword. For example, the extended Hamming code of length $8$ is self-dual: its generator matrix is also its parity check matrix.
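This is also easy to verify by machine. The sketch below uses one common generator matrix for the $[8,4]$ extended Hamming code (conventions for the row layout vary between references, so treat this particular matrix as an assumption; any generator of the code gives the same conclusion). Every pair of rows is orthogonal, so $C\subseteq C^\perp$, and since $\dim C = 4 = 8/2$ the dimensions force $C^\perp = C$:

```python
from itertools import product

# One common generator matrix for the [8,4] extended Hamming code
# (identity block plus even-weight parity part; row order is a convention).
G = [
    (1, 0, 0, 0, 0, 1, 1, 1),
    (0, 1, 0, 0, 1, 0, 1, 1),
    (0, 0, 1, 0, 1, 1, 0, 1),
    (0, 0, 0, 1, 1, 1, 1, 0),
]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y)) % 2

# Self-orthogonality: every row pair (including each row with itself)
# has dot product 0 over F_2, so the code is contained in its dual.
assert all(dot(g, h) == 0 for g in G for h in G)

# The 2^4 = 16 linear combinations of the rows are all distinct
# (the identity block guarantees this), so dim C = 4 = n/2.
# Combined with C <= C-perp and dim C-perp = n - dim C, this gives C = C-perp.
C = {tuple(sum(c * g[i] for c, g in zip(coeffs, G)) % 2 for i in range(8))
     for coeffs in product((0, 1), repeat=4)}
print(len(C))  # 16
```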