Matrix of quadratic form (in Serre's general notion)?

I am currently reading Serre's A Course in Arithmetic. In Chapter IV (page 27) he defines a general notion of quadratic form: let $V$ be a module over a commutative ring $A$. A function $Q : V \to A$ is called a quadratic form on $V$ if: \begin{align*} &1)\ Q(ax) = a^2 Q(x) \textrm{ for } a \in A \textrm{ and } x \in V, \\ &2)\ \textrm{the function } (x,y) \mapsto Q(x+y) - Q(x) - Q(y) \textrm{ is a bilinear form.} \end{align*} He also defines a sort of product: \begin{align*} x \cdot y = \frac{1}{2}\bigl(Q(x+y) - Q(x) - Q(y)\bigr). \end{align*} Then he defines the matrix of a quadratic form to be the matrix $A = (a_{ij})$ with $a_{ij} = e_i \cdot e_j$, for a basis $(e_i)_{1 \leq i \leq n}$ of $V$. Now I don't see why this is the matrix associated with $Q$, given this special product $\cdot$ that he defined. Can someone explain to me how this works?
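To make my confusion concrete, here is a small example I tried myself (my own, not from the book): take $V = \mathbb{Q}^2$ with standard basis $e_1, e_2$ and $Q(x_1, x_2) = x_1^2 + 3x_1x_2 + 5x_2^2$. Then $e_1 \cdot e_1 = Q(e_1) = 1$, $e_2 \cdot e_2 = Q(e_2) = 5$, and $e_1 \cdot e_2 = \frac{1}{2}\bigl(Q(e_1+e_2) - Q(e_1) - Q(e_2)\bigr) = \frac{1}{2}(9 - 1 - 5) = \frac{3}{2}$, so the matrix would be $\begin{pmatrix} 1 & 3/2 \\ 3/2 & 5 \end{pmatrix}$, but I don't see in general why $x^t$ times this matrix times $x$ recovers $Q(x)$.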

Best answer:

The short answer is that these are related in exactly the same way as the "usual" inner product on $\mathbb{R}^n$ is related to the (identity) matrix of the form $x_1^2 + \cdots + x_n^2$.

I can't recall exactly the assumptions in Serre's book on the module $V$, but let's assume that it's a free module over some unital commutative ring $R$, so that it admits an $R$-basis $\{e_1,\ldots,e_n\}$ and we may identify $V$ with $R^n$. The relationship between a quadratic form $Q$ on $V$ and its matrix $A = A_Q$ is that, for $x = (x_1,\ldots,x_n)^t \in R^n$ (identified with $\sum_k x_k e_k$, where $x^t$ denotes the transpose of $x$),

$$ Q(x) = x^t A x. $$
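For instance (a small illustration of my own, not from Serre): when $n = 2$ and $A = \begin{pmatrix} a & b \\ b & c \end{pmatrix}$ is symmetric, this reads

$$ Q(x_1, x_2) = (x_1, x_2)\begin{pmatrix} a & b \\ b & c \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = a x_1^2 + 2b x_1 x_2 + c x_2^2, $$

so the off-diagonal entry $b$ carries only half of the cross coefficient; this is where the factor $\frac{1}{2}$ in the product $x \cdot y$ comes from.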

If we write $A = (a_{i,j})$ where $a_{i,j} \in R$ for each pair $(i,j)$, then the above equation says that

\begin{align*} Q(x) &= (x_1,\ldots,x_n)\begin{pmatrix}a_{1,1}x_1 + a_{1,2}x_2 + \ldots + a_{1,n}x_n \\ \vdots \\ a_{n,1}x_1 + a_{n,2}x_2 + \ldots + a_{n,n}x_n\end{pmatrix} \\ &= a_{1,1}x_1^2 + a_{1,2}x_1x_2 + \ldots + a_{1,n}x_1x_n \\ &\,+ a_{2,1}x_2x_1 + a_{2,2}x_2^2 + \ldots + a_{2,n}x_2 x_n \\ &\,+ \ldots \\ &\,+ a_{n,1}x_nx_1 + a_{n,2}x_nx_2 + \ldots + a_{n,n}x_n^2 \\ &= \sum_{i=1}^n\sum_{j=1}^n a_{i,j}x_ix_j. \end{align*}

So, how does this relate to the inner product $x \cdot y = \frac{1}{2}[Q(x+y) - Q(x) - Q(y)]$? For convenience, I'll set $B(x,y) = x\cdot y$. Then we can see from the above derivation that

$$ B(x,y) = \frac 1 2\left[ \sum_{i,j} a_{i,j}(x_i+y_i)(x_j+y_j) - \sum_{i,j} a_{i,j}x_ix_j - \sum_{i,j} a_{i,j}y_iy_j \right], $$

and so, expanding $(x_i + y_i)(x_j + y_j)$ and simplifying (we may assume $A$ is symmetric, since replacing $A$ by $\frac{1}{2}(A + A^t)$ does not change $Q$; then the cross terms $\frac{1}{2}\sum_{i,j} a_{i,j}(x_iy_j + x_jy_i)$ collapse to a single sum),

$$ B(x,y) = \sum_{i,j} a_{i,j}x_iy_j. $$
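Equivalently, in matrix form this is

$$ B(x,y) = x^t A y, $$

i.e. $B$ is precisely the bilinear form whose matrix is $A$.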

We are now in a position to see why $a_{i,j} = B(e_i,e_j) = e_i\cdot e_j$. Notice that, under the identification of $V$ with $R^n$ above, $e_i$ corresponds to the vector with $1$ in its $i$-th entry and $0$ elsewhere. For convenience, denote by $\delta_{a,b}$, for $a,b \in \{1,\ldots,n\}$, the usual Kronecker delta $$ \delta_{a,b} = \begin{cases} 1 & \text{if $a = b$}\\ 0 &\text{if $a\neq b$}\end{cases} $$ (here $1$ and $0$ are taken to be in $R$), so that, for example, $e_i$ is identified with the vector $(\delta_{i,1},\ldots,\delta_{i,n})$ in $R^n$. Then it follows from the simplified form of $B(x,y)$ above that

$$ B(e_i,e_j) = \sum_{r,s}a_{r,s}\delta_{i,r}\delta_{j,s} = a_{i,j} $$

since $\delta_{i,r}\delta_{j,s} = 0$ unless $r = i$ and $s = j$.
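
If it helps, here is a quick computational sanity check of these identities (my own addition, in Python with sympy; the particular symmetric matrix is an arbitrary illustrative choice):

```python
# Sanity check of the identities above with sympy.
# The 2x2 symmetric matrix A is an arbitrary illustrative choice.
import sympy as sp

A = sp.Matrix([[1, 3], [3, 5]])          # symmetric matrix A = (a_ij)
x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2')
x = sp.Matrix([x1, x2])
y = sp.Matrix([y1, y2])

def Q(v):
    """Quadratic form Q(v) = v^t A v."""
    return (v.T * A * v)[0, 0]

# B(x, y) = (1/2)[Q(x+y) - Q(x) - Q(y)] agrees with x^t A y.
B = sp.Rational(1, 2) * (Q(x + y) - Q(x) - Q(y))
assert sp.simplify(B - (x.T * A * y)[0, 0]) == 0

# Evaluating B on basis vectors recovers the matrix entries a_ij.
e = [sp.Matrix([1, 0]), sp.Matrix([0, 1])]
for i in range(2):
    for j in range(2):
        Bij = sp.Rational(1, 2) * (Q(e[i] + e[j]) - Q(e[i]) - Q(e[j]))
        assert Bij == A[i, j]

print("B(x, y) =", sp.expand(B))  # x1*y1 + 3*x1*y2 + 3*x2*y1 + 5*x2*y2
```

Running it confirms that the polarization $\frac{1}{2}[Q(x+y) - Q(x) - Q(y)]$ agrees with $x^t A y$ and that evaluating it on basis vectors recovers the entries of $A$.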