Matrix of quadratic form (in Serre's general notion)?

I am currently reading Serre's book on arithmetic. In chapter four (page 27) he defines a general notion of a quadratic form as follows. Let $V$ be a module over a commutative ring $A$. A function $Q$ is called a quadratic form on $V$ if: \begin{align*} &1)\ Q(ax)=a^2Q(x) \textrm{ for } a \in A \textrm{ and } x \in V, \\ &2)\ \textrm{the function } (x,y) \mapsto Q(x+y)-Q(x)-Q(y) \textrm{ is a bilinear form.} \end{align*} He also defines a sort of product: \begin{align*} x \cdot y = \frac{1}{2}\bigl(Q(x+y)-Q(x)-Q(y)\bigr). \end{align*} Then he gives the matrix of a quadratic form, which is the matrix $A=(a_{ij})$ with $a_{ij}=e_i \cdot e_j$, for a basis $(e_i)_{1\leq i \leq n}$ of $V$.

Now I don't see why it is easy to see that this matrix is the matrix associated to $Q$, given this special product $\cdot$ that he defined. Can someone explain to me how this works?
There is 1 answer below.
The short answer is that these are related in exactly the same way as the "usual" inner product on $\mathbb{R}^n$ and the (identity) matrix corresponding to the form $x_1^2 + \ldots + x_n^2$.
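Indeed, if $A$ is the $n\times n$ identity matrix, then $Q(x) = x_1^2 + \ldots + x_n^2$ and $\frac{1}{2}\left[Q(x+y) - Q(x) - Q(y)\right] = x_1y_1 + \ldots + x_ny_n$ is exactly the standard dot product, whose matrix in the standard basis is again the identity.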
I can't recall exactly the assumptions in Serre's book on the module $V$, but let's assume that it's a free module over some unital commutative ring $R$, so that it admits an $R$-basis $\{e_1,\ldots,e_n\}$ and we may identify $V$ with $R^n$. The relationship between a quadratic form $Q$ on $V$ and its matrix $A = A_Q$ is that, for $x = (x_1,\ldots,x_n)^t \in R^n$ (identified with $\sum_k x_k e_k$, and with $x^t$ denoting the transpose of $x$), we have
$$ Q(x) = x^t A x. $$
If we write $A = (a_{i,j})$ where $a_{i,j} \in R$ for each pair $(i,j)$, then the above equation says that
\begin{align*} Q(x) &= (x_1,\ldots,x_n)\begin{pmatrix}a_{1,1}x_1 + a_{1,2}x_2 + \ldots + a_{1,n}x_n \\ \vdots \\ a_{n,1}x_1 + a_{n,2}x_2 + \ldots + a_{n,n}x_n\end{pmatrix} \\ &= a_{1,1}x_1^2 + a_{1,2}x_1x_2 + \ldots + a_{1,n}x_1x_n \\ &\,+ a_{2,1}x_1x_2 + a_{2,2}x_2^2 + \ldots + a_{2,n}x_2 x_n \\ &\,+ \ldots \\ &\,+ a_{n,1}x_1x_n + a_{n,2}x_2x_n + \ldots + a_{n,n}x_n^2 \\ &= \sum_{i=1}^n\sum_{j=1}^n a_{i,j}x_ix_j. \end{align*}
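For a concrete illustration, take $n = 2$ and a symmetric matrix $A = \begin{pmatrix} a & b \\ b & c\end{pmatrix}$; the formula above gives
$$ Q(x) = (x_1, x_2)\begin{pmatrix} a & b \\ b & c\end{pmatrix}\begin{pmatrix} x_1 \\ x_2\end{pmatrix} = a x_1^2 + 2b\, x_1x_2 + c\, x_2^2, $$
so the binary form $\alpha x_1^2 + \beta x_1x_2 + \gamma x_2^2$ has matrix $\begin{pmatrix} \alpha & \beta/2 \\ \beta/2 & \gamma\end{pmatrix}$, which is where the factor $\frac{1}{2}$ in the product $x\cdot y$ shows up.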
So, how does this relate to the product $x \cdot y = \frac{1}{2}[Q(x+y) - Q(x) - Q(y)]$ from the question? For convenience, I'll set $B(x,y) = x\cdot y$. Then we can see from the above derivation that
$$ B(x,y) = \frac 1 2\left[ \sum_{i,j} a_{i,j}(x_i+y_i)(x_j+y_j) - \sum_{i,j} a_{i,j}x_ix_j - \sum_{i,j} a_{i,j}y_iy_j \right], $$
and so, expanding $(x_i + y_i)(x_j + y_j)$ and simplifying (using $a_{i,j} = a_{j,i}$, which we may assume since the matrix we are after is the symmetric one with entries $e_i \cdot e_j$),
$$ B(x,y) = \sum_{i,j} a_{i,j}x_iy_j. $$
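In matrix notation this says $B(x,y) = x^t A y$; that is, $B$ is the bilinear form attached to the matrix $A$, and $Q(x) = B(x,x)$.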
We are now in a position to see why $a_{i,j} = B(e_i,e_j) = e_i\cdot e_j$. Notice that, under the identification of $V$ and $R^n$ above, $e_i$ corresponds to the vector with $1$ in its $i$-th entry and $0$ elsewhere. Denote by $\delta_{a,b}$ for $a,b \in \{1,\ldots,n\}$ the usual Kronecker delta $$ \delta_{a,b} = \begin{cases} 1 & \text{if $a = b$}\\ 0 &\text{if $a\neq b$}\end{cases} $$ (here $1$ and $0$ are taken to be in $R$) for convenience, so that, for example, $e_i$ is identified with the vector $(\delta_{i,1},\ldots,\delta_{i,n})$ in $R^n$. Then it follows from the simplified form of $B(x,y)$ above that
$$ B(e_i,e_j) = \sum_{r,s}a_{r,s}\delta_{i,r}\delta_{j,s} = a_{i,j} $$
since $\delta_{i,r}\delta_{j,s} = 0$ unless $r = i$ and $s = j$.
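If a numerical sanity check helps, here is a short NumPy sketch of my own (working over $\mathbb{R}$ with a randomly chosen symmetric matrix rather than over a general ring, and with $Q$ and $B$ named as above) that verifies both that $\frac{1}{2}[Q(x+y)-Q(x)-Q(y)] = x^tAy$ and that the matrix $(B(e_i,e_j))$ recovers $A$:

```python
import numpy as np

rng = np.random.default_rng(0)

# A randomly chosen symmetric matrix A = (a_ij), playing the role of (e_i . e_j).
n = 4
M = rng.integers(-5, 6, size=(n, n)).astype(float)
A = (M + M.T) / 2          # symmetrize, so that a_ij = a_ji

def Q(x):
    """The quadratic form Q(x) = x^t A x."""
    return x @ A @ x

def B(x, y):
    """The product from the question: B(x, y) = (Q(x+y) - Q(x) - Q(y)) / 2."""
    return 0.5 * (Q(x + y) - Q(x) - Q(y))

# Check B(x, y) = x^t A y for random vectors x, y.
x = rng.standard_normal(n)
y = rng.standard_normal(n)
assert np.isclose(B(x, y), x @ A @ y)

# Check that the matrix (B(e_i, e_j)) is A itself.
e = np.eye(n)              # e[i] is the i-th standard basis vector
recovered = np.array([[B(e[i], e[j]) for j in range(n)] for i in range(n)])
assert np.allclose(recovered, A)
print("B(e_i, e_j) recovers the matrix A:", np.allclose(recovered, A))
```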