Find a new quadratic form through change of variables


Consider a quadratic form $$f(x) = \sum_{i=1}^{n}a_i x_i ^2$$ where $f$ represents $0$ non-trivially. I need to show that, by a linear change of variables, I can obtain the quadratic form $$f(y) = \sum_{i=1}^{n-2}b_i y_i ^2 - 2 b_ny_{n-1}y_n$$

Now, as suggested by Gerry Myerson in Find a change in variable that will reduce the quadratic form to a sum of squares, I put $x = Py$ for some $n \times n$ matrix $P$. Then $$f(x)=f(Py)=(Py)^tAPy=y^t(P^tAP)y$$ My question is: how do I find a matrix $P$ such that I get the $f(y)$ above?


I do not have a complete answer, just some ideas. If we take: $$ B = \begin{bmatrix} b_{1} & 0 & 0 & 0 & 0 \\ 0 & \ddots & \vdots & \vdots & \vdots \\ 0 & & b_{n-2} & 0 & 0\\ 0 & \dots & 0 & 0 & -b_n \\ 0 & \dots & 0 &-b_n & 0 \end{bmatrix} $$

then $f(y) = \sum_{i=1}^{n-2}b_i y_i ^2 - 2 b_ny_{n-1}y_n = y^{t} B y $. So we are interested in finding an invertible matrix $P$ such that $P^{t} A P = B$, where $A=\text{diag}(a_1, a_2, \dots, a_{n})$.

Since $B$ is symmetric with set of eigenvalues $\text{spec}(B)=\{ b_1, b_2, \dots, b_{n-2}, b_n, -b_n \}$, there exists an orthogonal matrix $Q$ such that $$ B=QDQ^{t} $$ where $D=\text{diag}(b_1, b_2, \dots, b_{n-2}, b_n, -b_n)$. Hence we must find $P$ such that:
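The eigenvalue claim is easy to confirm numerically: the off-diagonal $-b_n$ block contributes exactly $\pm b_n$ to the spectrum. A minimal sketch with sample values (the $b_i$ below are illustrative, not from the problem):

```python
import numpy as np

# Sample coefficients (illustrative choices, not from the post).
b = [3.0, 5.0]          # b_1, ..., b_{n-2}, so here n = 4
bn = 2.0                # b_n

# Build B: diag(b_1, ..., b_{n-2}) with the -b_n off-diagonal block at the end.
n = len(b) + 2
B = np.zeros((n, n))
B[:len(b), :len(b)] = np.diag(b)
B[n - 2, n - 1] = B[n - 1, n - 2] = -bn

eig = np.sort(np.linalg.eigvalsh(B))
expected = np.sort(np.array(b + [bn, -bn]))
print(np.allclose(eig, expected))   # True: spectrum is b_1, ..., b_{n-2}, +b_n, -b_n
```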

$$ P^{t}AP=QDQ^{t} \Longrightarrow D=(PQ)^{t}A(PQ). $$ In the last equation all matrices are known except $P$, but one still needs a clever idea to compute it. Hope this helps.
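One way the computation can be finished, sketched here with sample numbers (not from the post): if the diagonal entries $a_i$ of $A$ and the eigenvalues $d_i$ of $B$ are ordered so their signs match entrywise (Sylvester's law of inertia guarantees such an ordering exists when $A$ and $B$ are congruent), then $R = \text{diag}(\sqrt{d_i/a_i})$ satisfies $R^{t}AR = D$, so $P = RQ^{t}$ gives $P^{t}AP = QDQ^{t} = B$.

```python
import numpy as np

# Sample B of the required shape (b_1, b_2, b_n are illustrative values).
b1, b2, bn = 3.0, 5.0, 2.0
B = np.diag([b1, b2, 0.0, 0.0])
B[2, 3] = B[3, 2] = -bn

d, Q = np.linalg.eigh(B)              # B = Q diag(d) Q^T, d ascending: -2, 2, 3, 5
a = np.array([-1.0, 2.0, 3.0, 4.0])   # sample diagonal of A, signs matching d
A = np.diag(a)

# R^T A R = diag(a_i * d_i/a_i) = D, hence P = R Q^T is a valid change of variables.
R = np.diag(np.sqrt(d / a))
P = R @ Q.T
print(np.allclose(P.T @ A @ P, B))    # True
```

Note this $P$ need not be orthogonal; it is merely invertible, which is all a change of variables requires.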


I should emphasize that a linear change of variables for a quadratic form is typically taken to be a matrix $P$ of determinant $1,$ with transformed matrix $P^T A P.$ It is not generally required that $P$ be orthogonal.

The question as stated is false when there are $n-1$ positive diagonal elements and a single zero. It is true if there are at least two zeros: just permute so those sit at the end, and your $b_n$ becomes zero. Permuting a pair of diagonal entries by a matrix $P$ of determinant $+1,$ that is, forming $P^T A P,$ looks like $$ \left( \begin{array}{rr} 0 & -1 \\ 1 & 0 \end{array} \right) \left( \begin{array}{rr} u & 0 \\ 0 & v \end{array} \right) \left( \begin{array}{rr} 0 & 1 \\ -1 & 0 \end{array} \right) = \left( \begin{array}{rr} v & 0 \\ 0 & u \end{array} \right) $$
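A quick numerical check of this swap step, with arbitrary sample entries $u, v$:

```python
import numpy as np

# Conjugating diag(u, v) by a 90-degree rotation (det = +1) swaps the entries.
u, v = 7.0, -3.0                     # arbitrary sample diagonal entries
P = np.array([[0.0, 1.0],
              [-1.0, 0.0]])         # det P = +1
A = np.diag([u, v])
print(P.T @ A @ P)                   # diag(v, u), i.e. [[-3, 0], [0, 7]]
```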

If there is at least one positive diagonal entry and at least one negative one, first consider the case where their product is $-1.$ Permute, two entries at a time, so that these are the last two diagonal entries. Then $$ \left( \begin{array}{rr} \frac{1}{2} & -\frac{w}{2} \\ \frac{1}{w} & 1 \end{array} \right) \left( \begin{array}{rr} w & 0 \\ 0 & - \frac{1}{w} \end{array} \right) \left( \begin{array}{rr} \frac{1}{2} & \frac{1}{w} \\ - \frac{w}{2} & 1 \end{array} \right) = \left( \begin{array}{rr} 0 & 1 \\ 1 & 0 \end{array} \right) $$ This is a small part of Lemma 2.1 on page 15 of Rational Quadratic Forms by Cassels: a regular isotropic quadratic space contains a hyperbolic plane. I recommend Cassels.
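The displayed identity can be verified numerically for any nonzero $w$; here is a sketch with the arbitrary choice $w = 2$:

```python
import numpy as np

# P^T diag(w, -1/w) P should give the hyperbolic plane [[0, 1], [1, 0]].
w = 2.0                              # arbitrary nonzero sample value
P = np.array([[0.5, 1.0 / w],
              [-w / 2.0, 1.0]])      # det P = 1/2 + 1/2 = 1
A = np.diag([w, -1.0 / w])
print(P.T @ A @ P)                   # [[0, 1], [1, 0]]
```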

If the product of the two diagonal elements is $- c^2$ for real $c,$ we can choose an appropriate $w$ (namely, $w$ such that the two entries are $cw$ and $-c/w$) to give $$ \left( \begin{array}{rr} \frac{1}{2} & -\frac{w}{2} \\ \frac{1}{w} & 1 \end{array} \right) \left( \begin{array}{rr} cw & 0 \\ 0 & - \frac{c}{w} \end{array} \right) \left( \begin{array}{rr} \frac{1}{2} & \frac{1}{w} \\ - \frac{w}{2} & 1 \end{array} \right) = \left( \begin{array}{rr} 0 & c \\ c & 0 \end{array} \right) $$
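The general case can be checked the same way; the sample entries below are illustrative, chosen so their product is $-c^2$ with $c = 3$ (hence $w = a/c = 2$):

```python
import numpy as np

# Starting from diag(a, b) with a*b = -c^2, write a = c*w, b = -c/w with w = a/c.
a, b = 6.0, -1.5                      # sample entries: a * b = -9 = -(3)^2
c = 3.0
w = a / c                             # w = 2
P = np.array([[0.5, 1.0 / w],
              [-w / 2.0, 1.0]])
A = np.diag([a, b])
print(P.T @ A @ P)                    # [[0, c], [c, 0]] = [[0, 3], [3, 0]]
```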