Understanding the proof: $x^TAx = 0$ $\implies$ A is also skew-symmetric


While reviewing the following answer to a previously asked question, I thought I spotted an error, based on what I have learnt so far. Here is the answer:

It is true. We have: $$(x+y)^TA(x+y) = 0 \implies x^TAx + y^TAx + x^TAy + y^TAy = 0.$$But $x^TAx = y^TAy = 0$, so we have: $$x^TAy = -y^TAx.$$Take $x = e_i$ and $y = e_j$ to get $a_{ij} = -a_{ji}$.
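The polarization argument above can be checked numerically. The sketch below (my addition, not part of the quoted answer) builds a skew-symmetric $A$, confirms $x^TAx = 0$ for a random $x$, and verifies that the choice $x = e_i$, $y = e_j$ recovers the entry-wise relation $a_{ij} = -a_{ji}$:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B - B.T                      # A is skew-symmetric by construction

x = rng.standard_normal(4)
assert abs(x @ A @ x) < 1e-9     # x^T A x = 0 for every x when A is skew

i, j = 1, 3
e_i, e_j = np.eye(4)[i], np.eye(4)[j]
# e_i^T A e_j picks out the single entry a_ij, so the identity
# x^T A y = -y^T A x specializes to a_ij = -a_ji
assert np.isclose(e_i @ A @ e_j, -(e_j @ A @ e_i))
assert np.isclose(A[i, j], -A[j, i])
```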

My concern is about this part:

Take $x = e_i$ and $y = e_j$ to get $a_{ij} = -a_{ji}$

My question is: isn't this just a particular example? We said $x^TAx=0$ holds for all $x$, and in one particular choice ($x=e_i$ and $y=e_j$) we obtained $a_{ij} = -a_{ji}$. But what if this fails for other vectors $x$ and $y$? For example, what if some other $x$ and $y$ resulted in $a_{ij} = 2a_{ji}$?

There are 2 answers below.

Best Answer

Given $x^t A x = 0$ for all vectors $x$, consider two vectors $x$ and $y$:

$$ (x + y)^t A (x + y) = x^t A x + x^t A y + y^t A x + y^t A y $$

By our initial condition, $x^t A x = 0$ and $y^t A y = 0$, so the equation becomes:

$$ x^t A y + y^t A x = 0 $$

Since this is true for all vectors $x$ and $y$, we have:

$$ x^t A y = -y^t A x $$Since $y^tAx$ is a scalar, it is equal to its own transpose, i.e. $x^tA^ty$, so this implies $$ x^t A y = x^t (-A^t) y, $$ which gives $$ x^t (A + A^t) y = 0. $$
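The transpose step can be sanity-checked numerically. This small sketch (an illustrative addition) confirms that a $1\times 1$ quantity equals its own transpose, i.e. $y^tAx = x^tA^ty$ for any $A$, $x$, $y$:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
x, y = rng.standard_normal(3), rng.standard_normal(3)
# y^T A x is a scalar, so transposing it changes nothing:
# (y^T A x)^T = x^T A^T y
assert np.isclose(y @ A @ x, x @ A.T @ y)
```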

For the expression $x^t (A + A^t) y$ to be zero for all $x$ and $y$, we claim that the matrix $A + A^t$ must be the zero matrix, i.e.
$$\forall x,y:\; x^t (A + A^t) y=0 \;\Rightarrow\; A+A^t=0.$$ I will prove this by contraposition: $$A+A^t\neq 0 \;\Rightarrow\; \exists x,y:\; x^t (A + A^t) y \neq 0.$$
If $A+A^t \neq 0$, then it has at least one nonzero entry. Choose $x = e_i$ and $y = e_j$, where $(i,j)$ are the coordinates of that nonzero entry; then $e_i^t(A+A^t)e_j = (A+A^t)_{ij}\neq 0$.
Therefore $$ A + A^t = 0, $$ $$A=-A^t.$$
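The contrapositive step above can be sketched in code (my addition, for illustration): take an $A$ with $A + A^t \neq 0$, locate a nonzero entry of $S = A + A^t$, and watch $e_i^t S e_j$ expose it.

```python
import numpy as np

A = np.array([[0., 2.],
              [1., 0.]])        # not skew-symmetric, so A + A^T != 0
S = A + A.T
i, j = np.argwhere(S != 0)[0]   # coordinates of some nonzero entry of S
e_i, e_j = np.eye(2)[i], np.eye(2)[j]
# e_i^T S e_j extracts exactly the entry S[i, j], which is nonzero,
# contradicting x^T (A + A^T) y = 0 for all x, y
assert e_i @ S @ e_j == S[i, j]
assert S[i, j] != 0
```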

Second Answer

You can rewrite $x^TAy = -y^TAx$ in a more extended form as $$ \sum_{i,j}x_iy_j(a_{i,j} + a_{j,i}) = 0 \tag{1} $$ which must be true for all possible $x$ and $y$.
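The expansion in $(1)$ is easy to verify numerically. This sketch (an illustrative addition) checks that $x^TAy + y^TAx$ matches the double sum $\sum_{i,j}x_iy_j(a_{i,j} + a_{j,i})$ for a random $A$, $x$, $y$:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
x, y = rng.standard_normal(3), rng.standard_normal(3)

# Matrix form of the left-hand side of (1)
lhs = x @ A @ y + y @ A @ x
# Explicit double sum over matrix coefficients
rhs = sum(x[i] * y[j] * (A[i, j] + A[j, i])
          for i in range(3) for j in range(3))
assert np.isclose(lhs, rhs)
```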

What we are trying to do here is to solve this in terms of relations between matrix coefficients $a_{i,j}$.

The author of the answer you're citing suggests choosing $x$ and $y$ so that all the terms in sum $(1)$ vanish except one, $x_iy_j(a_{i,j} + a_{j,i}) = a_{i,j} + a_{j,i}$, which leads to the solution: $$a_{i,j} = -a_{j,i} \quad \forall\;i, j. \tag{2}$$


What we need to answer now are two questions: is this solution 1) sufficient and 2) necessary? If both, then $(1)$ is simply equivalent to $(2)$.

It's pretty easy to see that this system is sufficient for $(1)$ to hold for all $x$, $y$ (every summand on the left-hand side is zero). This means no relations on the $a_{i,j}$ beyond those in $(2)$ are needed.
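Sufficiency can also be checked numerically. This sketch (my addition) enforces $a_{i,j} = -a_{j,i}$ by construction and confirms that $(1)$ then holds for arbitrary $x$ and $y$:

```python
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((5, 5))
A = B - B.T                      # enforces a_ij = -a_ji for all i, j
x, y = rng.standard_normal(5), rng.standard_normal(5)
# Each summand pair x_i y_j (a_ij + a_ji) cancels, so (1) holds
assert abs(x @ A @ y + y @ A @ x) < 1e-9
```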

Necessity means that none of the relations in $(2)$ can be dropped. Consider any matrix with some pair of indices $i,j$ for which $a_{i,j} + a_{j,i} \neq 0$. For it we can again take $x = e_i$, $y = e_j$ and get a direct contradiction to $(1)$: $$ \sum_{k,l}x_ky_l(a_{k,l} + a_{l,k}) = a_{i,j} + a_{j,i} \neq 0. $$ Thus, $(2)$ is necessary for $(1)$.

This is pretty much it: we can now claim that $(1)$ is equivalent to $(2)$, so this is the only possible solution; anything else must be equivalent to it.