I was reading a proof of the non-convexity (even locally) of the loss landscape of high-dimensional neural networks. Specifically, the proof of Proposition 2 in the paper seems to use the following fact at some point:
Let $A,B \in \mathbb{R}^{m\times m}$ be symmetric matrices. Suppose that $\operatorname{rank}(A) \leq n$ and that $\operatorname{rank}(B) > 2n$. Then we can always find a unit vector $v$ such that $v^{\intercal}Av = 0$ but $v^{\intercal}Bv \neq 0$.
Is there a reason we should expect such a $v$ to exist? If so, is the condition $\operatorname{rank}(B) > 2n$ necessary? Or is it just sufficient? Simultaneous vanishing of quadratic forms seems to be similar but not quite the same thing, and I wasn't able to find anything else about this. Any insight into this would be appreciated!
Edit:
Here's what I've come up with thus far.
It suffices to find a vector $v \in \operatorname{null}(A)$ with $v^{\intercal}Bv \neq 0$, since any such $v$ automatically satisfies $v^{\intercal}Av = 0$. Equivalently, we are asking for a generalized eigenvector of the problem $$ Av = \lambda Bv $$ with eigenvalue $\lambda = 0$. A natural place to look is $\operatorname{null}(A)\cap\operatorname{col}(B)$, although note that $v\in\operatorname{col}(B)$ alone does not force $v^{\intercal}Bv \neq 0$ in general. Let $U = \operatorname{null}(A)$ and $V = \operatorname{col}(B)$. Since $\operatorname{rank}(A) \leq n$, we have $\dim(U) \geq m-n$, and since $\operatorname{rank}(B) > 2n$, we have $\dim(V) > 2n$. By the dimension formula, $$ \dim(U+V) = \dim(U) + \dim(V) - \dim(U\cap V). $$ Since $\dim(U + V) \leq m$ and $\dim(U) + \dim(V) > m+n$, it follows that $\dim(U \cap V) > n$, in particular $\dim(U\cap V) \geq 1$, so $U \cap V \neq \{0\}$.
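As a numerical sanity check of the argument above, here is a small NumPy sketch (the sizes $m = 10$, $n = 3$ are arbitrary choices of mine, and I use positive semidefinite $A$ and $B$ purely for convenience of construction; the dimension count itself does not need definiteness). It builds random symmetric matrices with $\operatorname{rank}(A) = n$ and $\operatorname{rank}(B) = 2n+1$, restricts the quadratic form of $B$ to $\operatorname{null}(A)$, and picks a unit vector $v$ there:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 10, 3  # hypothetical sizes: ambient dimension m, rank bound n

# Random symmetric A with rank(A) = n and B with rank(B) = 2n + 1 > 2n.
# (PSD matrices are used here only for ease of construction.)
X = rng.standard_normal((m, n))
A = X @ X.T                                  # symmetric, rank n
Y = rng.standard_normal((m, 2 * n + 1))
B = Y @ Y.T                                  # symmetric, rank 2n + 1

# Orthonormal basis for null(A): right-singular vectors past the first n.
_, _, Vt = np.linalg.svd(A)
U = Vt[n:].T                                 # m x (m - n), columns span null(A)

# Restrict the quadratic form of B to null(A) and pick the direction where
# it is largest; any v built this way satisfies v^T A v = 0 automatically.
B_restr = U.T @ B @ U
w = np.linalg.eigh(B_restr)[1][:, -1]        # eigenvector of top eigenvalue
v = U @ w
v /= np.linalg.norm(v)                       # unit vector

print(abs(v @ A @ v))   # numerically zero
print(abs(v @ B @ v))   # clearly nonzero
```

For generic random matrices the restricted form `B_restr` is nonzero, so such a $v$ is found; this is of course only a sanity check for a particular instance, not a proof.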