Relation between a vector $x$, its Vandermonde matrix $V(x)$ and non-linear system of equations


Consider a vector $x \in \mathbb{R}^N$ and its elementwise (Hadamard) powers $x^{\odot k} = x \odot x \odot \cdots \odot x$ ($k$ factors). What can we say about the vector space spanned by the powers $x^{\odot k}$ for $k=0, \ldots, K$?

Consider also the following system of equations: $A\mathbf{1}=0$, $Ax=0$, $Ax^{\odot 2}=0$, \ldots, $Ax^{\odot K}=0$. When does the system have a solution?
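As a quick numerical check of the span question (not part of the original post; this assumes NumPy and a hypothetical helper `powers_matrix`), the dimension of $\operatorname{span}\{x^{\odot 0}, \ldots, x^{\odot K}\}$ grows with $K$ until it saturates at the number of distinct entries of $x$:

```python
import numpy as np

def powers_matrix(x, K):
    """Columns are the Hadamard powers x^{odot 0}, x^{odot 1}, ..., x^{odot K}."""
    return np.vander(x, K + 1, increasing=True)

# N = 5 entries but only 3 distinct values
x = np.array([2.0, 5.0, 2.0, 7.0, 5.0])

# dimension of span{x^{odot 0}, ..., x^{odot K}} for K = 0, ..., 4
ranks = [int(np.linalg.matrix_rank(powers_matrix(x, K))) for K in range(5)]

# the rank is min(K + 1, number of distinct entries): here it saturates at 3
assert ranks == [1, 2, 3, 3, 3]
```

The saturation at the number of distinct entries is exactly the Vandermonde rank behavior discussed in the answer below.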

EDIT: after some comments, I realized my question was not sufficiently informative, so here is my attempt to clarify some doubts and expand it.

My question is: when can we find a vector $x \in \mathbb{R}^N$ that satisfies:

$$A \underbrace{[\mathbf{1} \;\; x \;\; x^{\odot 2} \; \cdots \; x^{\odot K}]}_{V}=0,$$ where $K<N$ and $A$ is an $M \times N$ matrix. The matrix $V$ is an $N \times (K+1)$ Vandermonde matrix whose columns are the Hadamard powers of $x$. By rewriting the above expression as:

$$V^\top A^\top=0,$$ we see that we are searching for the solution of a non-linear system of equations in $x$. Assume that $x$ has at least $K+1$ distinct entries, so that $V$ has full column rank. The fact that all the rows of $A$ must sum to $0$ (forced by the $k=0$ column $\mathbf{1}$) also seems very restrictive.

How can I find a possible solution? I was also thinking about the relationship of this problem with polynomial regression: there we have the Vandermonde matrix $V$ and we try to find the coefficients $a_i$; here it is the opposite: we have the coefficients, but we want to find a "good" vector $x$ such that $a_i^\top x^{\odot k}=0$ for every $k$.
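To make the constraint concrete (this is an illustrative reverse construction, not in the original post): for a fixed $x$ with full-column-rank $V$, the admissible rows of $A$ are exactly the left null space of $V$, which NumPy's SVD can produce. The example below assumes NumPy and picks $N=6$, $K=2$ arbitrarily:

```python
import numpy as np

N, K = 6, 2
x = np.array([1.0, -2.0, 3.0, 0.5, -1.0, 2.0])  # distinct entries: V is full column-rank

V = np.vander(x, K + 1, increasing=True)         # N x (K+1), columns 1, x, x^{odot 2}

# rows of A must lie in the left null space of V, i.e. the null space of V^T;
# the last N - (K+1) right-singular vectors of V^T give a basis for it
_, s, Vt = np.linalg.svd(V.T)
A = Vt[K + 1:, :]                                # (N - K - 1) x N

assert np.allclose(A @ V, 0)                     # every row annihilates all powers
assert np.allclose(A.sum(axis=1), 0)             # the k = 0 column forces zero row sums
```

Going the other way (given $A$, find $x$) is the hard, genuinely non-linear direction; this sketch only shows that for any $x$ with full-column-rank $V$ a compatible $A$ exists whenever $K+1 < N$.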

1 Answer

First, note that you are excluding $\vec{x}^{\circ0}=(1,1,\dots,1)$. The theory is much more natural once this vector is included.

The matrix consisting of the first $N$ Hadamard powers (from $0$ to $N-1$) of $\vec{x}$ is known as (the transpose of) the Vandermonde matrix $\mathcal{V}(\vec{x})$. By simple symbol-pushing, \begin{align*} &\vec{0}=\mathcal{A}\vec{x}^{\circ0}=\mathcal{A}\vec{x}=\dots \\ &\iff\mathcal{A}\mathcal{V}(\vec{x})^{\mathsf{T}}=0 \\ &\iff\mathcal{V}(\vec{x})\mathcal{A}^{\mathsf{T}}=0. \end{align*} It is also straightforward to see that $\mathcal{V}(\vec{x})$ is full-rank iff $\vec{x}$ has no repeated entries. Moreover, one can reduce to the "no repeated entries" case by removing the duplicates.
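A small NumPy check of the rank claim (illustrative, assuming `np.vander` with `increasing=True` as the Vandermonde convention):

```python
import numpy as np

def vander_rank(x):
    """Rank of the square Vandermonde matrix built from x (powers 0..N-1)."""
    return np.linalg.matrix_rank(np.vander(x, increasing=True))

x_distinct = np.array([1.0, 2.0, 3.0, 4.0])
x_repeated = np.array([1.0, 2.0, 2.0, 4.0])

assert vander_rank(x_distinct) == 4   # full rank: no repeated entries
assert vander_rank(x_repeated) == 3   # each duplicate entry drops the rank by one
# removing duplicates reduces to the full-rank "no repeated entries" case
assert vander_rank(np.unique(x_repeated)) == 3
```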

Thus we see:

  • If $\vec{x}$ has exactly $k$ distinct entries, then the powers $\{\vec{x}^{\circ j}\}_{j=0}^{k-1}$ form a basis of their span.
  • Each row of $\mathcal{A}$ corresponds to a polynomial vanishing on the entries of $\vec{x}$.
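The second bullet can be checked numerically (a sketch, assuming NumPy and the convention $\mathcal{V}(\vec{x})_{jk} = x_j^k$): a row of coefficients $a$ with $\mathcal{V}(\vec{x})\,a = 0$ is precisely a polynomial $p(t)=\sum_k a_k t^k$ vanishing at every entry $x_j$:

```python
import numpy as np
from numpy.polynomial import polynomial as P

# x has 3 distinct values among N = 5 entries
x = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
V = np.vander(x, increasing=True)        # V[j, k] = x_j ** k

# ascending coefficients of p(t) = (t-1)(t-2)(t-3), which vanishes on the
# distinct entries of x; pad with a zero to match the N = 5 columns of V
a = np.zeros(5)
a[:4] = P.polyfromroots(np.unique(x))    # [-6, 11, -6, 1]

assert np.allclose(V @ a, 0)             # p(x_j) = 0 for every entry x_j
```

Since $p$ has degree at most $N-1$, such a nonzero row can exist only when $\vec{x}$ has repeated entries, matching the rank statement above.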

To remove the dependence on $\vec{x}^{\circ0}$ (i.e. to use the powers $1$ through $N$ instead of $0$ through $N-1$), note that this is the same as multiplying the $j$th row of $\mathcal{V}(\vec{x})$ by $x_j$, which carries a factor of $\prod_j x_j$ over to the determinant. Thus the above still holds as long as we consider zeros in $\vec{x}$ to be duplicates of an "implicit hidden $0$".
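The determinant factor is easy to verify numerically (a NumPy sketch with an arbitrary choice of $\vec{x}$):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])

V0 = np.vander(x, increasing=True)   # powers 0..N-1
V1 = V0 * x[:, None]                 # multiply row j by x_j -> powers 1..N

# shifting the powers up by one scales the determinant by prod_j x_j,
# so the shifted matrix is singular exactly when some x_j = 0
assert np.isclose(np.linalg.det(V1), np.prod(x) * np.linalg.det(V0))
```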