I've got a large system of equations: $$ \begin{align*} (2^0)^na_n + (2^0)^{n-1}a_{n-1} + \cdots + (2^0)^1a_1 &= 4^0 \\ (2^1)^na_n + (2^1)^{n-1}a_{n-1} + \cdots + (2^1)^1a_1 &= 4^1 \\ \vdots\\ (2^{n-1})^na_n + (2^{n-1})^{n-1}a_{n-1} + \cdots + (2^{n-1})^1a_1 &= 4^{n-1} \\ \end{align*} $$ for $n\geq2$. I'd like to show that the unique solution of this system is $a_2=1$ with all other variables zero. I'm not very familiar with linear algebra, but I tried putting these equations in matrix form. My attempt involved induction on $n$, thinking of one matrix as a "sub-matrix" of the next one. Again, due to my limited knowledge of linear algebra, I got nowhere with it.
Also, is induction a good idea for such a problem or would you use another method of proof?
Thank you.
Suppose that $(a_1,\ldots,a_n)$ is a solution of the proposed system. Consider $P(X)=-X^2+\sum_1^na_jX^j$. This is a polynomial of degree at most $n$, with $P(0)=0$. Moreover, the $i$-th equation says $\sum_{j=1}^na_j(2^i)^j=4^i=(2^i)^2$, so $P(2^{i})=0$ for every $i=0,\ldots,n-1$. Thus $P$ has at least $n+1$ distinct zeros, namely $0,2^0,2^1,\ldots,2^{n-1}$; since a nonzero polynomial of degree at most $n$ has at most $n$ zeros, $P$ must be identically zero. Hence $a_2=1$ and $a_j=0$ for all $j\neq2$, i.e. $(a_1,a_2,\ldots,a_n)=(0,1,0,\ldots,0)$. The converse is trivially true.
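As a numerical sanity check, one can solve the system directly for a small $n$ and confirm that the only solution is $a_2=1$ with the remaining coefficients zero. This is just a sketch using NumPy (the choice $n=5$ is arbitrary):

```python
import numpy as np

n = 5  # arbitrary small example; the claim holds for any n >= 2

# Coefficient matrix: row i (i = 0..n-1), column j-1 holds (2^i)^j for j = 1..n
A = np.array([[(2.0**i) ** j for j in range(1, n + 1)] for i in range(n)])
# Right-hand side: 4^i for i = 0..n-1
b = np.array([4.0**i for i in range(n)])

a = np.linalg.solve(A, b)
# Expect a_2 = 1 (index 1 in 0-based indexing) and all other entries ~ 0
print(np.round(a, 6))
```

The matrix here is essentially a Vandermonde matrix in the distinct nodes $2^0,\ldots,2^{n-1}$ (with the constant column removed), which is why the solution is unique in the first place.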