I am trying to implement a method from a paper that solves a practical problem. The core part is solving a system of polynomial equations of the same total degree.
We have a system of 10 polynomial equations in $x, y, z$, each of degree 3:
$$F_1(x,y,z)=F_2(x,y,z)=\dots=F_{10}(x,y,z)=0$$
The hidden variable method mentioned in the paper suggests the following representation: $$F_i = f_{i1}(z)x^3 + f_{i2}(z)y^3 + f_{i3}(z)x^2y + f_{i4}(z)xy^2 + ... + f_{i8}(z)x + f_{i9}(z)y + f_{i10}(z) \cdot 1$$ where each $f_{ij}$ is a polynomial in $z$ of the corresponding degree, from 0 ($f_{i1}$, for example) to 3 ($f_{i10}$).
Now we can write our system of equations as $$A(z)X=0$$ where $A(z)=\{f_{ij}(z)\}$ and $X=\begin{bmatrix}x^3 & y^3 & x^2y & xy^2 & x^2 & y^2 & xy & x & y & 1\end{bmatrix}^T$.
From linear algebra we know that this system can have a nontrivial solution only if $\det(A(z))=0$.
So writing down $\det(A(z))$ as a function of $z$ gives us a single-variable polynomial in $z$. Its roots $\{z_k\}$ give us the possible values of $z$ that a solution of the system can have.
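For concreteness, here is a minimal sketch of this pipeline in Python/SymPy (my own illustration, not code from the paper; `hidden_variable_matrix` is a name I made up):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# Monomial basis X = [x^3, y^3, x^2 y, x y^2, x^2, y^2, x y, x, y, 1]
basis = [x**3, y**3, x**2*y, x*y**2, x**2, y**2, x*y, x, y, sp.Integer(1)]

def hidden_variable_matrix(polys):
    """Collect each F_i on the monomials in `basis`; the leftover
    coefficients are polynomials in z, forming the rows of A(z)."""
    rows = []
    for F in polys:
        P = sp.Poly(sp.expand(F), x, y)   # z stays in the coefficient ring
        rows.append([P.coeff_monomial(m) for m in basis])
    return sp.Matrix(rows)

# Usage with ten cubics F1, ..., F10 (not shown here):
# A = hidden_variable_matrix([F1, ..., F10])
# det_z = sp.expand(A.det())                 # univariate polynomial in z
# z_candidates = sp.Poly(det_z, z).nroots()  # numerical roots {z_k}
```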
The part above is clear to me, but the next steps are not. It is said that eigenvectors of $A(z_k)$ are solutions of the system! So we have to find an eigenvector, take its 8th and 9th components (which correspond to $x$ and $y$ in the $X$ vector), and we get a valid solution of the whole system. That sounds like magic to me. Is this guaranteed? Will, for example, the 7th component of the vector (which corresponds to $xy$) be the product of the 8th and 9th? What gives such guarantees? It's also mentioned that this holds because $X$ contains all bivariate monomials in $x$ and $y$ of degree up to 3, and I have no idea why that matters.
I tried to check whether this method works on a system of 3 quadratic equations in $x$ and $y$. For example, consider the system
$$x^2 + 2xy + 3y^2 + 4x + 5y + 6 = 0$$ $$6x^2 + 3xy + 2y^2 + x + 5y + 4 = 0$$ $$3x^2 + 2xy + 6y^2 + 4x + y + 5 = 0$$
It can be rewritten as $A(y)X = 0$ where $$A(y) = \begin{bmatrix}1 & 2y + 4 & 3y^2 + 5y + 6 \\ 6 & 3y + 1 & 2y^2 + 5y + 4 \\ 3 & 2y + 4 & 6y^2 + y + 5 \end{bmatrix}$$ and $$X = \begin{bmatrix}x^2 \\ x \\ 1 \end{bmatrix}$$
The determinant of this matrix is a polynomial: $$\det(A(y)) = -37y^3 - 33y^2 + 111y + 43$$
The roots of this polynomial are: $$y_1 \approx -2.069961 \\ y_2 \approx -0.364067 \\ y_3 \approx 1.542137$$
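As a numerical sanity check (my own, not from the paper), these roots can be reproduced with NumPy:

```python
import numpy as np

# Coefficients of det(A(y)) = -37y^3 - 33y^2 + 111y + 43, highest degree first
roots = np.roots([-37, -33, 111, 43])
print(sorted(roots))   # approx [-2.069961, -0.364067, 1.542137]
```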
Now take, for example, $y_1$ and substitute it into $A(y)$:
$A(y_1) = \begin{bmatrix} 1.00000 & -0.13992 & 8.50441 \\ 6.00000 & -5.20988 & 2.21967 \\ 3.00000 & -0.13992 & 28.63848 \end{bmatrix}$
Using an SVD we get the right-singular vector corresponding to the (numerically) zero singular value, $p = \begin{bmatrix} 0.668080 \\ 0.741125 \\ -0.066363 \end{bmatrix}$. Normalizing $p$ by dividing it by its last component, $-0.066363$, should give the answer (a valid $x$ such that $(x, y_1)$ satisfies our system), but it is not a valid answer. So no magic happened! :( Neither did it happen for the null-space vectors of $A(y_2)$ and $A(y_3)$.
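In code, the check I did looks roughly like this (my own Python sketch):

```python
import numpy as np

y1 = -2.069961
A = np.array([[1.0, 2*y1 + 4, 3*y1**2 + 5*y1 + 6],
              [6.0, 3*y1 + 1, 2*y1**2 + 5*y1 + 4],
              [3.0, 2*y1 + 4, 6*y1**2 + y1 + 5]])

# The right-singular vector for the smallest singular value spans the null space
_, _, Vt = np.linalg.svd(A)
p = Vt[-1] / Vt[-1, -1]   # normalize so the last component ("1") equals 1
x = p[1]                  # component that should correspond to the monomial x
print(p[0], x**2)         # these would agree if p had monomial structure; they don't
```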
Now let's consider a system that has solutions. For example, $$ x + 2y = 5 \\ 3x + 4y = 6$$
This can be rewritten as $$A(y)X = 0$$
where
$$A(y) = \begin{bmatrix} 1 & 2y-5 \\ 3 & 4y - 6 \end{bmatrix}$$ and $$X = \begin{bmatrix} x \\ 1 \end{bmatrix}$$
Then $\det(A(y))$ is the polynomial $-2y + 9$, with root $y_1 = 4.5$.
Substituting it back gives us $A(y_1) = \begin{bmatrix} 1 & 4 \\ 3 & 12 \end{bmatrix}$ with a null-space basis vector $p = \begin{bmatrix} -0.97014 \\ 0.24254 \end{bmatrix}$. Dividing it by its last component gives the correct answer $x = -4$, so the magic did happen here.
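The same check as a Python sketch (again my own) does recover $x = -4$:

```python
import numpy as np

y1 = 4.5
A = np.array([[1.0, 2*y1 - 5],
              [3.0, 4*y1 - 6]])   # equals [[1, 4], [3, 12]]

_, _, Vt = np.linalg.svd(A)
p = Vt[-1]
print(p / p[-1])   # approx [-4, 1], i.e. x = -4
```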
I understand that these questions are answered by algebraic geometry, so I started studying it (Cox, Little, O'Shea, Using Algebraic Geometry), but progress is really slow. Given my work schedule, it seems it will take me a year or more to get through the book. But I hate using methods that I don't understand at least on some intuitive level.
Can the amount of theory needed be worked through in, say, a week? Any advice on books, or any intuitions or explanations, would be really, really helpful.


Don't forget that eigenvectors are not unique: if $v$ is an eigenvector of a matrix $A$, then so is $-v$. Recall that an eigenvector $v$ of $A$ is any $v$ satisfying $A v = \lambda v$, and so $-v$ satisfies it too: $A(-v) = \lambda(-v)$.
Also, don't forget that the solution to $Ax=0$ is a right-singular vector of $A$, not an eigenvector of $A$. That is, first form the matrix $M=A^{T}A$; the solution is then the eigenvector of $M$ corresponding to the smallest eigenvalue of $M$.
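In code, the two routes (right-singular vector of $A$ versus eigenvector of $M = A^{T}A$) give the same null vector up to sign; a minimal sketch using the $A(y_1)$ from the linear example above:

```python
import numpy as np

A = np.array([[1.0, 4.0],
              [3.0, 12.0]])       # A(y_1) from the linear example

# Route 1: right-singular vector for the smallest singular value
p_svd = np.linalg.svd(A)[2][-1]

# Route 2: eigenvector of M = A^T A for the smallest eigenvalue
M = A.T @ A
w, V = np.linalg.eigh(M)          # eigh returns eigenvalues in ascending order
p_eig = V[:, 0]

print(p_svd / p_svd[-1])          # approx [-4, 1]
print(p_eig / p_eig[-1])          # approx [-4, 1]
```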