I am having trouble following a solution presented in the text *Galois Theory* by Harold M. Edwards. The question is presented here.
For context, the symmetric functions $\sigma_1, \sigma_2, \dots, \sigma_n$ in $x_1, x_2, \dots, x_n$ are understood to be the elementary symmetric polynomials $$ \sigma_1 = x_1 + x_2 + \dots + x_n \\ \sigma_2 = x_1 x_2 + x_1 x_3 + \dots + x_2 x_3 + \dots + x_{n-1} x_n \\ \vdots \\ \sigma_n = x_1 x_2 \dots x_n. $$
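As a quick sanity check on these definitions (this is my own illustration, not from Edwards' text), the $\sigma_i$ appear, up to alternating sign, as the coefficients of $\prod_i (t - x_i)$. A sympy sketch for $n = 3$:

```python
from sympy import symbols, expand, prod

x1, x2, x3, t = symbols('x1 x2 x3 t')
xs = [x1, x2, x3]

# Elementary symmetric polynomials for n = 3.
s1 = x1 + x2 + x3
s2 = x1*x2 + x1*x3 + x2*x3
s3 = x1*x2*x3

# Vieta: (t - x1)(t - x2)(t - x3) = t^3 - s1*t^2 + s2*t - s3.
lhs = expand(prod(t - x for x in xs))
rhs = expand(t**3 - s1*t**2 + s2*t - s3)
assert lhs == rhs
```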
The solution, also present in the book here, skims over some details which I have tried to prove but was unable to. My question concerns the subcase in which $F$ can be written as $G \cdot y_n^k$, where $k$ is a nonzero integer and $G$ is a polynomial in the $y$'s containing a nonzero term free of $y_n$. Why must $G(\sigma_1, \sigma_2, \dots, \sigma_n)$ be the zero polynomial if $F(\sigma_1, \sigma_2, \dots, \sigma_n)$ is? We have $$ F(\sigma_1, \dots, \sigma_n) = G(\sigma_1, \dots, \sigma_n)\, \sigma_n^k \equiv 0, $$ but $\sigma_n$ may be zero while $G(\sigma_1, \dots, \sigma_n)$ is nonzero. Since $\sigma_n$ vanishes exactly when at least one of $x_1, \dots, x_n$ is zero, is this restriction enough to show that $G(\sigma_1, \dots, \sigma_n)$, or equivalently its term independent of $\sigma_n$, is zero? I am struggling to approach this and beginning to wonder whether the solution is valid.
The $x_i$ are indeterminates, not numbers: the identity $F(\sigma_1, \dots, \sigma_n) \equiv 0$ is an equality of polynomials in the ring $K[x_1,\dots,x_n]$, not a statement about values. In that ring $\sigma_n = x_1 x_2 \dots x_n$ is a nonzero polynomial (it does not matter that it evaluates to $0$ at points where some $x_i = 0$), and $K[x_1,\dots,x_n]$ is an integral domain, so $G(\sigma_1, \dots, \sigma_n)\, \sigma_n^k = 0$ would force $G(\sigma_1, \dots, \sigma_n) = 0$.
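To make this concrete, here is a small sympy sketch (the choice of $G$ below is a hypothetical example of mine, not from Edwards): $\sigma_n$ is a nonzero element of the polynomial ring even though it vanishes at every point with some $x_i = 0$, so multiplying by a power of it cannot kill a nonzero $G(\sigma_1, \dots, \sigma_n)$.

```python
from sympy import symbols, expand

# Work in the polynomial ring Q[x1, x2, x3], i.e. n = 3.
x1, x2, x3 = symbols('x1 x2 x3')

# Elementary symmetric polynomials sigma_1, sigma_2, sigma_3.
s1 = x1 + x2 + x3
s2 = x1*x2 + x1*x3 + x2*x3
s3 = x1*x2*x3

# A hypothetical G(y1, y2, y3) = y1^2 - 3*y2, which has a nonzero
# term free of y3, evaluated at (sigma_1, sigma_2, sigma_3):
G_at_sigma = expand(s1**2 - 3*s2)

# sigma_3 is NOT the zero polynomial, even though it evaluates to 0
# whenever some x_i = 0.
assert expand(s3) != 0

# Q[x1, x2, x3] is an integral domain: the product of two nonzero
# polynomials is nonzero.  So G(sigma) * sigma_3^k can only be the
# zero polynomial if G(sigma) already is (here with k = 2).
product = expand(G_at_sigma * s3**2)
assert G_at_sigma != 0
assert product != 0
```

The point is that "$\sigma_n$ may be zero" only makes sense for *evaluations*; as an element of $K[x_1,\dots,x_n]$ it is nonzero, which is all the cancellation argument needs.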