It seems intuitively correct to say that a system of polynomial equations has finitely many solutions if there are as many equations as there are variables in the system. However, how would you prove that a system of polynomial equations with fewer equations than variables has infinitely many solutions (corresponding to the "under-determined" case in linear systems)? And how would you characterize when some of the equations are redundant, i.e., when several of them can be combined to produce another, analogous to linear dependence in linear systems?
For context, I'm in the middle of an abstract algebra course, and we've just finished Chapter 7 of Dummit & Foote (though this question isn't yet directly related to the course, that's just my background knowledge).
This question turns out to be deeper than it seems, since it has a much richer theory than the linear case. In particular, it's one of the questions leading to elimination theory.
To illustrate why this problem is more complex than it seems, let me give an example of a system of polynomials that can have zero, finitely many, or infinitely many solutions, depending on what kind of solutions you are looking for:
$$ \{ x^2-2, y^2 + z^2 \}$$
First, note that this system has no rational solutions, since there is no $x \in \mathbb{Q}$ such that $x^2 = 2$. Next, over the real numbers, the system has exactly two solutions, $x = \pm\sqrt{2}, y = z = 0$, since for any $y, z \in \mathbb{R}$ where $y \neq 0$ or $z \neq 0$, $y^2+z^2 > 0$. Finally, over the complex numbers, this system has infinitely many solutions, namely those where $x = \pm\sqrt{2}$ and $y = \pm iz$, with $z \in \mathbb{C}$ arbitrary.
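If you want to check these solution families concretely, here is a small sketch using sympy (an assumption on tooling; any computer algebra system would do). It substitutes the claimed real and complex solutions into the system and verifies that both polynomials vanish:

```python
from sympy import symbols, sqrt, I, simplify

x, y, z = symbols('x y z')
system = [x**2 - 2, y**2 + z**2]

# Over the reals: the only solutions are (±√2, 0, 0).
for xv in (sqrt(2), -sqrt(2)):
    assert all(p.subs({x: xv, y: 0, z: 0}) == 0 for p in system)

# Over the complex numbers: for ANY value t of z, the points
# (±√2, ±i·t, t) are solutions, so there are infinitely many.
t = symbols('t')
for xv in (sqrt(2), -sqrt(2)):
    for yv in (I * t, -I * t):
        vals = {x: xv, y: yv, z: t}
        # (±i·t)² + t² = -t² + t² = 0
        assert all(simplify(p.subs(vals)) == 0 for p in system)

print("all substitutions vanish")
```

Since the complex check holds for a symbolic parameter $t$, it confirms a one-parameter family of solutions rather than just isolated points.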
Over the complex numbers (or more generally, over an algebraically closed field), one can say that if we have $n$ variables and $k$ polynomials with $k < n$, then there are either no solutions or infinitely many solutions; this uses a more general notion of dimension (a key ingredient in the argument is Krull's Principal Ideal Theorem). Over other fields, these questions turn out to be significantly more complex.
Algorithmically, one would use Buchberger's algorithm to transform the set of polynomials into a Gröbner basis (using a lexicographic term ordering), which generalizes row-echelon form. If we have $n$ variables $x_1, \ldots, x_n$ and, for each $i = 1, \ldots, n$, the Gröbner basis contains a polynomial whose leading term is a pure power of $x_i$ (such a polynomial involves only the variables $x_i, \ldots, x_n$), then the system can have only finitely many solutions. Conversely, if the Gröbner basis contains no such polynomial for some $i$, and the field we're interested in is algebraically closed, then there must be infinitely many solutions.
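This finiteness criterion can be sketched in a few lines with sympy's `groebner` function (the helper name `has_finitely_many_solutions` is my own; the leading-monomial check is the criterion described above):

```python
from sympy import groebner, symbols, Poly

def has_finitely_many_solutions(polys, gens):
    """Criterion over an algebraically closed field: the system has
    finitely many solutions iff, for every variable, some element of a
    Groebner basis has a leading monomial that is a pure power of it."""
    G = groebner(polys, *gens, order='lex')
    # Leading monomials as exponent tuples, one per basis element.
    leading = [Poly(g, *gens).monoms(order='lex')[0] for g in G.exprs]
    pinned = set()
    for mono in leading:
        nonzero = [i for i, e in enumerate(mono) if e > 0]
        if len(nonzero) == 1:          # pure power of a single variable
            pinned.add(nonzero[0])
    return len(pinned) == len(gens)    # every variable must be "pinned"

x, y, z = symbols('x y z')

# The example system: z never occurs alone, so infinitely many
# complex solutions.
print(has_finitely_many_solutions([x**2 - 2, y**2 + z**2], (x, y, z)))

# Adding a third equation pins z down, and the criterion flips.
print(has_finitely_many_solutions([x**2 - 2, y**2 + z**2, z - 1], (x, y, z)))
```

For the two-equation system the lex basis is just $\{x^2-2,\; y^2+z^2\}$, whose leading monomials $x^2$ and $y^2$ leave $z$ unconstrained; with $z - 1$ added, the basis reduces to $\{x^2-2,\; y^2+1,\; z-1\}$ and every variable is pinned.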
Edit: $x^2 = 2$ has two solutions, not one. Thanks @lisyarus!