If we have a consistent system of linear equations in the variables $x_1, x_2, \ldots, x_n$, then we need $n$ (independent) equations in order to pin down a unique solution for $x_1, x_2, \ldots, x_n$.
My question is: what if the equations are nonlinear? How many of them would we need to solve for $x_1, x_2, \ldots, x_n$?
I found this, but it does not give a crisp answer. E.g. it does not say whether more than $n$ equations can be necessary for some kinds of equations in $n$ variables.
In contrast to the linear case, that depends quite heavily on what kind of numbers you accept as a solution!
Let's deal with the case of real or rational numbers first. There, a single equation can be sufficient for any number of variables: you will find that $x_1^2 + \cdots + x_n^2 = 0$ has exactly one solution over $\mathbb{R}$ or $\mathbb{Q}$.
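To see why the single equation suffices, note that a sum of squares of real numbers vanishes only when every summand vanishes. A toy brute-force check in Python over a small grid (purely illustrative, not part of the mathematical argument):

```python
import itertools

# A sum of squares of real numbers is zero only if each term is zero,
# so this single equation already forces x1 = x2 = x3 = 0.
def f(x1, x2, x3):
    return x1**2 + x2**2 + x3**2

# Brute-force search over a small grid of real values.
grid = [i / 2 for i in range(-4, 5)]  # -2.0, -1.5, ..., 2.0
solutions = [(a, b, c) for a, b, c in itertools.product(grid, repeat=3)
             if f(a, b, c) == 0]
print(solutions)  # [(0.0, 0.0, 0.0)]
```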
Over the complex numbers, things are different: since the field of complex numbers is algebraically closed, any system of fewer than $n$ polynomial equations in $n$ variables has either no solutions or infinitely many, so we need at least $n$ equations. Proving this is quite a bit more difficult than in the linear case, though, and requires heavy tools from commutative algebra.
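To illustrate why the sum-of-squares trick fails over $\mathbb{C}$: there, $x^2 + y^2$ factors into two linear forms, so its zero set is a union of two lines and the single equation has infinitely many solutions. A small SymPy sketch (assuming SymPy is available):

```python
from sympy import I, expand, factor, symbols

x, y, t = symbols("x y t")
p = x**2 + y**2

# Over C the sum of squares splits into two linear factors,
# so its zero set is a union of two lines through the origin:
print(factor(p, extension=I))  # (x - I*y)*(x + I*y)

# Every point (x, y) = (I*t, t) lies on one of those lines,
# so the single equation has infinitely many complex solutions:
print(expand(p.subs({x: I*t, y: t})))  # 0
```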
In general, this kind of question is the starting point for studying Algebraic Geometry; the specific question is tackled as part of dimension theory.
Edit: In response to the edit: this is a non-trivial question. There are examples where more than $n$ equations are required; the key phrase is "incomplete intersection".
Edit: Here's an example of a system of three equations in two variables where no two of the equations determine a unique point: $$ \begin{align} x \cdot y = 0\\ x \cdot (x+y) = 0\\ y \cdot (x+y) = 0 \end{align} $$ All three equations combined have a single solution, $x = y = 0$. The first and second together allow $x = 0$ with $y$ arbitrary, the first and third $y = 0$ with $x$ arbitrary, and the second and third any $x, y$ such that $x + y = 0$.
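For those who want to verify the example mechanically, here is a small SymPy sketch (assuming SymPy is available):

```python
from sympy import solve, symbols

x, y = symbols("x y")
eqs = [x*y, x*(x + y), y*(x + y)]

# The point (0, 5) satisfies the first two equations but not the third,
# so two equations alone do not pin down a unique solution:
print([e.subs({x: 0, y: 5}) for e in eqs])  # [0, 0, 25]

# All three together have only the origin as a common zero:
print(solve(eqs, [x, y], dict=True))  # [{x: 0, y: 0}]
```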
Edit: A bit more detail.
My idea in constructing the example was to find three polynomials in two variables such that any two of them have zero sets with a component in common. To keep things simple, I picked three polynomials whose zero sets are unions of lines through the origin.
Now, as noted in the comments, if $p$ has zero set $P$ and $q$ has zero set $Q$, then $p \cdot q$ has zero set $P \cup Q$, so it's fairly easy to derive the equations of the three curves in question (just compute the equation of each line in turn, and multiply).
This generalizes easily: let $p_1, \ldots, p_k$ be polynomials, each with infinitely many zeros, such that for $i \neq j$ the polynomials $p_i$ and $p_j$ have exactly one common zero, the same point for every pair. Define $q_i := p_1 \cdots p_{i-1} \cdot p_{i+1} \cdots p_k$; this gives a set of $k$ polynomials such that any $k-1$ of them have infinitely many common solutions (if $q_i$ is omitted, every zero of $p_i$ is a common solution of the rest), but the entire set has exactly one.
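As a concrete instance of this construction (my own choice of lines, not one from the example above): take $k = 4$ lines through the origin, $p_1 = x$, $p_2 = y$, $p_3 = x + y$, $p_4 = x - y$, and form the products $q_i$. A SymPy sketch:

```python
from functools import reduce
from operator import mul

from sympy import solve, symbols

x, y = symbols("x y")
p = [x, y, x + y, x - y]  # four lines, all through the origin

# q[i] is the product of all p[j] with j != i.
q = [reduce(mul, [p[j] for j in range(4) if j != i]) for i in range(4)]

# Dropping q[3] leaves three cubics that all vanish on the line x = y,
# e.g. at (1, 1), so any three of the four have infinitely many solutions:
print([qi.subs({x: 1, y: 1}) for qi in q])  # [0, 0, 0, 2]

# The full system of four cubics has only the origin as a common zero:
print(solve(q, [x, y], dict=True))  # [{x: 0, y: 0}]
```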
There is a lot more that can be said here, but at this point, the best course of action is to go and read an introduction to Algebraic Geometry.