As the title indicates: why do we need at least $n$ (linearly independent?) equations to determine $n$ unknowns?
Does this rule differ between the linear and the non-linear case?
I found this related question: Necessary condition for uniqueness solution in a system of non-linear equations ; however, I am asking about the existence of a solution, not its uniqueness.
Edit: To make my question more specific: I have $n$ unknowns $x_1, \cdots, x_n$ and a generating equation $f(x_1, \cdots, x_n, t) = 0$, where $f$ is a polynomial in $x_1, \cdots, x_n, t$. Since $f$ depends on $t$, each new choice of $t$ yields a new equation in $x_1, \cdots, x_n$. I want to find the values of these $n$ unknowns. How many equations (generated by varying $t$ in $f$) do I need to guarantee a solution for $x_1, \cdots, x_n$?
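For concreteness, here is a small sketch of this setup with a hypothetical generating polynomial (the specific $f$ below is my own assumption for illustration, not from the question). For each fixed $t$ it happens to be linear in $x_1, x_2$, so varying $t$ stacks up rows of an ordinary linear system:

```python
import numpy as np

# Hypothetical generating polynomial, assumed for illustration:
#   f(x1, x2, t) = x1 + t*x2 + t**2 - 3
# For each fixed t this is one linear equation in x1, x2:
#   1*x1 + t*x2 = 3 - t**2
def row(ti):
    return [1.0, ti], 3.0 - ti**2

# Varying t generates new equations; two independent ones
# suffice here because there are two unknowns.
ts = [1.0, 2.0]
A = np.array([row(ti)[0] for ti in ts])
b = np.array([row(ti)[1] for ti in ts])

x = np.linalg.solve(A, b)   # solves A @ x = b
print(x)                    # -> [ 5. -3.]
```

Note that distinct values of $t$ do not automatically give independent equations; in this sketch $t = 1$ and $t = 2$ happen to produce independent rows.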
References on this matter would be highly appreciated.
Thank you in advance.
Think of it as the intersection of hyper-surfaces. To begin with, keep it simple and stick to the linear case in 2-d: two unknowns and two equations. Each equation defines a line in the plane, and the point where the two lines intersect is the solution. (If the lines are parallel and distinct, the system is inconsistent and has no solution; if they coincide, the equations are not independent and there are infinitely many solutions.) If the equations are non-linear, the curves they define may fail to intersect at all, in which case no solution exists.
Going back to the linear case, but now in three dimensions: each equation defines a plane in 3-d space. Two planes (in general position) intersect in a line, and the third plane cuts that line in a single point, which is the solution. In higher dimensions it is just more of the same: each independent equation cuts the dimension of the solution set by one, which is why $n$ independent equations are needed to pin down a point among $n$ unknowns.