Why do we generally require $n$ equations for $n$ unknowns?


Ever since I wrote my first $x$, it has been drilled firmly into my head that, to "solve" for $n$ variables $\{x_1, \ldots, x_n\}$, you generally need to specify $n$ functions $\{f_i : X^n \to R\}$ that vanish at your solution $(x_1, \ldots, x_n)$. This was presented as a condition that is, generically, both necessary and sufficient for getting a finite (nonzero) number of solutions.

Some years down the road, I learned linear algebra, and the situation for systems of linear equations became clear: "well-posed" solve-for-$x$ problems were exactly those with full-rank matrices. For rank-deficient matrices, there were either $0$ or infinitely many solutions, and in the latter case the solution space could still be quantified by its dimension.
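The linear picture can be checked directly: this is a minimal numpy sketch (with made-up matrices) comparing a full-rank system, where the solution is unique, against a rank-deficient one, where the nullspace dimension $n - \operatorname{rank}$ measures the solution set.

```python
import numpy as np

# Hypothetical full-rank 3x3 system A x = b: exactly one solution.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])

print(np.linalg.matrix_rank(A))   # 3: full rank
x = np.linalg.solve(A, b)         # the unique solution

# Rank-deficient matrix: third row = first row + second row.
B = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])
print(np.linalg.matrix_rank(B))   # 2
# The solution set of B x = 0 has dimension n - rank = 3 - 2 = 1.
```

The third rank-deficient equation adds no new constraint, so the homogeneous solution set is a line rather than a point.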

Is there a way to formalize this idea a little more rigorously for continuous functions in general? I sort of understand the idea of dimension in algebraic geometry: my idea is that the "effectiveness" of a system of equations is measured by the dimension of their variety, since each time you quotient the coordinate ring by one equation, this is equivalent to "substituting" one equation into the other like we all did when we were kids. Does this algebraic dimension agree with the linear-algebraic intuition for dimension (dimension of the tangent space)? Is there a concrete (possibly differential) way to compute this dimension efficiently by hand?


There are 2 best solutions below

BEST ANSWER

One way to formalize this is by appealing to Sard's theorem: if $F: R^n\to R^m$ is a $C^\infty$ function, then for "generic" $b\in R^m$ the solution set of the equation $F(x)=b$ is a smooth manifold of dimension $n-m$ (possibly empty!). In particular, if $m=n$, then for "generic" $b$ the solution set is a discrete subset of $R^n$, and if $m>n$, then for "generic" $b$ the solution set is necessarily empty. Here "generic" means "in the complement of a measure-zero subset". Instead of taking "generic" $b$ one can consider "generic" $F$, but defining this rigorously requires more (though not much more) work.
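In practice, the regular-value condition behind this statement can be checked numerically by looking at the rank of the Jacobian on the level set. A sketch, using a finite-difference Jacobian and the unit sphere $F(x,y,z)=x^2+y^2+z^2$ at level $b=1$ (the function and tolerances here are illustrative choices, not part of the answer above):

```python
import numpy as np

# F : R^3 -> R^1; the level set F = 1 is the unit sphere.
def F(p):
    return np.array([p[0]**2 + p[1]**2 + p[2]**2])

def jacobian(f, p, h=1e-6):
    # Forward-difference approximation of the m x n Jacobian at p.
    fp = f(p)
    J = np.zeros((fp.size, p.size))
    for j in range(p.size):
        q = p.copy()
        q[j] += h
        J[:, j] = (f(q) - fp) / h
    return J

n, m = 3, 1
p = np.array([1.0, 0.0, 0.0])            # a point on the level set F = 1
J = jacobian(F, p)                       # ~ [2, 0, 0]
rank = np.linalg.matrix_rank(J, tol=1e-4)
print(rank)                              # 1: the Jacobian is surjective here
print(n - rank)                          # 2: local dimension of the sphere
```

When the Jacobian has full rank $m$ at every solution, the implicit function theorem makes the solution set a manifold of dimension $n-m$, matching the "generic" count above.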

One can weaken the regularity hypothesis here (reduce the required order of smoothness, see the link), but there are no such theorems (as far as I know) for functions which are merely continuous.

  • The solution set of one linear equation $f(x)=0$ in a vector space is a hyperplane.

  • The solution set of one nonlinear equation $f(x)=0$ in a Euclidean space is a hypersurface.

In both cases, the dimension of the solution set is one less than the dimension of the ambient space, provided the equation is not the zero equation in the linear case, and $0$ is a regular value of $f$ in the nonlinear case.

From this, it follows that every time you add an equation, the dimension of the solution set goes down by $1$, provided each new equation is regular with respect to the previous solution set. That translates into linear independence of the equations in the linear case and transversality in the nonlinear case.

So, after $n$ equations the solution set has dimension $0$, and thus consists of a single point in the linear case and of isolated points in the nonlinear case.
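The dimension-counting above can be checked numerically: stacking a second, transversal equation onto the first drops the rank-based dimension by one more. A sketch (the specific system, a sphere cut by the plane $z=0$, is an illustrative choice) using a finite-difference Jacobian:

```python
import numpy as np

# Two equations in R^3: a sphere (dim 2) intersected with a plane.
# Transversal intersection -> a circle (dim 1).
def G(p):
    x, y, z = p
    return np.array([x**2 + y**2 + z**2 - 1.0,   # sphere of radius 1
                     z])                          # plane z = 0

def jacobian(f, p, h=1e-6):
    # Forward-difference approximation of the m x n Jacobian at p.
    fp = f(p)
    J = np.zeros((fp.size, p.size))
    for j in range(p.size):
        q = p.copy()
        q[j] += h
        J[:, j] = (f(q) - fp) / h
    return J

p = np.array([1.0, 0.0, 0.0])            # lies on both hypersurfaces
J = jacobian(G, p)                       # 2 x 3 Jacobian, rows ~ [2,0,0], [0,0,1]
rank = np.linalg.matrix_rank(J, tol=1e-4)
print(3 - rank)                          # 1: each transversal equation cut one dimension
```

Adding a third transversal equation would drop the dimension to $0$, leaving isolated points, exactly as in the answer's count.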