For a lecture that I am preparing as part of an introductory course on algebraic varieties, I am trying to give an elementary approach to non-singular points of affine algebraic sets.
I work over an infinite field $k$ of characteristic $0$, not necessarily algebraically closed, and I would like to show the following: if $Z\subset k^n$ is an algebraic set whose ideal $$\mathcal{I}(Z):= \lbrace P \in k\left[x_1,\ldots ,x_n\right]\ |\ \forall x\in Z,\ P(x)=0\rbrace$$ is generated by a family of polynomials $(f_1,\ldots,f_m)$ whose Jacobian matrix $$J_x = \begin{pmatrix} \frac{\partial f_1}{\partial x_1}(x) & \ldots & \frac{\partial f_1}{\partial x_n}(x) \\ \vdots & & \vdots \\ \frac{\partial f_m}{\partial x_1}(x) & \ldots & \frac{\partial f_m}{\partial x_n}(x) \end{pmatrix}$$ has rank $m$ at a point $x\in Z$, then the tangent cone to $Z$ at $x$ is equal to the Zariski tangent space at $x$ (of course, this can only happen if $m\leq n$).
The Zariski tangent space $T_xZ$ is (almost by definition) the kernel of $J_x$, meaning that $$T_xZ = \bigcap_{i=1}^m\ker f_i'(x)\ ,$$ while the tangent cone $C_x(Z)$ is defined to be the common zero locus of the initial terms of all polynomials in $\mathcal{I}(Z)$: for $f\in \mathcal{I}(Z)$, denote by $f^\ast_x$ the first non-zero term in the Taylor expansion of $f$ at $x$ (so $f^\ast_x$ is a symmetric $r$-linear form on $k^n$, where $r$ is the order of vanishing of $f$ at $x$) and define $$C_x(Z) = \lbrace h\in k^n\ |\ \forall f\in\mathcal{I}(Z),\ f^\ast_x(h,\ldots,h)=0\rbrace.$$
Since $f'_i(x)$ is the initial term of $f_i$ at $x$ for all $i\in\lbrace 1,\ldots,m\rbrace$ (the rank assumption guarantees $f_i'(x)\neq 0$), we always have $C_x(Z)\subset T_xZ$. So the goal is to use the regularity assumption on the Jacobian matrix to prove the reverse inclusion.
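As a sanity check on the two notions, here is a small sympy computation (the curves $y=x^2$ and the node $y^2=x^2+x^3$ are my own toy examples, not from the question), comparing the initial form with the Jacobian at the origin in the smooth and singular cases:

```python
import sympy as sp

x, y = sp.symbols('x y')

def initial_form(f, variables):
    """Non-zero homogeneous component of f of lowest degree (expansion at 0)."""
    poly = sp.Poly(sp.expand(f), *variables)
    d = min(sum(m) for m in poly.monoms())
    return sum(c * sp.Mul(*[v**e for v, e in zip(variables, m)])
               for m, c in zip(poly.monoms(), poly.coeffs()) if sum(m) == d)

# Smooth case: f = y - x^2 at the origin; the Jacobian (0  1) has rank 1.
f = y - x**2
J = sp.Matrix([[sp.diff(f, v).subs({x: 0, y: 0}) for v in (x, y)]])
print(J.rank())                  # 1
print(initial_form(f, (x, y)))   # y: the cone is the line {y = 0}, equal to ker J

# Singular case: the node g = y^2 - x^2 - x^3; the Jacobian vanishes at 0.
g = y**2 - x**2 - x**3
# Here the initial form is y^2 - x^2: the cone is two lines, strictly
# smaller than the tangent space T_0 = k^2.
print(initial_form(g, (x, y)))
```

The node illustrates why some rank hypothesis is needed: without it the cone and the tangent space genuinely differ.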
There is a proof of this in Corollary 10.14-(b) of Andreas Gathmann's 2021 lecture notes (p.82) but it uses the notion of dimension of an algebraic variety, which I do not yet have in the course:
https://www.mathematik.uni-kl.de/~gathmann/class/alggeom-2021/alggeom-2021.pdf
Note that, for all $f\in\mathcal{I}(Z)$, we can write $f=\sum_{i=1}^m a_i f_i$ for some polynomials $a_i\in k[x_1,\ldots,x_n]$. Since $f_i(x)=0$, the linear term of $a_if_i$ at $x$ is $a_i(x)f_i'(x)$, so the linear term of $f$ at $x$ is $\sum_{i=1}^m a_i(x)f_i'(x)$; as the linear forms $(f_1'(x),\ldots,f_m'(x))$ are linearly independent, this linear term is $0$ if and only if $a_i(x)=0$ for all $i \in\lbrace 1,\ldots,m \rbrace$. Can we use this to say something about the initial term of $f$ at $x$? Is this initial term necessarily $0$ on $(h,\ldots,h)$ if $h\in\ker J_x$?
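The observation about the linear term can be checked mechanically; here is a sympy sketch with generators and coefficients of my own choosing ($f_1=x-z^2$, $f_2=y-z^3$, with independent linear parts $x$ and $y$):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# Hypothetical generators (my example) with independent linear parts x and y:
f1 = x - z**2
f2 = y - z**3
# Coefficients with a1(0) = 2 and a2(0) = -1:
a1 = 2 + z
a2 = x - 1
f = sp.expand(a1*f1 + a2*f2)

# Extract the degree-1 homogeneous component of f at the origin.
poly = sp.Poly(f, x, y, z)
lin = sum(c * x**m[0] * y**m[1] * z**m[2]
          for m, c in zip(poly.monoms(), poly.coeffs()) if sum(m) == 1)
print(lin)   # 2*x - y, i.e. a1(0)*f1'(0) + a2(0)*f2'(0)
```

The linear term only sees the values $a_i(0)$, as claimed; the question is what replaces this statement for the initial term when all $a_i(0)$ vanish.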
Notes
- Maybe using the Taylor expansion, rather than changing coordinates so as to assume $x=0$, makes things more complicated: if we do change coordinates and assume $x=0$, then the Taylor expansion is just the decomposition of $f$ into homogeneous components, and the initial term is the non-zero homogeneous component of lowest degree. Is it easier to say something about the initial term of $\sum_{i=1}^m a_if_i$ from this point of view? An argument along these lines would also have the advantage of being valid in positive characteristic, I believe.
- In case I am missing something by not supposing $k$ algebraically closed, this assumption can be added (it is not quite what I would like at this basic stage of the course, but it is OK).
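Regarding the first note: the equivalence of the two points of view is easy to verify in an example. A quick sympy check (the polynomial and base point are my own arbitrary choices) that the homogeneous components of the shifted polynomial are exactly the Taylor terms:

```python
import sympy as sp

x1, x2, h1, h2 = sp.symbols('x1 x2 h1 h2')

# Arbitrary polynomial and base point (my example): f at p = (1, 2).
f = x1**2 * x2 - x1 + 3
p = {x1: 1, x2: 2}

# Shift coordinates: g(h) := f(p + h). The homogeneous components of g in h
# are exactly the Taylor terms of f at p.
g = sp.expand(f.subs({x1: 1 + h1, x2: 2 + h2}))
poly = sp.Poly(g, h1, h2)
lin = sum(c * h1**m[0] * h2**m[1]
          for m, c in zip(poly.monoms(), poly.coeffs()) if sum(m) == 1)

# The degree-1 component agrees with the differential f'(p):
grad = [sp.diff(f, v).subs(p) for v in (x1, x2)]
print(lin)    # 3*h1 + h2
print(grad)   # [3, 1]
```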
Edit: currently incomplete
Taylor expansion plus coordinate change is in fact the easiest way to prove what you're after.
Change coordinates so that $x=(0,\ldots,0)$. Now the Taylor expansions of the $f_i$ are just their decompositions into homogeneous parts. Since the origin is a zero of every $f_i$, none of them has a constant term, and the fact that the Jacobian matrix has rank $m$ implies that the linear terms of these $m$ polynomials form a linearly independent set in the vector space of linear forms in $x_1,\ldots,x_n$: writing $f_i = \sum_{j=1}^n a_{ij}x_j + (\text{higher order terms})$, we see that the vector $$\begin{pmatrix} \frac{\partial f_i}{\partial x_1}(0) & \ldots & \frac{\partial f_i}{\partial x_n}(0)\end{pmatrix}$$ is exactly $\begin{pmatrix} a_{i1} & \ldots & a_{in}\end{pmatrix}$. So the Jacobian matrix having full rank implies that the matrix of linear terms of the $f_i$ has full rank; these linear terms are therefore the initial terms of the $f_i$, and the ideal they generate (which cuts out the tangent cone) has exactly the same zero set as the tangent space. EDIT: as pointed out in the comments, this last claim, that the initial ideal of $(f_1,\ldots,f_m)$ is generated by the linear terms of the $f_i$, needs justification.
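To see what the remaining claim asks in a concrete case, here is a sympy sketch with generators of my own choosing (they cut out the smooth curve $\lbrace(s^2,s^3,s)\rbrace$), exhibiting an ideal element whose linear part vanishes and checking that its initial form still vanishes on the tangent direction:

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')

# Hypothetical generators (my example): they cut out the smooth curve
# {(s^2, s^3, s)}, and their Jacobian at the origin has full rank 2.
f1 = x - z**2
f2 = y - z**3
J = sp.Matrix([[sp.diff(f, v).subs({x: 0, y: 0, z: 0}) for v in (x, y, z)]
               for f in (f1, f2)])
# ker J is the z-axis, so h = (0, 0, 1) spans the tangent space at 0.

def initial_form(f):
    """Non-zero homogeneous component of lowest degree."""
    poly = sp.Poly(sp.expand(f), x, y, z)
    d = min(sum(m) for m in poly.monoms())
    return sum(c * sp.Mul(*[v**e for v, e in zip((x, y, z), m)])
               for m, c in zip(poly.monoms(), poly.coeffs()) if sum(m) == d)

# An ideal element whose linear part vanishes (here a1 = a2 = z, so a_i(0) = 0):
g = z*f1 + z*f2                    # = x*z + y*z - z**3 - z**4
print(initial_form(g))             # x*z + y*z, a quadratic initial form
print(initial_form(g).subs({x: 0, y: 0, z: t}))   # 0 on the tangent direction
```

The point is that the initial form of $g$ is no longer a linear combination of the linear parts $x$ and $y$, yet it still lies in the ideal they generate, so it vanishes on $\ker J$; the claim to justify is that this happens for every element of the ideal.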