I've read in a bunch of texts that the S-polynomial of two polynomials with relatively prime leading terms always reduces to zero, i.e. the remainder of the division of $S(f,g)$ by a set of polynomials containing $f$ and $g$ is zero.
Can someone give me a hint about the proof? I've read in an answer to this post that whenever $\gcd(\mathrm{LT}(f),\mathrm{LT}(g))=1$ we can write $S(f,g)=-(g-\mathrm{LT}(g))f+(f-\mathrm{LT}(f))g$, but I don't know why this should help: why can we say that that form of $S(f,g)$ as a polynomial combination of $f$ and $g$ comes from a division? I know that not all the expressions of a polynomial $f\in k[x_1,\dots, x_n]$ as $f=p_1q_1+\cdots+p_sq_s+r$ come from a division of $f$ by $\{q_1,\dots, q_s\}$...
This relates to the reference you mention on improvements to Buchberger's algorithm. In the case of Cox, Little, and O'Shea's *Ideals, Varieties, and Algorithms*, the relevant parts for understanding the proof I want to share are Proposition 4 of Section 2.9 and Exercise 11 of Section 2.3.
The idea goes as follows:
Define $\exp(F)$ as the exponent of the leading term of a polynomial $F$, and $N(F)$ as the set of exponents of the monomials appearing in the distributive form of $F$. For our polynomials write $\alpha = \exp(F)$, $\beta = \exp(G)$.
Under our hypothesis $\gcd(x^{\alpha},x^{\beta}) = 1$, so $\operatorname{lcm}(x^{\alpha},x^{\beta}) = x^{\alpha+\beta}$. Write also $F = x^{\alpha}+F'$ and $G = x^{\beta}+G'$, where we assume for simplicity (as Cox does) that $F$ and $G$ are monic. Then we can express the $S$-polynomial as:
$$\begin{aligned}
S(F,G) &= x^{\alpha+\beta-\alpha}F - x^{\alpha+\beta-\beta}G = x^{\beta}F - x^{\alpha}G \\
&= x^{\alpha+\beta} + x^{\beta}F' - x^{\alpha+\beta} - x^{\alpha}G' = x^{\beta}F' - x^{\alpha}G' \\
&= x^{\beta}F' - x^{\alpha}G' + F'G' - F'G' \\
&= F'(x^{\beta}+G') - G'(x^{\alpha}+F') \\
&= F'G - G'F = Q_1F + Q_2G, \quad \text{with } Q_1 = -G',\ Q_2 = F'.
\end{aligned}$$
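As a concrete sanity check, here is a short SymPy script (the polynomials $F$, $G$ are ad-hoc examples of my choosing, with coprime leading terms in graded lex order) that verifies both the identity $S(F,G) = F'G - G'F$ and that the remainder on division by $\{F,G\}$ is zero:

```python
# Check S(F,G) = F'G - G'F and zero remainder for an ad-hoc example.
from sympy import symbols, LT, lcm, expand, reduced

x, y = symbols('x y')
order = 'grlex'

# Monic polynomials with coprime leading terms: LT(F) = x^2, LT(G) = y^3.
F = x**2 + x*y + 1
G = y**3 + x + 2

ltF = LT(F, x, y, order=order)          # x**2
ltG = LT(G, x, y, order=order)          # y**3
L = lcm(ltF, ltG)                       # coprime => L = x**2 * y**3

# S-polynomial.
S = expand(L/ltF * F - L/ltG * G)

# Tails F' = F - LT(F) and G' = G - LT(G).
Fp = expand(F - ltF)
Gp = expand(G - ltG)
assert expand(S - (Fp*G - Gp*F)) == 0   # the identity above

# Multivariate division of S by {F, G}: remainder must vanish.
_, r = reduced(S, [F, G], x, y, order=order)
assert r == 0
print(r)
```

The same check works for any pair with coprime leading terms; only the two example polynomials are specific to this script.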
We are still not happy with this expression, since the precise meaning of $R(S(F,G),\{F,G\}) = 0$ requires that the computed remainder be unique. So if you revisit Exercise 11 above, you'll see that you need to check two further conditions:
$\exp(F)+N(Q_1) \subseteq \Delta^1 = \exp(F)+\mathbb{N}^n$
$\exp(G)+N(Q_2) \subseteq \Delta^2 = (\exp(G)+\mathbb{N}^n) \setminus \Delta^1$
The second condition is the "difficult" one; argue by contradiction:
Suppose there is $\gamma \in N(Q_2)$ with $\beta+\gamma \in \Delta^1$; then $\beta+\gamma = \alpha + \delta$ with $\delta \in \mathbb{N}^n$. One can define $\lambda \in \mathbb{N}^n$ such that $\gamma = \alpha + \lambda$, and this yields a contradiction: since $Q_2 = F'$ and $x^{\alpha}$ is the leading term of $F$, every $\gamma \in N(F')$ satisfies $x^{\gamma} < x^{\alpha}$, whereas $\gamma = \alpha + \lambda$ means $x^{\alpha}$ divides $x^{\gamma}$, hence $x^{\gamma} \geq x^{\alpha}$.
The precise $\lambda$ is given by $\lambda_i = \gamma_i$ if $\beta_i > 0$ (because in that case $\alpha_i = 0$, since $\gcd(x^{\alpha},x^{\beta}) = 1$), and $\lambda_i = \delta_i$ if $\beta_i = 0$.
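The componentwise construction of $\lambda$ can be sanity-checked numerically; here is a small script (the exponent vectors $\alpha,\beta,\gamma$ are ad-hoc examples) that builds $\lambda$ exactly as above and confirms $\gamma = \alpha + \lambda$:

```python
# Sanity check of the construction of lambda (ad-hoc exponent vectors).
# alpha, beta coprime in the monomial sense: no index with both entries > 0.
alpha = (2, 0, 1)   # exponent of LT(F)
beta  = (0, 3, 0)   # exponent of LT(G)
assert all(a == 0 or b == 0 for a, b in zip(alpha, beta))

# Pick gamma so that beta + gamma lies in Delta^1 = alpha + N^n,
# i.e. delta = beta + gamma - alpha has nonnegative entries.
gamma = (3, 1, 2)
delta = tuple(b + g - a for a, b, g in zip(alpha, beta, gamma))
assert all(d >= 0 for d in delta)       # beta + gamma = alpha + delta

# lambda_i = gamma_i where beta_i > 0 (there alpha_i = 0), else delta_i.
lam = tuple(g if b > 0 else d for b, g, d in zip(beta, gamma, delta))

# Key consequence: gamma = alpha + lambda, so x^alpha divides x^gamma.
assert gamma == tuple(a + l for a, l in zip(alpha, lam))
print(lam)
```

The two branches mirror the two cases in the definition: where $\beta_i > 0$ coprimality forces $\alpha_i = 0$, and where $\beta_i = 0$ the equation $\beta+\gamma = \alpha+\delta$ reads $\gamma_i = \alpha_i + \delta_i$.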
I think this proof is easier than the one in Adams once you grasp the notation and have the necessary background, especially regarding the precise formulation of the division algorithm with unique remainder and quotients.