I'm reading through Justin Smith's Introduction to Algebraic Geometry. Before getting into coordinate rings, he talks about Gröbner bases.
He gives a division algorithm in which, given an ordering of monomials in $k[X_1,\ldots,X_n]$ and a list of polynomials $\{f_1,\ldots,f_n\}$, for any $f\in k[X_1,\ldots,X_n]$ the algorithm produces polynomials $\{a_i\}$ and a remainder $R$ such that $$f = a_1f_1 + \cdots + a_nf_n + R.$$
Of course the point seems clear: whenever $R=0$, we have $f\in(f_1,\ldots,f_n)$.
He then gives an example calculation, with $f_1 = XY-1$, $f_2=Y^2-1$ and $f=X^2Y+XY^2+Y^2$. He shows the division algorithm at work, and notes that if we swap $f_1$ and $f_2$ around, so that $f_1=Y^2-1$ and $f_2=XY-1$, we get a different $R$. He then says the remainder can vanish under one ordering of the divisors and not another. To me this seems to defeat the whole point.
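For anyone who wants to experiment, this order dependence can be reproduced with SymPy's `reduced`, which runs the multivariate division algorithm and tries the divisors in the order given (a sketch, not anything from the book):

```python
from sympy import symbols, reduced, expand

x, y = symbols('x y')

f = x**2*y + x*y**2 + y**2   # the dividend from the example
f1 = x*y - 1
f2 = y**2 - 1

# Divide f by the list (f1, f2), then by (f2, f1), both under lex order.
Q1, r1 = reduced(f, [f1, f2], x, y, order='lex')
Q2, r2 = reduced(f, [f2, f1], x, y, order='lex')

# Each run yields a valid decomposition f = a_1*f_1 + a_2*f_2 + R,
# but the two remainders r1 and r2 differ.
print(r1, r2)
```

Both decompositions are correct; the algorithm simply does not produce a canonical remainder for an arbitrary list of divisors.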
Immediately after, he gives a proposition: if $\mathfrak{a}=(g_1,\ldots,g_n)$ where $\{g_i\}$ is a Gröbner basis for $\mathfrak{a}$, then $f\in\mathfrak{a}$ if and only if the remainder of $f$ is zero after performing the division algorithm with the Gröbner basis.
Well, this is confusing. We don't need to care at all about Gröbner bases to be sure that either $f\in\mathfrak{a}$ or $f\not\in\mathfrak{a}$; it's obvious that exactly one is true. Yet he claims the order of the basis matters when we compute the remainder, and that some orders may have $R$ vanish while others don't. How is this at all compatible with the proposition? Either I am overlooking something, or something has been left implicit in the book, and I can't see what.
In general, it's not obvious which of $f \in \mathfrak{a}$ or $f \notin \mathfrak{a}$ is true — it can, in fact, be a very difficult calculation to prove one way or another.
Gröbner bases are the best general computational method for settling such questions, because if you have a Gröbner basis, you just run the division algorithm to settle the issue.
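To illustrate, here is a sketch using SymPy (the element `h` below is a hand-built combination chosen so its membership is known in advance; it is not from the book):

```python
from sympy import symbols, groebner, expand

x, y = symbols('x y')

f1 = x*y - 1
f2 = y**2 - 1

# An explicit element of the ideal (f1, f2), so membership is known.
h = expand(x*f1 + y*f2)

# Compute a Groebner basis once; after that, each membership test
# is a single run of the division algorithm.
G = groebner([f1, f2], x, y, order='lex')

print(G.contains(h))                       # True: h is in the ideal
print(G.contains(x**2*y + x*y**2 + y**2))  # False: f(1,1) = 3, so f is not
```

The second test is easy to confirm by hand: every element of $(f_1, f_2)$ vanishes at the common zero $(1,1)$, but $f(1,1)=3$.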
The same is true if you want to pick out a unique representative of every coset of $\mathfrak{a}$: the remainder on division by a Gröbner basis depends only on the coset of $f$, so it serves as a canonical normal form.
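Continuing the SymPy sketch, two polynomials in the same coset reduce to the same remainder modulo a Gröbner basis (again, `h` is a hand-built ideal element used only for illustration):

```python
from sympy import symbols, groebner, expand

x, y = symbols('x y')
f1, f2 = x*y - 1, y**2 - 1
G = groebner([f1, f2], x, y, order='lex')

f = x**2*y + x*y**2 + y**2
h = expand(x*f1 + y*f2)      # an explicit element of the ideal

# f and f + h lie in the same coset, so their normal forms
# (remainders modulo the Groebner basis) agree.
nf_f  = G.reduce(f)[1]
nf_fh = G.reduce(expand(f + h))[1]
print(nf_f, nf_fh)
```

By contrast, reducing modulo an arbitrary generating list gives remainders that depend on the order of the divisors, so no such canonical representative exists there.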