I have proved many results in commutative algebra that rely on the “simple” idea of localizing and passing to the residue field, but I still feel a little uncomfortable using this powerful idea to write proofs. There are also some simple arguments that use the fact that $A/\mathfrak m$ is a field when $\mathfrak m$ is maximal. I will begin with some examples:
Theorem 1. Let $A\subset B$ be commutative rings with identity and let $\mathfrak{p}\subset A$ be a prime ideal. If $B$ is integral over $A$ then there exists a prime ideal $\mathfrak{q}\subset B$ such that $\mathfrak{q}\cap A=\mathfrak{p}$.
Proof: We assume some standard results on integral dependence. First note that $B_{\mathfrak p}$ is integral over $A_{\mathfrak p}$, and if $\mathfrak n\subset B_{\mathfrak p}$ is maximal then the contraction $\mathfrak m=\mathfrak n\cap A_{\mathfrak p}$ is maximal in $A_{\mathfrak p}$, hence is the unique maximal ideal of the local ring $A_{\mathfrak p}$. Now let $\alpha: A\to A_{\mathfrak p}$ and $\beta: B\to B_{\mathfrak p}$ be the canonical maps, and take the prime ideal $\mathfrak{q}=\beta^{-1}(\mathfrak n)$. It is now easy to verify that $\mathfrak q\cap A=\beta^{-1}(\mathfrak n)\cap A=\alpha^{-1}(\mathfrak n\cap A_{\mathfrak p})=\alpha^{-1}(\mathfrak m)=\mathfrak p$.
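To keep the bookkeeping of contractions straight, it may help to draw the square of rings used in the proof; the labels below are the same $\alpha$, $\beta$, $\mathfrak m$, $\mathfrak n$ as above (this is just my summary diagram, not an extra step of the argument):

```latex
% The four rings of the proof and the canonical maps between them.
% Both horizontal maps are injective (localization is exact).
\begin{array}{ccc}
A & \hookrightarrow & B \\[2pt]
{\scriptstyle\alpha}\big\downarrow & & \big\downarrow{\scriptstyle\beta} \\[2pt]
A_{\mathfrak p} & \hookrightarrow & B_{\mathfrak p}
\end{array}
\qquad
\mathfrak q = \beta^{-1}(\mathfrak n),\quad
\mathfrak m = \mathfrak n \cap A_{\mathfrak p},\quad
\mathfrak p = \alpha^{-1}(\mathfrak m).
```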
Theorem 2 (Hilbert’s Nullstellensatz). Let $k$ be an algebraically closed field and let $\mathfrak a\subset k[x_1,\dots,x_n]$ be an ideal. Then $I(V(\mathfrak a))=\sqrt{\mathfrak a}$.
Proof: If $g\in\sqrt{\mathfrak a}$, say $g^n\in\mathfrak a$, then $g\in I(V(\mathfrak a))$: at every point of $V(\mathfrak a)$ we have $g^n=0$, hence $g=0$, since $k$ is an integral domain. Now we prove the converse:
Let $f\notin \sqrt{\mathfrak a}$ and take a prime ideal $\mathfrak p$ containing $\mathfrak a$ with $f\notin \mathfrak p$. Let $A=k[x_1,\dots,x_n]$. The intuition is to construct a finitely generated $k$-algebra $C$ such that, for any maximal ideal $\mathfrak m\subset C$, Zariski’s Lemma gives $C/\mathfrak m\simeq k$ (here we use that $k$ is algebraically closed), and the resulting point kills $\mathfrak a$ but not $f$. So let $C=(A/\mathfrak p)_f$, which is nonzero since $f\notin\mathfrak p$ and $A/\mathfrak p$ is a domain. The image of $f$ in $C/\mathfrak m$ cannot vanish, since $f$ is invertible in $C$. Now take the point $(t_1,\dots,t_n)\in k^n$ whose coordinates are the images of the generators $x_1,\dots,x_n$ in $C/\mathfrak m\simeq k$; every element of $\mathfrak a$ vanishes at this point while $f$ does not, so $f\notin I(V(\mathfrak a))$ and we are finished with the proof.
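If I spell out the chain of $k$-algebras in this argument (my own summary of the proof, not part of the standard statement), it reads:

```latex
% Chain of k-algebra maps; t_i is the image of x_i under the composite.
% f maps to a unit of C, hence to a nonzero element of C/\mathfrak m \simeq k.
A = k[x_1,\dots,x_n]
  \;\twoheadrightarrow\; A/\mathfrak p
  \;\longrightarrow\; C = (A/\mathfrak p)_f
  \;\twoheadrightarrow\; C/\mathfrak m \;\simeq\; k,
\qquad x_i \;\longmapsto\; t_i .
```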
Theorem 3. Let $A$ be a commutative ring with identity and consider the free module $A^n$. Every set of $n$ generators of $A^n$ is a basis of $A^n$.
Proof: Injectivity is a local property of $A$-module homomorphisms, so we may assume that $A$ is a local ring with maximal ideal $\mathfrak m$. Take the homomorphism $h: A^n\to A^n$ that maps the canonical basis to the $n$ generators. By assumption $h$ is surjective, so we obtain an exact sequence $0\to \mathrm{Ker}(h)\to A^n\xrightarrow{h} A^n\to 0$. Since the right-hand term $A^n$ is free, the sequence splits; hence tensoring with $k=A/\mathfrak m$ gives an exact sequence $0\to k\otimes \mathrm{Ker}(h) \to k\otimes A^n\to k\otimes A^n\to 0$. The map $k\otimes A^n\to k\otimes A^n$ is bijective, being a surjective homomorphism between vector spaces of the same finite dimension. Thus $k\otimes \mathrm{Ker}(h)=0$, and since the splitting exhibits $\mathrm{Ker}(h)$ as a direct summand of $A^n$, it is finitely generated, so $\mathrm{Ker}(h)=0$ by Nakayama’s Lemma.
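Schematically, the proof compares two exact sequences, before and after applying $k\otimes_A{-}$ (again just my summary of the steps above):

```latex
% Before tensoring: the sequence splits because the right-hand A^n is free.
0 \longrightarrow \mathrm{Ker}(h) \longrightarrow A^n
  \xrightarrow{\;h\;} A^n \longrightarrow 0
% After tensoring with k = A/\mathfrak m: a sequence of k-vector spaces.
0 \longrightarrow k\otimes_A\mathrm{Ker}(h) \longrightarrow k^n
  \xrightarrow{\;\bar h\;} k^n \longrightarrow 0
% \bar h is a surjection of n-dimensional spaces, hence bijective,
% so k\otimes_A\mathrm{Ker}(h)=0, and Nakayama gives \mathrm{Ker}(h)=0.
```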
The following is not a “localization” example, but its proof has a similar “taste”.
Theorem 4 (IBN for Commutative Rings). Let $A$ be a commutative ring with identity. Then $A^m\simeq A^n$ implies that $m=n$.
Proof: Take a maximal ideal $\mathfrak m\subset A$. It is easy to verify that $(A/\mathfrak m)\otimes A^m\simeq (A/\mathfrak m)\otimes A^n$ as vector spaces over $A/\mathfrak m$, of dimensions $m$ and $n$ respectively. Thus $m=n$.
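The dimension count can be written out in one line, using the standard isomorphism $(A/\mathfrak m)\otimes_A A^m\simeq (A/\mathfrak m)^m$:

```latex
% Both sides are finite-dimensional vector spaces over the residue field.
m \;=\; \dim_{A/\mathfrak m}\bigl((A/\mathfrak m)\otimes_A A^m\bigr)
  \;=\; \dim_{A/\mathfrak m}\bigl((A/\mathfrak m)\otimes_A A^n\bigr)
  \;=\; n.
```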
The above results, even Hilbert’s Nullstellensatz, have some “elementary aspect” at first glance; that is, one immediately believes there should be an elementary proof of them. In number theory one might come up with a simple statement like “every large even number can be written as a sum of two primes” and still have enormous difficulty trying to prove it. But Hilbert’s Nullstellensatz, for example, says something like: “if a system of polynomial equations has solution set $A$, then the polynomials that vanish on $A$ do not stray too far from the given system: they are exactly the polynomials some power of which lies in the ideal generated by the system.” This makes one believe there must be an elementary proof that can be found naturally with some observation and some effort at calculation.
I am looking for a perspective that makes proofs of the above flavor feel comfortable and natural. Should I start reading books on “sheaves” or “schemes” (sorry, I do not even know their definitions) to see some “geometric pictures” behind them? Or should I work through a lot of problem sets, down in the trenches with polynomials and abstract algebra, to become more familiar with them? This is slightly off-topic from the title, but: what is the correct way to learn commutative algebra?