Given a field $K$, let $U = K[X] \times \mathbb{N}$. Identify each $k\in K$ as $(X-k,1) \in U$, so $K \subseteq U$. Consider fields $(S,+,\cdot)$ where $K \subseteq S \subseteq U$, and the inclusion map $K \hookrightarrow S$ is a field homomorphism such that $S:K$ is algebraic. Partially order these fields by $(S_1,+_1,\cdot_1) \leq (S_2,+_2,\cdot_2)$ iff $S_1\subseteq S_2$ and $S_1 \hookrightarrow S_2$ is a field homomorphism. Zorn's lemma is not needed to show this is a partial order, I think.
For any chain $(S_i,+_i,\cdot_i)_{i\in I}$, an upper bound is given by $(S,+,\cdot)$ where $S=\bigcup_{i\in I} S_i$ and $+,\cdot$ are defined pairwise: for example, define $\alpha + \beta$ to be $\alpha +_i \beta$ whenever $\alpha,\beta\in S_i$. This is independent of the choice of $i$. I don't think Zorn's lemma is needed to show this is indeed an upper bound either...
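To spell out the independence claim (a routine check, stated here only for completeness): if $\alpha,\beta \in S_i \cap S_j$ with, say, $(S_i,+_i,\cdot_i) \leq (S_j,+_j,\cdot_j)$ in the chain, then the inclusion $\iota : S_i \hookrightarrow S_j$ is a field homomorphism that fixes elements, so

```latex
\alpha +_j \beta \;=\; \iota(\alpha) +_j \iota(\beta) \;=\; \iota(\alpha +_i \beta) \;=\; \alpha +_i \beta ,
```

and similarly for $\cdot$.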
Finally, apply Zorn's lemma to get a maximal element which is automatically an algebraic closure of $K\subseteq U$.
Question: What's wrong with this proof? On the several points above where I thought Zorn's lemma was not needed, have I been wrong, and why?
Although stated as "what's wrong with this proof?", the larger question seems to be how to fix it up, since rather than reaching a fallacious conclusion (as most "fake proofs" do), what we have here seems to be a bogus proof of a valid result (every field has an algebraic closure).
So I'm going to begin by pointing out a major problem with the proposed proof, and then explain how the difficulty is often circumvented. Although identifying the elements $k \in K$ of the base field with elements $(X-k,1)$ in $U = K[X] \times \mathbb{N}$ seems unobjectionable as a one-to-one mapping, it doesn't explain how some field extension $S:K$, even an algebraic one, might be similarly mapped into $U$ (or, in another hinted version, into the power set of $U$).
Indeed the natural ring structure on $K[X] \times \mathbb{N}$ does not even seem to help with realizing the inclusion of $K$ into $U$ as a ring homomorphism. For example, if $k_1,k_2 \in K$ are mapped respectively to $(X-k_1,1)$ and $(X-k_2,1) \in U$, there is no obvious rationale for why $(X-k_1,1)+(X-k_2,1)$ should equal $(X-(k_1+k_2),1) \in U$, much less why $(X-k_1,1)\cdot(X-k_2,1)$ should equal $(X-(k_1 k_2),1) \in U$.
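To make the mismatch concrete, suppose (a guess the proposed proof does not actually specify) that $U$ carried the componentwise operations of $K[X] \times \mathbb{N}$, with $\mathbb{N}$ as a semiring. Then

```latex
(X-k_1,\,1) + (X-k_2,\,1) \;=\; \bigl(2X-(k_1+k_2),\,2\bigr) \;\neq\; \bigl(X-(k_1+k_2),\,1\bigr),
```

and the product $(X-k_1,1)\cdot(X-k_2,1) = \bigl((X-k_1)(X-k_2),\,1\bigr)$ fares no better, its first coordinate having degree $2$.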
At this point the role of the second coordinate in $U$ seems largely inexplicable, except that some sort of bookkeeping might be necessary to distinguish roots of higher degree polynomials (as typical "algebraic over $K$" objects).
We do well to reflect on the purpose this machinery presumably serves, namely to put the various (algebraic) field extensions of $K$ inside some common set, so that a chain of inclusions may be unioned to form an upper bound of the chain (handing the conclusion, the existence of an algebraic closure, off to Zorn's lemma). Once the existence of said closure is available, the pursuit of Galois theory becomes easier, since any two algebraic extensions of $K$ may be treated as subfields of one "universal" extension. But we must resist the temptation to fall into circular logic, using an algebraic closure to justify the very operations that lead to the existence of an algebraic closure.
Standard treatments of Galois theory, such as Kaplansky's Fields and Rings, illuminate ways to proceed. In the present context it does seem promising to associate with each element of an algebraic extension $S$ of $K$ an irreducible (over $K$) polynomial, or perhaps the ideal generated by such an irreducible polynomial. After all, the beginning steps of Galois theory include essentially constructive demonstrations that the sum of two algebraic elements (likewise their product) is again algebraic.
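As a small illustration of that constructive flavor (a standard textbook example, not drawn from the proof above): over $K = \mathbb{Q}$, set $\gamma = \sqrt{2} + \sqrt{3}$. Then

```latex
\gamma^2 = 5 + 2\sqrt{6}, \qquad (\gamma^2 - 5)^2 = 24, \qquad \text{hence} \quad \gamma^4 - 10\gamma^2 + 1 = 0,
```

so $\gamma$ is exhibited as a root of $x^4 - 10x^2 + 1 \in \mathbb{Q}[x]$ without any appeal to an ambient algebraic closure.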