Are constant symbols really necessary in first-order theories? And I am not talking about replacing them with unary predicates like here, but instead I would like to remove any sign of their existence from the language and only hint to their existence using the axioms.
Take for example the Peano axioms. Is the zero symbol $0$ really necessary? Assume $\varphi(x)$ is a unary predicate and $\varphi(0)$ is an axiom of PA containing the symbol for zero. Couldn't I just replace it by the following axiom?
$$(\exists x)\varphi(x) \quad\wedge\quad (\forall x \forall y)[\varphi(x)\wedge\varphi(y)\rightarrow x=y].$$
The first part states that there is an individual satisfying this predicate, the second part states that it is unique. I can think of ways to generalize this to several and even countably infinitely many constant symbols.
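For instance (a sketch of what I have in mind), two constant symbols $c_1, c_2$ occurring in an axiom $\psi(c_1, c_2)$ could be handled by a two-variable version of the same trick:
$$(\exists x \exists y)\,\psi(x,y) \quad\wedge\quad (\forall x \forall y \forall x' \forall y')[\psi(x,y)\wedge\psi(x',y')\rightarrow x=x'\wedge y=y'].$$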
If it is about simplicity, then why doesn't ZFC use a constant symbol for the empty set? Just for historical reasons?
The trick you outlined in the question has some drawbacks, as the example of the natural numbers already shows. Consider the two formulas $$\varphi_1(z) \equiv \forall x\ s(x) \ne z$$ $$\varphi_2(z) \equiv \forall x\ x+z = x.$$ We know that $\{\varphi_1(0),\varphi_2(0)\}$ is a sub-theory of PA. If you try to replace these two axioms in the way you described, you get the theory with axioms $$\exists z\ [\varphi_1(z) \land \forall y\ (\varphi_1(y) \rightarrow y=z)]$$ $$\exists z\ [\varphi_2(z) \land \forall y\ (\varphi_2(y) \rightarrow y=z)],$$ which is not equivalent to the previous one: this theory has models in which two distinct elements satisfy the two axioms separately.
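To make this concrete, here is a small countermodel of my own construction (so check the details): take the domain $\{a,b\}$ and interpret the symbols by $$s(a)=s(b)=b, \qquad a+b=a,\quad b+b=b,\quad a+a=b,\quad b+a=b.$$ Then $a$ is the unique element not in the range of $s$, so it is the unique element satisfying $\varphi_1$, while $b$ is the unique element satisfying $\varphi_2$. Both replacement axioms therefore hold, yet no single element plays the role of $0$ for both properties.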
You could solve this problem by taking the single axiom $$\exists z\ [\varphi_1(z) \land \varphi_2(z) \land \dots],$$ but this doesn't help when you have infinitely many formulas $\varphi_i(z)$, since an infinite conjunction is not a first-order formula.
While I don't have a proof at the moment, I'm afraid you cannot eliminate constants from the language unless you add unary predicate symbols, with suitable axioms, that allow you to obtain an equivalent theory.
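For reference, the predicate-based elimination alluded to in your question works roughly like this: introduce a unary predicate $Z$ with the axiom $$\exists z\,[Z(z) \land \forall y\,(Z(y)\rightarrow y=z)]$$ and replace each axiom $\varphi(0)$ by $$\forall z\,(Z(z)\rightarrow \varphi(z)).$$ Since the witness of $Z$ is pinned down once and for all, infinitely many axioms about $0$ all refer to the same element, which is exactly what the scheme in your question fails to guarantee.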
On the other hand, constants are incredibly useful because they allow one to build Henkin theories: theories $T$ such that for each formula $\varphi(x)$, if $T \vdash \exists x\ \varphi(x)$ then there is a constant $c$ such that $T \vdash \varphi(c)$.
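Concretely, the standard construction turns any theory into a Henkin theory by adding, for each formula $\varphi(x)$, a fresh constant $c_\varphi$ together with the witness axiom $$\exists x\ \varphi(x) \rightarrow \varphi(c_\varphi),$$ and iterating this process so that the formulas of the enlarged language get witnesses too.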
These theories are the key instrument for proving the completeness theorem: indeed, it is easy to show that for a consistent Henkin theory the term model, i.e. the structure built out of the terms quotiented by provable equality, is actually a model of the theory.
But that's not all: lots of constructions in model theory make use of constants. For instance, for every structure $M$ in a given language $L$, it is possible to define the elementary diagram, a theory in a language $L'$ obtained from $L$ by adding a constant for each element of $M$. The models of this theory are essentially the same as the elementary extensions of $M$.
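Explicitly, writing $c_a$ for the new constant naming the element $a \in M$, the elementary diagram is $$\operatorname{Diag}_{\mathrm{el}}(M) = \{\varphi(c_{a_1},\dots,c_{a_n}) : \varphi \text{ an } L\text{-formula},\ M \models \varphi(a_1,\dots,a_n)\}.$$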
There are other applications of constants in logic, but I hope these examples give you an idea of why constants matter.