I found the following statement in my old notes: to prove that a function is $\mu$-recursive, we never need to use the $\mu$-operator more than once. Why is this true (if it is even true)? After all, if the $f$ below is itself defined using the $\mu$-operator, then defining $g$ would seem to use the $\mu$-operator more than once.
> If $f:N^{k+1}\to N$ is $\mu$-recursive, then so is $g:N^k\to N$ defined by $g(x_1,\dots,x_k)=\mu y.\,(f(x_1,\dots,x_k,y)=0)$.
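For intuition, the single application of $\mu$ in the statement above is just an unbounded search for the least zero. A minimal Python sketch (the names `mu`, `f`, and `g` are illustrative, not from any library):

```python
def mu(p):
    """Return the least natural number y with p(y) true.

    This is the mu-operator as unbounded search: it diverges
    (loops forever) if no such y exists, which is exactly why
    mu-recursive functions are in general partial.
    """
    y = 0
    while not p(y):
        y += 1
    return y

# A sample (k+1)-ary function f; here k = 1.
# f(x, y) = 0 exactly when y*y >= x.
def f(x, y):
    return 0 if y * y >= x else 1

# g(x) = mu y.(f(x, y) = 0): the least y whose square reaches x,
# i.e. the ceiling of the square root of x.
def g(x):
    return mu(lambda y: f(x, y) == 0)
```

For instance, `g(10)` and `g(16)` both return `4`, while `g(0)` returns `0`.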
The most natural way to prove this, in my opinion, is via the "syntactic" approach to computability. Specifically, use the following two results:

Fact $1$: Every primitive recursive function is $\mu$-recursive via a definition which never applies the $\mu$-operator at all. (Interestingly, the analogous characterization of the primitive recursive functions in terms of $\Delta^0_0$ definability fails, since we can diagonalize out of the $\Delta^0_0$ functions in a primitive recursive way.)

Fact $2$: For every $\mu$-recursive function $f$ there is a definition $\psi$ of $f$ together with a primitive recursive "computation-checking" function $t_\psi$ such that $f(x_1,\dots,x_k)=y$ iff $t_\psi(x_1,\dots,x_k,s,y)=0$ for some $s$. (Intuitively, $s$ codes a halting computation of $\psi$ on input $x_1,\dots,x_k$ with output $y$; this is in essence Kleene's normal form theorem.)
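The shape of Fact $2$, namely that $f(x)=y$ exactly when some witness $s$ makes a primitive recursive check come out $0$, can be seen in a toy example. In this hedged sketch all names (`t`, `is_prime`) are illustrative, and $s$ plays the role of a simple size bound rather than a genuine computation code; the point is that `t` uses only bounded loops:

```python
def is_prime(n):
    # Bounded trial division: the loop runs at most n times,
    # so this check is "primitive recursive" in flavor
    # (no unbounded search anywhere).
    if n < 2:
        return False
    for d in range(2, n):
        if n % d == 0:
            return False
    return True

def t(x, s, y):
    """Checker in the shape of Fact 2 for the toy function
    f(x) = least prime >= max(x, 2).

    Returns 0 iff y is that least prime and the witness bound s
    is at least y. Every loop here is bounded, mirroring the
    primitive recursiveness of t_psi.
    """
    lo = max(x, 2)
    ok = (y <= s and y >= lo and is_prime(y)
          and all(not is_prime(z) for z in range(lo, y)))
    return 0 if ok else 1
```

Here $f(x)=y$ holds iff $t(x,s,y)=0$ for some $s$ (taking $s=y$ always works), e.g. `t(10, 11, 11) == 0` while `t(10, 5, 11) == 1` because the witness bound is too small.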
Now suppose $f$ is $k$-ary and $\mu$-recursive. Let $\psi$ and $t_\psi$ be as guaranteed by Fact $2$. Let $\pi_0,\pi_1$ be the left and right projections corresponding to the Cantor pairing function, so that $a=\langle \pi_0(a),\pi_1(a)\rangle$ for each $a$, and consider the map $$g(x_1,\dots,x_k)=\mu a.\,(t_\psi(x_1,\dots,x_k,\pi_0(a),\pi_1(a))=0)$$ (which per Fact $1$ uses only one instance of $\mu$, since $t_\psi$ is primitive recursive). Whenever $t_\psi(x_1,\dots,x_k,s,y)=0$ we must have $y=f(x_1,\dots,x_k)$, so $f(x_1,\dots,x_k)=\pi_1(g(x_1,\dots,x_k))$; and since $\pi_1$ is primitive recursive, this definition of $f$ involves only the single instance of $\mu$ inside $g$.
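The pairing trick above can be run concretely. In the following hedged Python sketch, the toy function is $f(x)=x!$, and `t_psi` stands in for the computation checker: it runs the obvious factorial loop for exactly $s$ steps and accepts iff the loop has halted with output $y$. All names are illustrative, and only bounded loops occur inside `t_psi`:

```python
import math

def cantor_pair(s, y):
    # a = <s, y> under the Cantor pairing function
    return (s + y) * (s + y + 1) // 2 + y

def pi0(a):
    # left projection: a = <pi0(a), pi1(a)>
    w = (math.isqrt(8 * a + 1) - 1) // 2
    return w - (a - w * (w + 1) // 2)

def pi1(a):
    # right projection
    w = (math.isqrt(8 * a + 1) - 1) // 2
    return a - w * (w + 1) // 2

def t_psi(x, s, y):
    """Toy computation checker for f(x) = x!.

    Runs the factorial loop for exactly s steps and returns 0 iff
    the loop has halted (i == x) with output y. Only bounded loops
    are used, mirroring the primitive recursiveness of t_psi.
    """
    i, acc = 0, 1
    for _ in range(s):
        if i < x:
            i, acc = i + 1, acc * (i + 1)
    return 0 if (i == x and acc == y) else 1

def g(*xs):
    # The single application of mu: search for the least code a
    # whose two halves (witness, value) pass the checker.
    a = 0
    while t_psi(*xs, pi0(a), pi1(a)) != 0:
        a += 1
    return a

def f_recovered(x):
    # f = pi1 composed with g: no further use of mu is needed.
    return pi1(g(x))
```

For example `f_recovered(3)` returns `6` and `f_recovered(4)` returns `24`, each computed with exactly one unbounded search.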
Of course, in order to do this you have to prove Facts $1$ and $2$ in the first place, and that is nontrivial. However, it's absolutely worth doing: the value of the syntactic approach to computability cannot be overstated. In particular, the converse of Fact $2$, which lets us pass from a definition of this syntactic shape back to computability, is used all the time in logic.