In logical systems (I am specifically in the context of the language of first-order logic) it is possible to extend a theory by adding definitions in order to compactify/simplify formulas and theorems. It can be proven that, under appropriate (intuitive) conditions, these extensions of the language are conservative, in the sense that they do not allow the deduction of any essentially new formulas in the original language.
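For concreteness, here is the simplest case of what I mean, sketched in standard notation (the symbols $P$ and $\varphi$ are just illustrative):

```latex
% Extending a theory T by a defined predicate symbol P:
% given a formula \varphi(x_1,\dots,x_n) in the original language,
% add the new symbol P together with the defining axiom
\forall x_1 \cdots \forall x_n \,
  \bigl( P(x_1,\dots,x_n) \leftrightarrow \varphi(x_1,\dots,x_n) \bigr)
% Conservativity: for every sentence \psi of the ORIGINAL language,
%   T + (defining axiom) \vdash \psi   implies   T \vdash \psi,
% since each occurrence of P can be unfolded into \varphi.
```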
I would like to find a modern reference that gives this proof in a natural deduction setting (in particular, avoiding model theory). Kleene's Introduction to Metamathematics (my copy is the 1974 printing) contains a proof that I believe is what I am looking for; however, the reference is rather old and the relevant material is scattered throughout the book. I am hoping to find another, ideally more modern, reference that is readable and presents the proof in one place.
Life is a little bit harder if you want to define functions. This is something that certainly ought to appear in textbooks, but it seems to get brushed aside. There is some relevant information in Mendelson's Introduction to Mathematical Logic (see "definition, possible" in the index) and in Boolos, Burgess and Jeffrey's Computability and Logic (see the discussion of the elimination of names and function symbols). If you only need to deal with functions that are uniquely determined by their arguments, then a syntactic proof of conservativity can be given, but it involves contextual transformations. Dealing with loosely defined functions requires semantic methods or some chicanery.
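To illustrate the uniquely determined case (my notation, not taken from either book): one introduces a function symbol from a provable existence-and-uniqueness claim, and the contextual transformations eliminate it again.

```latex
% Introducing a function symbol f from an existence-uniqueness theorem:
% suppose  T \vdash \forall x \,\exists! y\, \varphi(x, y).
% Extend the language with f and the defining axiom
\forall x \, \varphi(x, f(x))
% Contextual elimination: an atomic formula \psi(f(t)), where f(t) does
% not occur inside another f-term, is translated back into the original
% language as
\exists y \, \bigl( \varphi(t, y) \wedge \psi(y) \bigr)
% (equivalently \forall y\,(\varphi(t, y) \to \psi(y)), using uniqueness);
% iterating this removes every occurrence of f, which is the core of the
% syntactic conservativity proof.
```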
[Aside: my editions of the cited books are ancient, so I haven't given page or section references, as they would be out of date.]