I've read about higher-order logics (i.e. those that build on first-order predicate logic) but am not too clear on their applications. While they are capable of expressing a greater range of statements (though never all true ones, by Gödel's incompleteness theorem), they are often said to be less "well-behaved".
Mathematicians generally seem to steer clear of such logics when possible, yet they are certainly necessary for proving some more complicated concepts/theorems, as I understand it. (For example, it seems the reals can only be characterized up to isomorphism using at least $2^{\text{nd}}$ order logic.) Why is this? What makes them less well-behaved or less useful with respect to logic/proof theory/other fields?
Unfortunately the term is ambiguous: there are two kinds of semantics for higher-order languages, and only one is problematic. Consider the language of second-order arithmetic, where there are quantifiers over both natural numbers and sets of natural numbers.
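To make the two quantifier sorts concrete, here is the second-order induction axiom (a standard example, not taken from the discussion above), where lowercase variables range over natural numbers and uppercase variables range over sets of natural numbers:

$$\forall X \left[ \left( X(0) \land \forall n \, \bigl( X(n) \to X(n+1) \bigr) \right) \to \forall n \, X(n) \right]$$

In first-order Peano arithmetic this must instead be stated as an axiom *scheme*, one instance per formula; the single second-order axiom quantifies over all sets at once, and the two semantics below disagree about exactly which sets that quantifier ranges over.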
First is what Quine called "set theory in sheep's clothing": here the quantifiers over sets of natural numbers are stipulated to range over *all* sets of numbers, i.e. the full powerset. It's the semantics we use when we prove that there can be only one complete totally ordered field. It isn't really a logic: it admits no complete proof system, so there is no notion of formal proof that captures exactly its valid sentences. Wikipedia calls this "standard semantics"; I'm not sure whether there is an authority for that name.
Then there's Henkin semantics, which uses a rule analogous to that of the lambda calculus to define a semantics for the second-order quantifiers. This can be seen as still being in the realm of first-order logic, in the sense that the second-order system can be translated into a first-order system while preserving provability. This is how second-order arithmetic is usually defined as a formal theory.
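One way to see the translation (my own sketch, assuming the usual axiomatization of second-order arithmetic, $Z_2$): treat the set variables as a second first-order sort, add a membership predicate $\in$ between the sorts, and say which sets must exist via a comprehension scheme,

$$\exists X \, \forall n \, \bigl( n \in X \leftrightarrow \varphi(n) \bigr)$$

for each formula $\varphi$ in which $X$ does not occur free. A Henkin (general) model is then just an ordinary first-order model of this two-sorted theory: the set sort may be interpreted by any collection of sets satisfying comprehension, not necessarily the full powerset. That is why the good first-order metatheorems (completeness, compactness, Löwenheim-Skolem) all apply.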
Every theorem of the system under Henkin semantics is also a "theorem" under the standard semantics, though not conversely.