I am working through problems in Hindley and Seldin. The β-reduction of this formula eludes me at a certain step, because I am having trouble understanding the ordering of function application. Yes, the applications associate to the left - but in what order should the applications be performed? Should, for instance, the most deeply nested applications be performed first?
(λxyz.xz(yz))((λxy.yx)u)((λxy.yx)v)w
The two argument redexes trivially reduce, yielding:
(λxyz.xz(yz))(λy.yu)(λy.yv)w
At this point, Hindley and Seldin note that three contractions get you to:
(λy.yu)w((λy.yv)w)
Now, I can't tell what sequence of contractions moves the w into the center, and Hindley and Seldin don't say. Could someone let me know whether, for instance, w should be substituted into (λy.yv) first, or whether (λy.yu) should be substituted into (λxyz.xz(yz)) first, and what the principle is behind the sequencing of substitutions? I don't think it has been elucidated yet in Hindley and Seldin (though perhaps I missed something in their exposition).
It's in Remark 1.29, right after that exercise. A term has as many possible β-contractions as it has redexes, so as soon as a term contains more than one redex, several reductions (= sequences of contractions) are possible, and all of them are legitimate.
In principle it doesn't matter which one you pick, in the sense that if a term can be reduced to two different terms, then these two terms can always be further reduced to one same term. This property is called confluence, for the lambda calculus also known as the Church-Rosser theorem, and essentially means that the end result of a function application does not depend on the calculation path.
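As a small illustration (my own example, not from the book): the term $(\lambda x.xx)((\lambda y.y)z)$ contains two redexes, and contracting either one first leads to the same normal form $zz$. Contracting the outer (leftmost) redex first:

$\phantom{\triangleright_\beta\ }(\lambda x.xx)((\lambda y.y)z)\\ \triangleright_\beta ((\lambda y.y)z)((\lambda y.y)z)\\ \triangleright_\beta z((\lambda y.y)z)\\ \triangleright_\beta zz$

Contracting the inner redex first:

$\phantom{\triangleright_\beta\ }(\lambda x.xx)((\lambda y.y)z)\\ \triangleright_\beta (\lambda x.xx)z\\ \triangleright_\beta zz$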

However, while different reductions will never yield different end results in the sense of different normal forms (= terms which can be reduced no further), it may happen that some reductions get stuck in an infinite loop and never reach a normal form at all, while other reductions of the same starting term succeed. The quasi-leftmost reduction theorem states that if a normal form can be reached at all, then any reduction that contracts the leftmost redex (the one whose lambda is furthest to the left) at least every few steps will reach it. In particular, a strictly-leftmost reduction, in which the leftmost redex is contracted at every step, will also work.
So if you want to be on the safe side, simply always choose the leftmost redex for the next contraction step.
In the present example, a strictly-leftmost reduction proceeds as follows (in each line, the redex to undergo contraction in the next step is underlined, and the term obtained from contraction in the previous step is overlined):
$\newcommand{\bred}{\triangleright_\beta} \phantom{\bred\ } \underline{(\lambda xyz.xz(yz))(\lambda y.yu)}(\lambda y.yv)w\\ \bred \underline{\overline{(\lambda yz.(\lambda y.yu)z(yz))}(\lambda y.yv)}w\\ \bred \underline{\overline{(\lambda z.(\lambda y.yu)z((\lambda y.yv)z))}w}\\ \bred \overline{\underline{(\lambda y.yu)w}((\lambda y.yv)w)}\\ \bred \overline{wu}\underline{((\lambda y.yv)w)}\\ \bred wu\overline{(wv)} $
But any other reduction strategy will get you there as well.
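If you want to check such reductions mechanically, here is a minimal sketch of a leftmost-outermost (normal-order) reducer in Python. All names here (`Var`, `Lam`, `App`, `step`, `normalize`, ...) are invented for illustration, not from any library:

```python
from dataclasses import dataclass

# Term representation: variables, abstractions, applications.
@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    param: str
    body: object

@dataclass(frozen=True)
class App:
    fn: object
    arg: object

def free_vars(t):
    if isinstance(t, Var):
        return {t.name}
    if isinstance(t, Lam):
        return free_vars(t.body) - {t.param}
    return free_vars(t.fn) | free_vars(t.arg)

_fresh = [0]
def fresh(name):
    _fresh[0] += 1
    return f"{name}_{_fresh[0]}"

def subst(t, x, s):
    """Capture-avoiding substitution t[x := s]."""
    if isinstance(t, Var):
        return s if t.name == x else t
    if isinstance(t, App):
        return App(subst(t.fn, x, s), subst(t.arg, x, s))
    if t.param == x:                 # x is shadowed; stop here
        return t
    if t.param in free_vars(s):      # rename bound variable to avoid capture
        p = fresh(t.param)
        return Lam(p, subst(subst(t.body, t.param, Var(p)), x, s))
    return Lam(t.param, subst(t.body, x, s))

def step(t):
    """One leftmost-outermost contraction; None if t is in normal form."""
    if isinstance(t, App):
        if isinstance(t.fn, Lam):    # this is the leftmost redex
            return subst(t.fn.body, t.fn.param, t.arg)
        r = step(t.fn)
        if r is not None:
            return App(r, t.arg)
        r = step(t.arg)
        if r is not None:
            return App(t.fn, r)
    elif isinstance(t, Lam):
        r = step(t.body)
        if r is not None:
            return Lam(t.param, r)
    return None

def normalize(t, limit=1000):
    for _ in range(limit):
        r = step(t)
        if r is None:
            return t
        t = r
    raise RuntimeError("no normal form within step limit")

def show(t):
    if isinstance(t, Var):
        return t.name
    if isinstance(t, Lam):
        return f"(λ{t.param}.{show(t.body)})"
    return f"({show(t.fn)} {show(t.arg)})"

def flip():
    # (λxy.yx), built afresh each time so the terms stay independent
    return Lam("x", Lam("y", App(Var("y"), Var("x"))))

# The original term: (λxyz.xz(yz)) ((λxy.yx)u) ((λxy.yx)v) w
S = Lam("x", Lam("y", Lam("z",
        App(App(Var("x"), Var("z")), App(Var("y"), Var("z"))))))
term = App(App(App(S, App(flip(), Var("u"))),
               App(flip(), Var("v"))),
           Var("w"))

print(show(normalize(term)))   # ((w u) (w v))
```

Running it contracts the leftmost redex at each step, exactly as in the trace above, and prints the normal form `((w u) (w v))`, i.e. wu(wv).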