I need to take 32-bit unsigned integers and scale them to 16-bit unsigned integers "evenly", so that $0 \mapsto 0$ and $\mathrm{0xFFFFFFFF} \mapsto \mathrm{0xFFFF}$. I also want to do this without using a built-in 64-bit unsigned integer type.
To give you an idea, going in the opposite direction, the formula for 16-bit to 32-bit scaling is:
```c
U16 y = 0x3290;              // example input
U32 x = ((U32)y << 16) + y;  // scale 16 -> 32; equivalent to y * 0x10001
```
To arrive at the formula in the title, I simply took two points on the line that is the graph of $f$, and calculated the slope and y-intercept, like in Alg 1.
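Spelling that out, with the two endpoint pairs $(0,\,0)$ and $(2^{32}-1,\ 2^{16}-1)$:
$$m = \frac{(2^{16}-1) - 0}{(2^{32}-1) - 0} = \frac{2^{16}-1}{2^{32}-1}, \qquad b = 0,$$
so the line is
$$f(x) = \frac{2^{16}-1}{2^{32}-1}\,x.$$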
Those are the rules. Good luck!
Note that $$\frac{2^{16}-1}{2^{32}-1}=\frac{1}{2^{16}+1}=2^{-16}\frac{1}{1+2^{-16}}=2^{-16}(1-2^{-16}+2^{-32}-\ldots).$$ So, rounding to the nearest integer, the following may be what you intend:
$$x\mapsto(x - (x \gg 16) + \textrm{0x8000u}) \gg 16$$
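As a sanity check, that map can be written as a small C function (a sketch; I'm assuming `U32`/`U16` are typedefs for the fixed-width types, as in the question's snippet, and the function name is mine):

```c
#include <stdint.h>

typedef uint32_t U32;
typedef uint16_t U16;

/* Round-to-nearest 32 -> 16 scaling using only 32-bit arithmetic.
   x*(2^16-1)/(2^32-1) ~= x*2^-16*(1 - 2^-16), i.e. (x - (x >> 16)) >> 16,
   with 0x8000 added before the final shift to round to nearest.
   No overflow: x - (x >> 16) + 0x8000 is at most 0xFFFF8000. */
U16 scale32to16(U32 x)
{
    return (U16)((x - (x >> 16) + 0x8000u) >> 16);
}
```

Both endpoints check out: `0` maps to `0` and `0xFFFFFFFF` maps to `0xFFFF`.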
That said, you're probably better off with the answer in the first comment: simply shift right. To see why, consider a related problem of mapping the real interval $[0,1)$ to integers in the range $[0,255]$ (for example, when converting an image to eight bits per pixel). The best way is not $$x \mapsto \lfloor 255.0\, x + 0.5\rfloor$$ but simply $$x\mapsto \lfloor 256.0\, x\rfloor.$$ The first method assigns only half as many inputs to the bins $0$ and $255$ as to the others. The second method fills all bins equally.