Why does $ROUNDUP(x, pow2) = ((x + (pow2 - 1)) \ \&$ ~$(pow2 - 1))$ round up to the next multiple of pow2 (where pow2 must be a power of two)? For example, if x = 13 and pow2 = $2^4$ = 16, then the function returns 16.
In binary, $2^m$ is represented as 00...00100...00: a single 1 followed by m 0's.
Subtract 1 from that and you get 00...000111...11 with m 1's at the end.
The binary negation ~$(2^m-1)$ is then: 111...111000...00 with m 0's at the end.
So these two quantities $(2^m-1)$ and ~$(2^m-1)$ are bit masks for "everything below bit m" and "everything at or above bit m" respectively. Let's call them L and G.
So now consider the case where $0 < x < 2^m$. For example, take m = 3 and x = 5 = 000..0101; then x + L = 000..0101 + 000..0111 = 000..1100. Since $2^m \le x + L < 2^{m+1}$, bit m is set and every higher bit is 0. You then bit-and that with G, which clears the low m bits, and you get exactly $2^m$.
If $x > 2^m$ then you can split it as $x = k * 2^m + y$ with $0 \le y < 2^m$ (if y = 0, x is already a multiple of $2^m$ and passes through unchanged). The previous logic works for the y part, and the bit-and with G preserves the $(k+1) * 2^m$ that results.
Could anyone elaborate on how it ends up with $(k+1) * 2^m$?