This is a question about the book "The Mathematics of Poker" by Bill Chen and Jerrod Ankenman. The author explains how to calculate the standard deviation per 100 hands from a sample of 16,900 hands drawn from a distribution with a mean win rate of 1.15 BB/100h and a standard deviation of 2.1 BB/h (note that this is 2.1 BB per *hand*, not per 100 hands). I follow the step where he calculates the standard deviation of the 16,900-hand sample by multiplying the standard deviation of the underlying per-hand distribution by the square root of 16,900. However, I don't understand the final step, where he obtains the standard deviation per 100 hands by dividing by 169; I would have expected a division by the square root of 169.
What important principle am I missing here? And how can the standard deviation per hand be 2.1 BB while the standard deviation per 100 hands is smaller, at 1.61 BB?
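For reference, here is the computation as I understand it from the book (numbers reconstructed from the figures above):

$$\sigma_{\text{sample}} = 2.1\,\text{BB} \times \sqrt{16900} = 2.1 \times 130 = 273\,\text{BB},$$
$$\sigma_{100\text{h}} = \frac{273}{169} \approx 1.61\,\text{BB}/100\text{h},$$

whereas I would have expected $273/\sqrt{169} = 273/13 = 21\,\text{BB}$.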

My guess is that the confusion comes from the units involved (big blinds per hand versus big blinds per 100 hands), but I may be wrong about that.
The real message here is that the average gain in a single hand ($1.15/100 \approx 0.0115$ BB) is tiny compared to the uncertainty in a single hand ($2.1$ BB), so much so that you cannot be confident that the total gain of $194$ big blinds over $16900$ actual hands shows that the expected gain per hand is positive.
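As a quick sanity check (my own sketch, not from the book; I assume a normal per-hand distribution purely for illustration), a Monte Carlo simulation reproduces the $1.61$ figure: the spread of the *measured* win rate in BB/100h across many 16,900-hand samples matches $2.1\sqrt{16900}/169 \approx 1.615$, i.e. the standard error of the estimated win rate, not the standard deviation of a single 100-hand block.

```python
import numpy as np

# Assumed parameters, taken from the figures quoted in the question.
n_hands = 16_900
mean_per_hand = 1.15 / 100   # 1.15 BB/100h expressed per hand
sd_per_hand = 2.1            # BB per hand

rng = np.random.default_rng(0)

# For each trial, simulate one 16,900-hand sample and record its total gain.
totals = np.array([
    rng.normal(mean_per_hand, sd_per_hand, n_hands).sum()
    for _ in range(2000)
])

# Express each trial's result as a win rate in BB per 100 hands.
win_rate_per_100 = totals / (n_hands / 100)

print(win_rate_per_100.mean())  # close to the true 1.15 BB/100h
print(win_rate_per_100.std())   # close to 2.1 * sqrt(16900) / 169, about 1.615
```

The simulated standard deviation of the measured win rate lands near $1.615$, so a true win rate of $0$ is well within one standard error of the observed $1.15$ BB/100h, which is the point of the passage.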