Below is a proof from David Burton's introductory text on number theory that I was reading.

Here the proof takes $r$ to be the smallest integer in the set $S$ and then proceeds to prove that $0 \le r < b$. My understanding is that, using the two conditions $a = bq + r$ and $0 \le r < b$, we have to establish the uniqueness of $q$ and $r$. But in the proof, it is the assumption that $r$ is the least element of $S$ that gives us $0 \le r < b$. Why can we assume that such a least element of $S$ exists? Why should this hold true?
The idea behind the proof is that one subtracts $b$ from $a$ as many times as possible, subject to the condition that what remains is non-negative, i.e. lies in $\mathbf N$. The existence of a least element is not an extra assumption: a fundamental property of the natural numbers (the well-ordering principle) is that every non-empty subset has a smallest element, and the set $S$ of non-negative remainders is non-empty, so you're sure such an $r$ exists.
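The repeated-subtraction idea can be sketched as a short program; this is an illustration of the proof's intuition, not code from Burton's text (the function name `divide` is my own choice):

```python
def divide(a: int, b: int) -> tuple[int, int]:
    """Return (q, r) with a = b*q + r and 0 <= r < b, for a >= 0, b > 0.

    Repeatedly subtract b from a while the remainder stays non-negative.
    The loop halts exactly at r, the least element of the set
    S = {a - b*x : a - b*x >= 0} from the proof.
    """
    assert a >= 0 and b > 0
    q, r = 0, a
    while r >= b:   # r - b is still in S, so r was not yet minimal
        r -= b
        q += 1
    return q, r     # now 0 <= r < b: one more subtraction would go negative
```

For example, `divide(23, 7)` returns `(3, 2)`, since $23 = 7 \cdot 3 + 2$. The loop's termination mirrors the well-ordering principle: since the remainders form a strictly decreasing sequence of natural numbers, the process cannot continue forever and must stop at the smallest non-negative remainder.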