So in the textbook they have the following theorem:
With the following proof:
What I don't understand is why they extend $\{\textbf{u}_1, \dots, \textbf{u}_r\}$ to an orthonormal basis. In fact, if you just define $$U = [\textbf{u}_1, \dots , \textbf{u}_r, \textbf{0}, \dots, \textbf{0}]$$
the result should be the same, shouldn't it? I.e. $U\Sigma = AV$.
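For concreteness, here is a quick numerical check of that claimed equality, using a small made-up $3 \times 2$ rank-$2$ matrix (the specific entries are just an illustrative choice):

```python
import numpy as np

# Hypothetical 3x2 matrix of rank 2; the entries are arbitrary.
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [0.0, 0.0]])

# Reduced ("thin") SVD: U_r is 3x2, s holds the r = 2 singular values.
U_r, s, Vt = np.linalg.svd(A, full_matrices=False)
V = Vt.T

# Pad U_r with a zero column (3x3) as suggested above,
# and pad Sigma with a zero row to make it 3x2.
U = np.hstack([U_r, np.zeros((3, 1))])
Sigma = np.vstack([np.diag(s), np.zeros((1, 2))])

# The equality U Sigma = A V does hold with this zero-padded U.
print(np.allclose(U @ Sigma, A @ V))  # True
```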


I have chosen to turn my brief comment into a more elaborate answer. If I understand the question correctly, we should really answer why it is interesting that there exists an *orthogonal* $U$, as opposed to just an arbitrary $U$, satisfying the equality $U \Sigma = AV$.
First of all, the $U$ suggested in the question is singular, as pointed out in the comments. That choice of $U$ would therefore make the theorem a lot more "one-sided": since $U$ is not invertible, you no longer have $U^{-1} = U^T$, and you cannot pass back and forth between $A = U \Sigma V^T$ and $\Sigma = U^T A V$ the way you can when $U$ is orthogonal. When both $U$ and $V$ are invertible, the theorem is more "symmetric" and easier to utilize.
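A small numerical illustration of this point; the matrix below is just a hypothetical rank-$2$ example, not the one from the textbook:

```python
import numpy as np

# Illustrative 3x2 matrix of rank 2.
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [0.0, 0.0]])

U, s, Vt = np.linalg.svd(A)              # full SVD: U is 3x3 orthogonal
Sigma = np.zeros((3, 2))
Sigma[:2, :2] = np.diag(s)

U_bad = U.copy()
U_bad[:, 2] = 0.0                        # zero out the extended column

print(np.linalg.matrix_rank(U_bad))               # 2: U_bad is singular
print(np.allclose(U_bad.T @ U_bad, np.eye(3)))    # False: not orthogonal
print(np.allclose(U.T @ U, np.eye(3)))            # True: U^{-1} = U^T
print(np.allclose(U.T @ A @ Vt.T, Sigma))         # True: Sigma = U^T A V
```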
Secondly, when both $U$ and $V$ are orthogonal, the singular value decomposition has a nice interpretation. By finding such a decomposition, we can better understand the map $x \mapsto Ax$. Since $A$ is not necessarily invertible, or even square, it can be difficult to get a feel for what multiplying by $A$ does. The singular value decomposition provides an answer. Since $A = U \Sigma V^T$, we can look at the maps $V^T$, $\Sigma$ and $U$ successively. Here it is really helpful that both $U$ and $V^T$ are orthogonal, because multiplication by an orthogonal matrix can be thought of as a rotation (possibly combined with a reflection). The multiplication by $\Sigma$ can be thought of as a scaling along the coordinate axes.
For an even more elaborate answer, please consult the Wikipedia article about SVDs (https://en.wikipedia.org/wiki/Singular_value_decomposition), which explains this interpretation well and provides some illustrative figures and animations.