The context of this question is the projection perspective of principal component analysis. Suppose we have orthonormal basis vectors $b_1, \dots, b_M$ of the principal subspace $U \subseteq \mathbb{R}^D$ with $\dim(U) = M$, and we project a data point $x_n \in \mathbb{R}^D$ onto $U$. The projection is denoted by $\tilde{x}_n$ and is chosen so that the Euclidean distance between the two points is minimal.
We can represent $\tilde{x}_n$ as a linear combination of the columns of $B = [b_1, \dots, b_M]$: $$\tilde{x}_n = \sum_{m=1}^M z_{mn}b_m.$$ Suppose we know that $z_{in} = {x_n}^\intercal b_i = {b_i}^\intercal x_n$, so we can write: $$\tilde{x}_n = \sum_{m=1}^M z_{mn}b_m = \sum_{m=1}^M ({x_n}^\intercal b_m)b_m.$$
My book says that exploiting the symmetry of the dot product we can write: $$\tilde{x}_n = (\sum_{m=1}^M b_m {b_m}^\intercal)x_n$$
But I don't understand how. How can we take $x_n$ outside of the summation like this?
Since the dot product is symmetric, the scalar $x_n^\top b_m$ equals $b_m^\top x_n$, and by associativity of matrix multiplication $b_m(b_m^\top x_n) = (b_m b_m^\top)x_n$, where $b_m b_m^\top$ is a rank-one $D \times D$ matrix. Distributivity then lets us factor $x_n$ out of the sum on the right: $$ \sum_{m=1}^M (x_n^\top b_m)b_m = \sum_{m=1}^M b_m(b_m^\top x_n) = \sum_{m=1}^M (b_mb_m^\top) x_n = \left(\sum_{m=1}^M b_mb_m^\top\right) x_n. $$
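A quick numerical sanity check of this identity may help. The sketch below (assumed names `D`, `M`, `B`, `x` are mine, not from the question) builds an orthonormal basis via a QR decomposition, then verifies that summing the scalars $(x_n^\top b_m)$ times $b_m$ gives the same vector as applying the matrix $\sum_m b_m b_m^\top$ to $x_n$:

```python
import numpy as np

# Check: sum_m (x^T b_m) b_m == (sum_m b_m b_m^T) x
rng = np.random.default_rng(0)
D, M = 5, 3  # ambient dimension and subspace dimension (illustrative values)

# Orthonormal columns b_1, ..., b_M via QR of a random D x M matrix
B, _ = np.linalg.qr(rng.standard_normal((D, M)))
x = rng.standard_normal(D)

# Left-hand side: scalars (x^T b_m) weighting the vectors b_m
lhs = sum((x @ B[:, m]) * B[:, m] for m in range(M))

# Right-hand side: rank-one outer products summed into a projection matrix
P = sum(np.outer(B[:, m], B[:, m]) for m in range(M))  # equals B @ B.T
rhs = P @ x

assert np.allclose(lhs, rhs)
```

Note that $\sum_m b_m b_m^\top = BB^\top$ is exactly the orthogonal projection matrix onto $U$, which is why the book writes the projection in this factored form.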