Looking to Better Understand Conversions to Unit Vectors via dividing by magnitude


I understand that in order to scale a vector to its unit vector, we divide the vector by its magnitude (which is performing scalar multiplication to rescale the vector to having a magnitude of 1). I get that the algebra will work out, but it still feels non-obvious to me why this operation results in a unit vector each time. Can someone provide me an intuitive explanation that will help make things 'click' for why dividing by the magnitude results in a unit vector?

Thank you very much in advance.

Intuitively, the conversion

$$\vec v \longrightarrow \frac{1}{\|\vec v\|} \vec v$$

gives a unit vector because you're rescaling it by a factor precisely equal to the reciprocal of that vector's norm. (For instance, you might take a vector of length $2$ and shrink it by a factor of $1/2$; take a vector of length $1/3$ and triple its length; and so on. In each case the result has length $1$.)

Algebraically, then, for a (nonzero) vector $\vec v$ with norm $\|\vec v\|$, we have

$$\left\| \frac{1}{\|\vec v\|} \vec v \right\| = \frac{1}{\|\vec v\|} \|\vec v\| = 1$$

i.e. the magnitude of the new vector is $1$.

(This uses the property that $\|\alpha \vec v\| = |\alpha| \cdot \|\vec v\|$ for any scalar $\alpha$; here $\alpha = 1/\|\vec v\|$ is a positive scalar, since norms of nonzero vectors are positive, so we can drop the absolute value.)
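To see the same computation numerically, here is a quick sketch using NumPy (the vector $(3, 4)$ is just an illustrative choice, not from the original answer):

```python
import numpy as np

v = np.array([3.0, 4.0])       # a vector of length 5
norm = np.linalg.norm(v)       # ‖v‖ = 5.0
u = v / norm                   # scale v by 1/‖v‖

print(norm)                    # 5.0
print(u)                       # [0.6 0.8]
print(np.linalg.norm(u))       # ≈ 1.0, as expected
```

Dividing every component by $\|\vec v\|$ leaves the direction unchanged and rescales the length to $1$, up to floating-point rounding.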