When I think of normalizing a vector I mean dividing each element by the magnitude (Euclidean norm) of the whole vector, i.e. \begin{align} a &= (2, 4, 3, 1) \\ \hat{a} &= \frac{(2, 4, 3, 1)}{\sqrt{30}} \approx (0.37, 0.73, 0.55, 0.18) \end{align} However, I'm reading a programming book (Java) and it says:
"You have a sequence of real numbers and want to return a new normalized sequence whose sum is equal to one. This can be done by dividing each number in the sequence by the total sum of the sequence. For example, $(2, 4, 3, 1)$ should return $(0.2, 0.4, 0.3, 0.1)$ since the sum is $2 + 4 + 3 + 1 = 10$."
But why divide by $10$ and not by the magnitude?
What have I missed?
Is there a different terminology for "normalization" in mathematics and computer science, respectively?
Or is there actually a fundamental difference between a vector, an array and a sequence?
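For reference, here is how I read the book's procedure in Java (a minimal sketch; the class and method names are my own):

```java
import java.util.Arrays;

public class SumNormalize {
    // The book's version: divide each element by the total sum,
    // so the resulting sequence sums to 1.
    static double[] bySum(double[] x) {
        double sum = 0;
        for (double v : x) sum += v;
        double[] out = new double[x.length];
        for (int i = 0; i < x.length; i++) out[i] = x[i] / sum;
        return out;
    }

    public static void main(String[] args) {
        // prints [0.2, 0.4, 0.3, 0.1]
        System.out.println(Arrays.toString(bySum(new double[]{2, 4, 3, 1})));
    }
}
```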
"Is there a different terminology for "normalization" in mathematics and computer science, respectively?"
Yes, absolutely. What "normalization" actually means can vary from problem to problem. By normalize we mean that the resulting normalized vector $\tilde{x}$ should have a norm of $1$, that is, $\lVert \tilde{x} \rVert = 1$. In general this is achieved through $$ \tilde{x} = x / \lVert x \rVert. $$ However, what the norm $\lVert x \rVert$ computes is defined (or chosen) by the domain you are working in. A norm is just a mapping to the nonnegative reals that in addition satisfies some rules; see the definition of a norm.

In physics the by far most common norm is the Euclidean norm $$ \lVert x \rVert_2 = \sqrt{\sum_{i=1}^n x_i^2}, $$ so that normalizing with respect to the Euclidean norm refers to the operation $$ \tilde{x} = x \Big/ \sqrt{\sum_{i=1}^n x_i^2}. $$

In computer science the Manhattan norm $$ \lVert x \rVert_1 = \sum_{i=1}^n \lvert x_i \rvert $$ is also commonly used, and I suspect this is the norm your book has in mind: for nonnegative entries, as in your example, it is simply the sum of the elements, which is why dividing by $10$ yields a sequence whose entries sum to $1$. Since the act of normalization depends on the norm, the norm should be defined explicitly or be clear from context.
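To illustrate (a sketch of my own, not taken from your book), the normalization routine can be written once with the norm passed in as a parameter, which makes explicit that the only difference between the two results is the choice of norm:

```java
import java.util.Arrays;
import java.util.function.ToDoubleFunction;

public class NormDemo {
    // Normalize x with respect to an arbitrary norm: x~ = x / ||x||.
    static double[] normalize(double[] x, ToDoubleFunction<double[]> norm) {
        double n = norm.applyAsDouble(x);
        double[] out = new double[x.length];
        for (int i = 0; i < x.length; i++) out[i] = x[i] / n;
        return out;
    }

    // Manhattan (l1) norm: sum of absolute values.
    static double l1(double[] x) {
        double s = 0;
        for (double v : x) s += Math.abs(v);
        return s;
    }

    // Euclidean (l2) norm: square root of the sum of squares.
    static double l2(double[] x) {
        double s = 0;
        for (double v : x) s += v * v;
        return Math.sqrt(s);
    }

    public static void main(String[] args) {
        double[] x = {2, 4, 3, 1};
        // l1 normalization, your book's version: (0.2, 0.4, 0.3, 0.1)
        System.out.println(Arrays.toString(normalize(x, NormDemo::l1)));
        // l2 normalization, your version: approx (0.37, 0.73, 0.55, 0.18)
        System.out.println(Arrays.toString(normalize(x, NormDemo::l2)));
    }
}
```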