What is the difference between integer-valued and real-valued vectors, in both the mathematical and the programming senses?
For example, since certain binary operations on vectors, such as an `angleBetween()` function, return a real, does that mean that the type of integer-valued vectors is somehow "smaller" than the type of real-valued vectors? I'm looking for a formal discussion of how these types differ.
I see that the programming language R (or S) defines an integer vector type.
From a programming perspective, the difference between vectors of integers and vectors of floating point numbers is very much the same as the difference between integers and floating point numbers.
Integers are very easy for computers to handle; numbers involving decimals less so. If you were to dig under the hood of your favorite programming language, you'd find that even the most basic operations (addition, subtraction, multiplication, etc.) are handled differently for the two types, and they are even stored differently in memory: integers are typically kept in two's-complement form, while floating-point numbers follow the IEEE 754 format.
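A quick sketch in Python of both points above: integer arithmetic is exact, floating-point arithmetic rounds, and the two types have entirely different byte-level representations. (The `struct` format codes `>q` and `>d` here are just the standard-library names for a big-endian 64-bit integer and a 64-bit IEEE 754 double.)

```python
import struct

# Integer arithmetic is exact (Python ints have arbitrary precision):
a = 10**20 + 1 - 10**20
print(a)  # 1

# Floating-point arithmetic rounds: the +1.0 is lost below the
# precision of a 64-bit double at magnitude 1e20.
b = 1e20 + 1.0 - 1e20
print(b)  # 0.0

# The same mathematical value 1 is stored as completely different bytes:
print(struct.pack('>q', 1))    # two's-complement 64-bit integer
print(struct.pack('>d', 1.0))  # IEEE 754 double
```

The differing byte patterns are why the hardware needs separate instructions (and the language separate code paths) for integer and floating-point arithmetic.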
In a theoretical context, there is still a massive amount of difference between vectors of integers and vectors of reals. For instance, vectors of integers don't form a vector space over $\mathbb{R}$ (or $\mathbb{C}$), because they are not closed under scalar multiplication by real (or complex) numbers. Further, there is a very real sense in which there are "fewer" vectors of integers: even though both $\mathbb{Z}$ and $\mathbb{R}$ are infinite, $\mathbb{Z}$ is countably infinite whereas $\mathbb{R}$ is uncountably infinite. The same can be said of $\mathbb{Z}^n$ vs. $\mathbb{R}^n$.
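The closure failure is easy to see concretely: pick any integer vector with an odd coordinate and scale it by the real number $0.5$. A minimal plain-Python sketch (the tuple `v` is just an arbitrary example vector in $\mathbb{Z}^3$):

```python
v = (2, 3, 5)                       # a vector in Z^3
scaled = tuple(0.5 * x for x in v)  # scalar multiplication by 0.5 in R

print(scaled)  # (1.0, 1.5, 2.5) -- 1.5 and 2.5 are not integers

# So 0.5 * v lands outside Z^3: Z^3 is not closed under R-scaling,
# hence it is not a vector space over R (it is a Z-module instead).
print(all(x == int(x) for x in scaled))  # False
```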