When working with high-dimensional vector spaces, a common way to measure the similarity of two vectors is to use the angle between them (their cosine similarity) as a measure of how near they are to each other. I'm trying to understand whether it's possible, or even makes sense, to work in the other direction; that is, to generate the vector that would be nearest to a given vector. Essentially, to iterate through all possible N-dimensional vectors in order of their cosine similarity to the given one.
For example, if you were to iterate over all length-4 strings over the alphabet a–z, such as:
aaaa
aaab
aaac
...
zzzz
then given the string aaab, its nearest neighbors could be enumerated without any further information. Would an analogous calculation be possible for vectors of arbitrary dimension? The only approach I can come up with would be to enumerate all possible N-dimensional vectors (over some discretization of the component values), compute the inner product of every vector with every other vector, and sort the results by cosine angle, but this wouldn't be computationally feasible. Is there some other way to do this calculation that, using just one given vector, returns its definitive set of nearest neighbors in the vector space?
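For concreteness, here is a sketch of the brute-force approach I mean, restricted to a single query vector. The discretization (steps of 0.25) and the function names are my own choices, just to keep the grid small enough to enumerate:

```python
import itertools
import math

def cosine_similarity(u, v):
    """Cosine of the angle between vectors u and v."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def neighbors_by_cosine(query, dims=4, steps=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Enumerate every vector on the discretized grid and sort the
    whole set by cosine similarity to the query, nearest first."""
    # Skip the all-zero vector: cosine similarity is undefined for it.
    candidates = [v for v in itertools.product(steps, repeat=dims)
                  if any(x != 0.0 for x in v)]
    return sorted(candidates,
                  key=lambda v: cosine_similarity(query, v),
                  reverse=True)

ordered = neighbors_by_cosine((0.0, 0.0, 0.0, 1.0))
```

Even at this coarse resolution the grid has 5^4 - 1 = 624 candidates, and the cost grows exponentially with N, which is exactly the infeasibility I'm asking about.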
In other words, just as you can iterate over all possible a–z strings as above, how would you do the same for an N-dimensional vector whose components range from 0.0 to 1.0, as in:
<0.0, 0.0, 0.0, 0.0>
... (all possible other 4-dim vectors in order by neighboring cosine similarity)
<1.0, 1.0, 1.0, 1.0>