I'm not particularly well-educated in mathematics. I've been reading about the 'curse of dimensionality' and how distance metrics become meaningless in high-dimensional spaces.
I then thought: what if you treat each dimension of a high-dimensional vector as a 1D scalar and, for each dimension, find the vector whose value in that dimension is closest? You record the index of this 'single-dimension nearest neighbor' in a new vector and repeat the process for every dimension. In the end, you define the nearest neighbor as the vector whose index appears most often in that list of single-dimension nearest neighbors.
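To make the idea concrete, here is a minimal sketch of the procedure as I understand it (the function name `dimensionwise_nn` is just for illustration): for each dimension, pick the data vector whose coordinate is closest to the query's, then take a plurality vote over those per-dimension winners.

```python
import numpy as np

def dimensionwise_nn(query, data):
    """Per-dimension nearest-neighbor vote.

    query: shape (d,) vector; data: shape (n, d) array of candidates.
    For each dimension j, find the index of the candidate whose j-th
    coordinate is closest to query[j], then return the index that
    wins the most dimensions (ties broken by lowest index).
    """
    votes = np.argmin(np.abs(data - query), axis=0)   # (d,) winner per dim
    counts = np.bincount(votes, minlength=len(data))  # votes per candidate
    return int(np.argmax(counts))

data = np.array([[0.0, 0.0, 0.0],
                 [1.0, 1.0, 1.0],
                 [0.9, 0.9, 5.0]])
query = np.array([1.0, 1.0, 0.1])
print(dimensionwise_nn(query, data))  # vector 1 wins dimensions 0 and 1
```

Note that `np.argmax` silently breaks ties by returning the lowest index, which already hints at the well-definedness question below.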
Is this utter nonsense from a mathematical perspective? I have no idea how to even begin analyzing the efficacy of this mathematically.
When you make a definition like this, there are two criteria of interest. The first is whether it is well defined: is it clear which vector it will choose as the nearest neighbor? The second is whether it is useful. I think well-definedness fails here. If you choose one vector out of a group of $10,000$ in $100$-dimensional space, it sounds like you will find the vectors that are closest to the chosen vector in each coordinate direction and take a plurality vote among them. If you have many more vectors than dimensions, the nearest vector in each dimension may well be different, so each of them gets one vote. Which is the nearest then? Maybe you just want to make a list of the several nearest vectors. That would clearly be well defined.
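A tiny example (constructed data, just to illustrate the failure mode) shows how the vote can tie: with three vectors in three dimensions, each vector can win exactly one dimension, so the plurality vote picks no single winner.

```python
import numpy as np

# Each row is closest to the origin in exactly one coordinate,
# so every candidate receives exactly one vote: a three-way tie.
data = np.array([[1.0, 9.0, 9.0],
                 [9.0, 1.0, 9.0],
                 [9.0, 9.0, 1.0]])
query = np.zeros(3)

votes = np.argmin(np.abs(data - query), axis=0)  # winner per dimension
print(votes)  # [0 1 2] -- no plurality winner, the definition is ambiguous
```

Any rule you then apply (lowest index, random choice) is an extra convention, not part of the original definition.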
Whether it is interesting depends on whether the vector or vectors you select as nearest neighbor(s) are similar to the original one in the sense of interest to you and your community. I can't comment on that.