I have the set of $d$-dimensional vectors $V = \{+1, 0, -1\}^d$, and let $P(V)$ denote the power set of $V$. I now construct a set of unit vectors $V_{sum}$ from the power set $P(V)$ such that $$ V_{sum} = \left\{\frac{\bar{v}}{\|\bar{v}\|} \quad \Bigg| \quad \exists S \in P(V),\quad \bar{v} =\sum_{v \in S} v \right\} $$ That is, each subset $S \in P(V)$ contributes a vector to $V_{sum}$: sum all the vectors in $S$, then take the unit vector in that direction.
Note that there can be duplicates. For example, for $d = 3$, the vector $\left(\frac{1}{\sqrt{3}},\frac{1}{\sqrt{3}},\frac{1}{\sqrt{3}}\right)$ is the normalized sum of the vectors in any of the subsets $$S_1 = \{(1,0,0),(0,1,0),(0,0,1)\},\\ S_2 = \{(1,1,0),(1,0,1),(0,1,1)\},\\ S_3 = \{(1,1,1)\},$$ among many other possibilities.
Now I want to find the maximum, over all vectors in $V_{sum}$, of the Euclidean distance from that vector to its closest distinct vector in $V_{sum}$. Is there an easy way to upper bound this maximum distance? (Ignore the zero vector in $V_{sum}$.)
In other words, if I view $V_{sum}$ as an $\epsilon$-net for the unit sphere in $d$ dimensions, then I want an upper bound on $\epsilon$. Even a weak upper bound would suffice. The goal is to show that $V_{sum}$ forms a better $\epsilon$-net than the set of unit vectors obtained by normalizing the vectors in $V$.
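For what it's worth, the quantity can be computed directly when $d$ is tiny. The sketch below (my own, not from the question) enumerates all subsets for $d = 2$, where $|V| = 3^2 = 9$ gives only $2^9 = 512$ subsets; the doubly exponential growth makes this hopeless for larger $d$, but it gives a concrete value of $\epsilon$ to sanity-check any proposed bound against.

```python
import itertools
import numpy as np

# Brute-force sketch for d = 2, the largest d for which enumerating all
# 2^(3^d) subsets is comfortable (3^2 = 9 vectors, 512 subsets).
d = 2
V = [np.array(v) for v in itertools.product((1, 0, -1), repeat=d)]

# Collect the unit vector of every nonzero subset sum, deduplicated
# (rounding to 12 decimals so equal directions hash identically).
dirs = set()
for r in range(1, len(V) + 1):
    for S in itertools.combinations(V, r):
        s = np.sum(S, axis=0)
        if np.any(s):                      # ignore the zero vector
            u = s / np.linalg.norm(s)
            dirs.add(tuple(np.round(u, 12)))
V_sum = [np.array(u) for u in dirs]

# epsilon: the largest distance from a point of the net to its
# closest distinct neighbour in the net.
eps = max(
    min(np.linalg.norm(u - v) for v in V_sum if not np.allclose(u, v))
    for u in V_sum
)
print(len(V_sum), eps)
```

For $d = 2$ the achievable directions are the normalized integer vectors with slopes in $\{0, \pm\tfrac13, \pm\tfrac12, \pm1, \pm2, \pm3, \infty\}$, so $\epsilon = 2\sin\!\big(\tfrac12\arctan\tfrac13\big) \approx 0.32$, already much better than the $\epsilon$ of the normalized vectors of $V$ alone.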
A partial answer. Since I find myself too distracted by other things to push this any further, I am posting what I have come up with so far in the hope that it helps.
Since the set consists of unit vectors, the inner products are the cosines of the angles between them: for unit vectors $u$ and $v$, $\|u - v\|^2 = 2 - 2\langle u, v\rangle$. So the smaller the distance between two vectors, the larger their inner product, and to find the closest neighbor of a vector one must look for the other vector that maximizes the inner product with it. This maximum is necessarily $< 1$: if we didn't require distinct vectors, the maximum would be $1$, the inner product of a vector $v$ with itself. For distinct vectors, the inner product is maximized by vectors with coordinates as near to those of $v$ as possible. But to maintain a norm of $1$, if some coordinate decreases from that of $v$, some other coordinate must increase. I have not yet investigated whether you get closer by concentrating these changes in as few coordinates as possible or by spreading them out over as many coordinates as possible, but surely one of the two approaches gives the closest other point.
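The distance/inner-product equivalence above is easy to verify numerically; this snippet (illustrative only, with arbitrary random unit vectors) checks the identity $\|u - v\|^2 = 2 - 2\langle u, v\rangle$:

```python
import numpy as np

# Two arbitrary unit vectors (dimension 5 chosen for illustration).
rng = np.random.default_rng(0)
u = rng.normal(size=5); u /= np.linalg.norm(u)
v = rng.normal(size=5); v /= np.linalg.norm(v)

# For unit vectors, ||u - v||^2 = 2 - 2 <u, v>, so minimizing the distance
# to a fixed vector is equivalent to maximizing the inner product with it.
lhs = np.linalg.norm(u - v) ** 2
rhs = 2 - 2 * np.dot(u, v)
print(lhs, rhs)  # the two agree up to floating-point error
```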
Then to find the maximum distance from a vector to its closest neighbor, you need to look for vectors for which the changes between neighbors are necessarily as large as possible. This will occur when the coordinates of your first point are as small as possible. I suspect (but again, have not confirmed) that the vectors with a single non-zero coordinate have the farthest neighbors. The reason is that the relative change from $0$ to $1$ is bigger than the relative change from $n$ to $n + 1$ for larger $n$.
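This guess can at least be tested empirically in the one case where the power set is enumerable. The sketch below (a hypothetical check of mine, $d = 2$ only) rebuilds the net by brute force and asks whether an axis vector attains the maximum nearest-neighbor distance:

```python
import itertools
import numpy as np

# Hypothetical sanity check of the conjecture for d = 2, where all
# 2^(3^d) subsets can be enumerated directly.
d = 2
V = list(itertools.product((1, 0, -1), repeat=d))
dirs = set()
for r in range(1, len(V) + 1):
    for S in itertools.combinations(V, r):
        s = np.sum(S, axis=0)
        if np.any(s):                      # ignore the zero vector
            u = s / np.linalg.norm(s)
            dirs.add(tuple(np.round(u, 12)))
U = [np.array(u) for u in dirs]

def nn_dist(u):
    """Distance from u to its closest distinct vector in the net."""
    return min(np.linalg.norm(u - v) for v in U if not np.allclose(u, v))

worst = max(U, key=nn_dist)
axis = np.zeros(d); axis[0] = 1.0
# Is an axis vector (single non-zero coordinate) among the vectors
# attaining the maximum nearest-neighbour distance?
print(np.isclose(nn_dist(axis), nn_dist(worst)))
```

In this small case the axis vectors do attain the maximum, though not uniquely: the diagonal $(1,1)/\sqrt{2}$ attains the same nearest-neighbor distance for $d = 2$, so the check is consistent with the conjecture but does not single out the axis vectors.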