I can't figure out something that I suspect is supposed to be easy... =/ I'd be glad if you could help me.
I'm working with a regular discretization of 3-D Euclidean space into cubic cells. Then, after a black-box-like process, I have a unit vector that is associated with a specific cell. I know that it must be pointing toward one of the 26 neighbors (9 in the level above, 9 in the level below, and 8 in the same level). But which cell is it pointing to?
In 2-D, using square cells, it was easy to develop code to do this... but I can't work out how it should go in 3-D.
I'm solving a BVP for a 3-D elliptic PDE with a finite difference scheme. After I solve it, since the specific PDE I'm using has the property of a single local minimum, which is therefore the global minimum, I want to trace a path from a random point, following gradient descent lines, down to the global minimum. So the discretization arises from the FDS, and my vector field is just the negative of the gradient field (i.e., the descent direction) of my solved BVP/PDE.
Some formalities that I forgot, which DavidK pointed out:
Let $$v \in \mathbb{R}^3, \quad \text{with} \quad \|v\|=1,$$ be a vector.
Also, consider a regular discretization of a region of $\mathbb{R}^3$ with $$\Delta_x = \Delta_y = \Delta_z = 1.$$ That is, cubic cells, with $(0,0,0)$ being the initial point.
Example 1: Let's consider one cubic cell inside the domain. Without loss of generality, let it be close to the origin; since the cell centered at $(0.5,0.5,0.5)$ is on the boundary, let's choose the one centered at $(1.5,1.5,1.5)$.
If $v=(1,0,0)$, it is pointing to the cell centered at $(2.5,1.5,1.5)$.
If $v=\left(\frac1{\sqrt2},-\frac1{\sqrt2},0\right)$, it is pointing to the cell centered at $(2.5,0.5,1.5)$.
I can think of a few things that might be useful to do.
Even if you can easily compute the reversed gradient vector at the center of your current cell, it might not point directly at the center of an adjacent cell, so you might want to find which cell's center it most "nearly" points toward.
Suppose you are currently "at" the cell centered at $(x,y,z)$ and the reversed gradient vector at $(x,y,z)$, scaled to a unit vector, is $\hat v = (\hat v_x, \hat v_y, \hat v_z).$
One approach might be to predefine a set of $26$ unit vectors, each of which points directly at the center of a neighboring cell. The examples you gave, $(1,0,0)$ and $\left(\frac1{\sqrt2},-\frac1{\sqrt2},0\right)$, are two such vectors, pointing to $(x+1,y,z)$ and $(x+1,y-1,z)$, respectively; $\left(\frac1{\sqrt3},\frac1{\sqrt3},\frac1{\sqrt3}\right)$ is such a vector pointing to $(x+1,y+1,z+1)$. In fact, you can start with the $26$ center-to-center vectors of the form $w = (w_x,w_y,w_z)$ with $w_x,w_y,w_z \in \{-1,0,1\}$ (excluding the vector $(0,0,0)$) and scale each one appropriately (divide the coordinates by $\sqrt2$ or $\sqrt3$).
Out of those $26$ vectors, the vector $\hat w$ that produces the largest value of $\hat v \cdot \hat w$ is the one "closest" to $\hat v$ (measured by the angle between the vectors). You could choose that vector as the one pointing to the same cell as $\hat v$, and by "unscaling" the vector back to the vector $w = (w_x,w_y,w_z)$ in the same direction whose coordinates are all in the set $\{-1,0,1\}$, you can say that $\hat v$ is pointing at the cell $(x+w_x, y+w_y, z+w_z)$.
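As a minimal sketch of this first approach in Python with NumPy (assuming cells are addressed by integer offsets from the current cell; the function names here are just for illustration):

```python
import numpy as np

# The 26 neighbor offsets: all (wx, wy, wz) with coordinates in {-1, 0, 1},
# excluding the zero vector.
OFFSETS = np.array([(wx, wy, wz)
                    for wx in (-1, 0, 1)
                    for wy in (-1, 0, 1)
                    for wz in (-1, 0, 1)
                    if (wx, wy, wz) != (0, 0, 0)])

# Unit vectors pointing at each neighbor's center: divide each offset
# by its length (1, sqrt(2), or sqrt(3)).
UNIT_DIRS = OFFSETS / np.linalg.norm(OFFSETS, axis=1, keepdims=True)

def neighbor_from_vector(v):
    """Return the (wx, wy, wz) offset of the neighbor whose
    center-to-center direction is closest (by angle) to v.
    v need not be a unit vector: scaling v by a positive factor
    scales every dot product equally, so the argmax is unchanged."""
    best = np.argmax(UNIT_DIRS @ np.asarray(v, dtype=float))
    return tuple(OFFSETS[best])
```

For instance, `neighbor_from_vector((1, 0, 0))` gives `(1, 0, 0)` and `neighbor_from_vector((0.7071, -0.7071, 0.0))` gives `(1, -1, 0)`, matching the examples in the question.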
You don't actually need to take $26$ dot products every time, because simply by checking the signs (positive or negative) of $\hat v_x,$ $\hat v_y,$ and $\hat v_z$ you can eliminate all but seven of the $26$ possible $\hat w$ vectors. You can narrow this down even further by considering whether $|\hat v_x|>|\hat v_y|$ and so forth, but that may be overkill.
[Edit: I should emphasize that there can be two vectors in each of the $26$ directions. For example, the vector from $(x,y,z)$ to $(x+1,y-1,z)$ is $w=(1,-1,0)$, but in order to decide whether to go in that direction you take a dot product with the unit vector $\hat w=\frac{w}{\|w\|}=\left(\frac1{\sqrt2},-\frac1{\sqrt2},0\right)$ in the same direction. If you take dot products with unequal vectors in the $26$ directions then the results will be biased toward the longer vectors. On the other hand, if you use the negative gradient $v$ instead of the unit vector $\hat v$ in all the dot products, they will all be scaled by the same factor, $\|v\|$, so no bias is introduced. It is therefore not strictly necessary to compute $\hat v$ so that you can use it instead of $v$.]
[Another thought is that by traversing a suitably-constructed decision tree whose decisions are based on comparisons of coordinates of $v$ such as $v_x > k_1v_y$ or $v_x + v_y > k_2v_z$, I think you can determine which of the dot products would be greatest without actually having to compute any of them. But this would likely be a lot of work on your part for little benefit.]
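The sign-based pruning can be sketched as follows (a hypothetical variant of the dot-product search, assuming the same integer-offset addressing as before): each offset coordinate $w_i$ is restricted to $\{0, \operatorname{sign}(v_i)\}$, leaving at most seven candidates.

```python
import numpy as np

def neighbor_pruned(v):
    """Dot-product search restricted by the signs of v's coordinates:
    w_i can only be 0 or sign(v_i), so at most 7 of the 26 offsets
    need to be tested.  v is assumed nonzero."""
    v = np.asarray(v, dtype=float)
    signs = np.sign(v).astype(int)
    candidates = [(wx, wy, wz)
                  for wx in {0, signs[0]}
                  for wy in {0, signs[1]}
                  for wz in {0, signs[2]}
                  if (wx, wy, wz) != (0, 0, 0)]
    arr = np.array(candidates, dtype=float)
    units = arr / np.linalg.norm(arr, axis=1, keepdims=True)
    return tuple(candidates[int(np.argmax(units @ v))])
```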
A second possible approach is: if $\hat v_x < -\frac12$ then set $w_x = -1$, if $\hat v_x > \frac12$ then set $w_x = 1$, and otherwise set $w_x = 0$; similarly use the values of $\hat v_y$ and $\hat v_z$ to set $w_y$ and $w_z$; then say that $\hat v$ is pointing at the cell centered at $(x+w_x, y+w_y, z+w_z)$. What you would actually have determined, however, is the cell in which the point $(x+\hat v_x, y+\hat v_y, z+\hat v_z)$ lies. Given a vector $\hat v$ in a completely random direction, I think this is more likely than the previous method to end up "moving" you in the direction $(\pm1,0,0)$, $(0,\pm1,0)$, or $(0, 0, \pm1)$. You may not want that.
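A sketch of this second approach (note that, unlike the first, it does require $\hat v$ to be a unit vector, since the comparison with $\frac12$ is scale-sensitive):

```python
def neighbor_by_threshold(v_hat):
    """Threshold each coordinate of the UNIT vector v_hat at +/- 1/2.
    Equivalent to asking which cell the point (x, y, z) + v_hat lies in."""
    return tuple(-1 if c < -0.5 else (1 if c > 0.5 else 0) for c in v_hat)
```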
A third approach is to determine which coordinate of $\hat v$ ($\hat v_x$, $\hat v_y$, or $\hat v_z$) has the largest absolute value, and scale $\hat v$ by a positive factor $a$ so that the absolute value of that coordinate becomes $1$. This puts the point $(x + a\hat v_x, y + a\hat v_y, z + a\hat v_z)$ on the surface of a $2\times2\times2$ cube centered at $(x,y,z)$; use a technique similar to the one in the "second approach" (if $a\hat v_x < -\frac12$ then set $w_x = -1$, etc.) to decide which cell's center this point is closest to.
Actually, with the "third approach" you do not even need to scale the reversed gradient to a unit vector first; you can scale it directly to a vector whose largest coordinate has absolute value $1$ and proceed from there.
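A sketch of the third approach, taking the raw (unnormalized) reversed gradient directly as just noted:

```python
import numpy as np

def neighbor_by_cube_face(v):
    """Scale v so its largest-magnitude coordinate becomes +/-1,
    i.e. project the direction onto the surface of the 2x2x2 cube
    around the cell center, then threshold each scaled coordinate
    at +/- 1/2.  Works on the raw reversed gradient; v nonzero."""
    v = np.asarray(v, dtype=float)
    p = v / np.max(np.abs(v))        # point on the cube surface
    return tuple(int(np.sign(c)) if abs(c) > 0.5 else 0 for c in p)
```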
A fourth approach is not to try to find which cell the reversed gradient vector points to, but simply to visit the cells adjacent to the cell centered at $(x,y,z)$ and choose the one where the least value of the function is found. This is a standard "hill-climbing" strategy (except going down to a minimum rather than up to a maximum in this case). You might or might not want to use the gradient to rule out directions to search in; assuming the value in each cell has already been computed, it is just a matter of visiting each one and choosing the best value. (If you remember the previous cell in the search, you may at least be able to skip cells adjacent to that cell, since if they were better than your "current" location they would already have been selected.)
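The fourth approach might look like this (a minimal sketch, assuming the precomputed cell values are stored in a dict keyed by integer cell indices; the names are illustrative only):

```python
import itertools

def descend(values, start):
    """Greedy descent on a 3-D grid of precomputed cell values.
    `values` maps (i, j, k) -> function value; `start` is a cell index.
    Repeatedly move to the neighboring cell (of the 26) with the
    smallest value, stopping when no neighbor improves on the
    current cell."""
    offsets = [d for d in itertools.product((-1, 0, 1), repeat=3)
               if d != (0, 0, 0)]
    current = start
    while True:
        candidates = [(current[0] + dx, current[1] + dy, current[2] + dz)
                      for dx, dy, dz in offsets]
        best = min((c for c in candidates if c in values),
                   key=lambda c: values[c], default=None)
        if best is None or values[best] >= values[current]:
            return current
        current = best
```

Since the question guarantees a single local minimum, this greedy walk cannot get stuck anywhere except at the global minimum.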