I am having trouble understanding how (and what) to convert in my ray tracing program. At the moment I only have a 3D-to-2D projection transformation, which projects the 3D world onto the 2D screen. However, it doesn't seem to be working, because for a simple case I can logically predict which pixels should have colour data and which shouldn't, and the output doesn't match. (For example: make a 2x2 viewport and a sphere that is larger than the viewport and close to it. I would expect every pixel to have data, since every ray will definitely intersect the object, yet I don't get any.)
The transformation I am using is:
$$x_{scene} = \frac{z_{max} * x_{pixel}}{D}$$ $$y_{scene} = \frac{z_{max} * y_{pixel}}{D}$$ $$z_{scene} = z_{max}$$
Where $z_{max}$ is the depth of the scene and $D$ is the distance from the observer to the viewport (screen). I apply this transformation to the $x, y, z$ coordinates of each ray that passes through the viewport (i.e. one ray per pixel), to find where that ray ends up in the scene.
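To make the setup concrete, here is a minimal sketch of the mapping above in Python. It assumes the observer sits at the origin looking down $+z$, with the viewport in the plane $z = D$; the function names and the specific values of $z_{max}$ and $D$ are placeholders of mine, not part of any particular renderer:

```python
import math

def pixel_to_scene(x_pixel, y_pixel, z_max, D):
    """Apply the transformation from the question: project a viewport
    point (x_pixel, y_pixel, D) out to the far plane at depth z_max
    by similar triangles."""
    x_scene = z_max * x_pixel / D
    y_scene = z_max * y_pixel / D
    return (x_scene, y_scene, z_max)

def ray_through_pixel(x_pixel, y_pixel, z_max, D):
    """One ray per pixel: origin at the observer (assumed at the
    origin), unit direction toward the projected scene point."""
    px, py, pz = pixel_to_scene(x_pixel, y_pixel, z_max, D)
    length = math.sqrt(px * px + py * py + pz * pz)
    return (0.0, 0.0, 0.0), (px / length, py / length, pz / length)

# e.g. a pixel one unit right of centre, viewport at D = 2, scene depth 10:
point = pixel_to_scene(1.0, 0.0, 10.0, 2.0)   # -> (5.0, 0.0, 10.0)
```

So the transformation only gives me one point per pixel (on the far plane); intersection tests would then run along the whole ray from the origin through that point.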
Is this correct? What other transformations am I missing?