I see there are plenty of answers to the problem of computing a camera projection given a camera position and direction in world space. But what if you do the opposite?
I have a world space that I'm projecting onto the screen using three vectors (the screen-space images of the three world axes), determined by angles $a$ and $b$:
$$u = \langle \cos a,\ \sin a \cos b \rangle$$
$$v = \langle \cos(a + \tfrac{\pi}{2}),\ \sin(a + \tfrac{\pi}{2}) \cos b \rangle$$
$$w = \langle 0,\ \sin b \rangle$$
Then, to project a point $(x, y, z)$ onto the screen, I multiply each coordinate by the first component of the corresponding vector and sum them to get the screen $x$ coordinate, and likewise with the second components to get the screen $y$ coordinate. But now I need to calculate z-depth: the distance of each point from the camera. I know that to get this you project each point onto the camera's view direction, but how do I find that view direction from $a$ and $b$? Please note that I'm not very good with matrices…
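For reference, here is a minimal Python sketch of the projection as I've described it (function and variable names are mine; angles are in radians):

```python
import math

def project(point, a, b):
    """Project a world-space point (x, y, z) to screen space using the
    axis vectors u, v, w determined by angles a and b (in radians)."""
    x, y, z = point
    # Screen-space images of the three world axes
    u = (math.cos(a), math.sin(a) * math.cos(b))
    v = (math.cos(a + 0.5 * math.pi), math.sin(a + 0.5 * math.pi) * math.cos(b))
    w = (0.0, math.sin(b))
    # Screen x comes from the first components, screen y from the second
    sx = x * u[0] + y * v[0] + z * w[0]
    sy = x * u[1] + y * v[1] + z * w[1]
    return sx, sy
```

This handles the 2D position; what I'm missing is the third value, the depth along the camera's view direction.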