DISCLAIMER: apologies in advance for what may be a basic question. I have a feeling I should know this, but my math skills aren't strong, and some of my examples are code-based, so I don't fully understand the math behind them.
I'm running into problems when I want to combine X, Y, Z coordinates from multiple Kinects. Each Kinect is its own origin (0, 0, 0), with the camera's optical axis as the zero line: everything to the left is X-, everything to the right X+, and the same applies to Y. Z is the depth axis.
The Kinects are tilted forward at a 15° angle. I automatically calculate this angle from the floor clip plane (X, Y, Z, W) that the Kinect returns:
rad $= \tan^{-1}\left(\cfrac{F_z}{F_y}\right)$
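For reference, here is a minimal sketch of that tilt computation in Python. The function name and the tuple layout of the floor plane are my assumptions; the only real logic is the arctangent of the plane's Z and Y coefficients from the formula above (I use `atan2` so the quadrant comes out right):

```python
import math

def tilt_angle(floor_plane):
    """Camera pitch in radians, derived from the Kinect floor clip
    plane (x, y, z, w). Helper name and argument layout are
    illustrative, not from any SDK."""
    fx, fy, fz, fw = floor_plane
    # rad = atan(Fz / Fy), quadrant-aware
    return math.atan2(fz, fy)
```

For a camera pitched 15° forward, a floor plane of roughly `(0, cos 15°, sin 15°, h)` gives back 15° when converted with `math.degrees`.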
When the simulation starts, the user's current angle is determined from the straight line between the shoulders. Of course, this value is seen from the camera's perspective:
rad $= \tan^{-1}\left(\cfrac{L_z - R_z}{L_x - R_x}\right)$
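The shoulder-line angle can be sketched the same way. Again the function and joint-tuple layout are assumptions; the math is just the arctangent of the depth difference over the lateral difference between the left and right shoulder joints:

```python
import math

def user_yaw(left_shoulder, right_shoulder):
    """Orientation (radians) of the shoulder line in the camera's
    X-Z plane. Each joint is an (x, y, z) tuple; names are
    illustrative."""
    lx, ly, lz = left_shoulder
    rx, ry, rz = right_shoulder
    # rad = atan((Lz - Rz) / (Lx - Rx)), quadrant-aware
    return math.atan2(lz - rz, lx - rx)
```

A user facing the camera squarely (both shoulders at equal depth, left shoulder at X-) gives 180°; any rotation of the body shifts the result accordingly.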
I figured that knowing these two values for each camera would be enough to 'normalize' the coordinates so that the values can be averaged, but nothing I've tried has worked. I'm really lost in the math right now.
Here is an example where the 35° is calculated using the second formula:
Example http://ricardoismy.name/ExampleTopDown.png
As you can clearly see, the person's left hand appears closer to the left Kinect. I want to convert these camera-relative values to world-space coordinates.
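What I have tried so far amounts to something like the following sketch: undo the camera's pitch (rotation about X, from the floor plane) and its yaw (rotation about Y), then translate by the camera's world position. The function, the angle signs, and `camera_pos` are all my assumptions about how the calibration should work, which is exactly where I'm unsure:

```python
import math

def to_world(point, tilt, yaw, camera_pos=(0.0, 0.0, 0.0)):
    """Map a camera-relative (x, y, z) point into world space by
    undoing the camera's pitch and yaw, then translating by the
    camera's world position. Angles in radians; all parameters are
    per-camera calibration values (illustrative, not from an SDK)."""
    x, y, z = point
    # undo pitch: rotate about the X axis by -tilt
    ct, st = math.cos(-tilt), math.sin(-tilt)
    y, z = y * ct - z * st, y * st + z * ct
    # undo yaw: rotate about the Y axis by -yaw
    cy, sy = math.cos(-yaw), math.sin(-yaw)
    x, z = x * cy + z * sy, -x * sy + z * cy
    # translate into the shared world frame
    px, py, pz = camera_pos
    return (x + px, y + py, z + pz)
```

The idea would be to run each Kinect's joints through `to_world` with that camera's own tilt, yaw, and position, and only then average the two results per joint. I'd appreciate confirmation (or correction) of whether this rotation order and these signs are right.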
I was hoping some of you geniuses might be able to help me out.