I am working on a basic self-driving algorithm using a monocular image sequence. For this, I compute the optical flow between every two consecutive frames based on tracked keypoints, which gives one vector per keypoint denoting its most probable movement in the image.
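To make the data layout concrete: the flow field is just a 2D displacement per tracked keypoint. A minimal sketch, with hypothetical hard-coded positions standing in for the output of the feature tracker:

```python
import numpy as np

# Hypothetical keypoint positions in two consecutive frames, in pixel
# coordinates. In the real pipeline these come from a feature tracker;
# they are hard-coded here only to illustrate the data layout.
pts_prev = np.array([[100.0, 120.0],
                     [250.0,  80.0],
                     [310.0, 200.0]])   # (N, 2) positions in frame t
pts_next = np.array([[103.0, 121.5],
                     [248.0,  82.0],
                     [315.0, 198.0]])   # (N, 2) positions in frame t+1

# The flow field: one 2D vector per keypoint, which mixes the effects
# of camera rotation and camera translation without distinction.
flow = pts_next - pts_prev              # (N, 2) flow vectors in pixels
print(flow)
```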
Each of these vectors mixes a rotational component and a translational component without any distinction between the two. The next step in the algorithm requires a flow field from which the rotational component has been removed, so that only the translational component remains. Using a bundle adjustment algorithm and the camera matrix, I have been able to recover the translational and rotational components of the camera motion itself between the two images; now I need to link these to the flow field.
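Concretely, after bundle adjustment I have the following three quantities (the numbers below are hypothetical, chosen only to fix the shapes and conventions involved):

```python
import numpy as np

# Hypothetical values standing in for the bundle-adjustment output and
# the calibrated camera; the shapes and conventions are what matter.
K = np.array([[700.0,   0.0, 320.0],   # 3x3 camera (intrinsic) matrix
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])

theta = np.deg2rad(2.0)                # small yaw between the two frames
R = np.array([[ np.cos(theta), 0.0, np.sin(theta)],   # 3x3 rotation
              [ 0.0,           1.0, 0.0          ],
              [-np.sin(theta), 0.0, np.cos(theta)]])

t = np.array([0.1, 0.0, 1.0])          # 3D translation (scale unknown,
                                       # as usual for monocular input)

# Sanity check: R is a proper rotation (orthonormal, det = +1).
assert np.allclose(R @ R.T, np.eye(3))
assert np.isclose(np.linalg.det(R), 1.0)
```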
This brings me to my question: I have separate translational and rotational components for the camera motion in three dimensions, and a flow field whose vectors denote the most probable pixel movement between two subsequent images in two dimensions. How do I get, from the three-dimensional rotation matrix, the corresponding rotation-induced flow vectors in the image plane?