Finding object position using calibrated camera and object size


I have a calibrated camera. In other words, I have calculated the extrinsic and intrinsic parameters using OpenCV:

    s * [u, v, 1]^T = A * [R | t] * [X, Y, Z, 1]^T

where A is the intrinsic (camera) matrix and [R | t] is the extrinsic rotation and translation.

The equation above maps real-world coordinates (X, Y, and Z) to image coordinates (u and v).
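To make the mapping concrete, here is a minimal sketch of the forward projection in NumPy. All numbers (focal length, principal point, the world point) are hypothetical; with your calibration you would use the A, R, and t that OpenCV returned:

```python
import numpy as np

# Hypothetical intrinsic matrix A (fx, fy, cx, cy are made-up values).
A = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)        # assume camera axes aligned with world axes
t = np.zeros(3)      # camera center at the world origin

Xw = np.array([0.1, 0.2, 2.0])   # a world point 2 m in front of the camera

x = A @ (R @ Xw + t)             # s * [u, v, 1]^T
u, v = x[:2] / x[2]              # divide by s to get pixel coordinates
print(u, v)                      # -> 360.0 320.0
```

Note the scale factor s: dividing it out is exactly why a single pixel (u, v) alone cannot recover (X, Y, Z) without extra information such as depth.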

Imagine the camera and the object in the scene below:

[figure: camera center P1 viewing an object with center P2 at distance d; object dimensions w, h, l]

where P1 and P2 are the center of the camera (calculated from camera calibration) and the center of the object, respectively, and d is the distance between P1 and P2. I can calculate d from the size of the object and the focal length in a captured image, even under rotation. w, h, and l are the width, height, and length of the object, and I know their values. l is very small; I show it only for a better illustration of the 3D world (you can imagine the object as a metal sheet).
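For reference, one common way to get d from a known object size is the pinhole similar-triangles relation d ≈ f * W / w_px. This is only an approximation (it assumes the sheet is roughly fronto-parallel), and all numbers below are hypothetical:

```python
# Pinhole-model distance estimate from a known object width.
fx   = 800.0   # focal length in pixels, from the intrinsic matrix (hypothetical)
W    = 0.50    # real object width in metres (known)
w_px = 200.0   # measured width of the object in the image, in pixels

d = fx * W / w_px
print(d)       # -> 2.0 (metres)
```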

As I have read, knowing the depth lets us go from 2D space to 3D space. My question is: how can I calculate the object's coordinates (i.e., determine P2) from the aforementioned parameters in an image taken by the camera?

In other words, taking d as the depth information and using the equation above (with u and v known), how can I calculate the coordinates of P2?
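A sketch of the back-projection you are asking about, under the assumptions that d is the Euclidean distance from P1 to P2 (not the z-depth) and that the calibration values below (which are hypothetical) are replaced by your own A, R, and t. The idea: invert the intrinsics to get the viewing ray through the pixel, scale it so its length equals d, then undo the extrinsic transform:

```python
import numpy as np

# Hypothetical calibration -- substitute the A, R, t from your calibration.
A = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)        # rotation, world -> camera
t = np.zeros(3)      # translation, world -> camera

def backproject(u, v, d):
    """World coordinates of the point seen at pixel (u, v), at Euclidean
    distance d from the camera center P1."""
    ray = np.linalg.inv(A) @ np.array([u, v, 1.0])  # ray through the pixel, camera frame
    Xc = d * ray / np.linalg.norm(ray)              # point on the ray with ||Xc|| = d
    return R.T @ (Xc - t)                           # camera frame -> world frame

P2 = backproject(360.0, 320.0, d=np.sqrt(4.05))
print(P2)   # -> [0.1, 0.2, 2.0] for these example numbers
```

If instead d were the z-depth (distance along the optical axis), you would scale the ray so its third component equals d rather than normalizing its length.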