Determine scale factor of an object given its distance from a viewer/camera


I have a 3D object which is in its simplest form consisting of an origin in 3D space and a set of vertices that are all local to this origin.

I then transform this 3D origin into 2D camera coordinates using a perspective transform, but I also need a way to transform the object's local vertices, which amounts to moving the whole object from 3D world coordinates into 3D camera coordinates.

How can I determine the scale of an object based on its distance from the viewer/camera?

I hope this makes sense, the title is probably the most concise explanation I can give. Any help would be much appreciated.
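To make the setup concrete, here is a minimal sketch of what I mean in Python/NumPy. All of the names and the camera pose (`R`, `camera_position`) are illustrative placeholders, not a real scene:

```python
import numpy as np

# An object: an origin in world space plus vertices local to that origin.
object_origin = np.array([0.0, 0.0, 10.0])   # world coordinates
local_vertices = np.array([
    [-1.0, -1.0, 0.0],
    [ 1.0, -1.0, 0.0],
    [ 0.0,  1.0, 0.0],
])

# Assumed camera pose: rotation R (world -> camera) and camera position.
R = np.eye(3)                 # camera axes aligned with world axes
camera_position = np.zeros(3)

# Move the object into camera space: transform the origin, then apply
# the same rotation to each local vertex and offset by the new origin.
origin_cam = R @ (object_origin - camera_position)
vertices_cam = origin_cam + (R @ local_vertices.T).T
```

After this step every vertex is in 3D camera coordinates, and the remaining question is how the perspective projection scales them with depth.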

1 Answer

This is an interesting optics/physics/math problem. I believe that your question could be asking for a number of distinct answers:

  1. Scale in image space is determined according to the magnification of the camera: http://en.wikipedia.org/wiki/Magnification#Calculating_the_magnification_of_optical_systems

  2. Calculate the distance from the 'lens' to the image using the distance from the object to the image, information about the camera, and the lens equations. Note that a complicated optical system can be represented by a single lens with a defined focal length, and that the focal length may vary as the camera is focused. This is easiest for a pinhole camera, where the distance from the 'lens' (the pinhole) to the image is defined entirely by the placement of a surface behind the pinhole; more complicated cameras can be translated into a pinhole model for the purposes of a raytrace. To calculate the 'scale' of features, project each vertex through the pinhole and find its intersection with the image surface. For a simplified camera this surface is a plane at a defined distance behind the pinhole, but the rotated image can instead be computed by placing the plane between the object and the pinhole.

  3. Finally, it is possible to image each vertex through the optical system individually, taking the depth of the scene into account to get the depth of the image. However, image sensors are typically planar, and this type of analysis is mainly useful for depth-of-field calculations. I recommend #1 or #2 unless there is a compelling reason to do otherwise.
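To illustrate #1 and #2: under the pinhole model a point at depth z projects to (f·x/z, f·y/z), so apparent size falls off as f/z. A minimal sketch in Python, where the focal length `f` and the function names are assumptions for illustration:

```python
import numpy as np

f = 50.0  # assumed focal length, in the same units as the depth z

def project(point_cam):
    """Project a 3D point in camera coordinates onto the image plane."""
    x, y, z = point_cam
    return np.array([f * x / z, f * y / z])

def scale_factor(z):
    """Apparent size of an object at depth z, relative to its true size."""
    return f / z

# An object twice as far away appears half as large:
assert np.isclose(scale_factor(10.0), 2 * scale_factor(20.0))
```

In other words, once the vertices are in camera coordinates, the scale you are after is just f divided by each vertex's depth.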