Difference between viewer and camera in a 3D projection

I have been programming a 3D graphics library for myself, and I have used the following Wikipedia page to help me.

https://en.wikipedia.org/wiki/3D_projection#Perspective_projection

The article references both a camera position and a viewer's position. After finishing my implementation, I assume the viewer has something to do with the field of view, but the article makes no effort to explain how. It simply states at the beginning, "The camera's position, orientation, and field of view control the behavior of the projection transformation." It then describes how the camera's position and orientation are used but never clarifies where the field of view comes into play.

It later uses the coordinates of a viewer in the final projection, but it is unclear to me what these values mean.

So my question is: what is the difference between the viewer and the camera in a 3D projection of an image onto a 2D plane? And how do I use this knowledge to manipulate the field of view?

1 Answer


It should be roughly like this:

  • The camera projects the 3D world on its 2D display.
  • The viewer (observer) takes a look at that display.
  • The field of view is a property of the camera, describing how wide a slice of the sphere around the camera is visible to it; the sketch below shows where it enters the projection.
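
To make that last point concrete, here is a minimal sketch in Python of the perspective projection described on the linked Wikipedia page. It is an illustration rather than a reference implementation: the names project, rotation_matrix, and fov_deg are my own, the rotation uses an Rx·Ry·Rz convention similar to the article's, and it assumes the display surface spans [-1, 1] so that the viewer's distance from the display works out to e_z = 1 / tan(fov / 2).

```python
# Minimal sketch (assumptions: display spans [-1, 1] in x and y, the camera
# looks along +z in its own space, angles are in radians).
import math
import numpy as np

def rotation_matrix(theta_x, theta_y, theta_z):
    """Camera orientation as a product of rotations about the x, y, and z axes."""
    cx, sx = math.cos(theta_x), math.sin(theta_x)
    cy, sy = math.cos(theta_y), math.sin(theta_y)
    cz, sz = math.cos(theta_z), math.sin(theta_z)
    rx = np.array([[1, 0, 0], [0, cx, sx], [0, -sx, cx]])
    ry = np.array([[cy, 0, -sy], [0, 1, 0], [sy, 0, cy]])
    rz = np.array([[cz, sz, 0], [-sz, cz, 0], [0, 0, 1]])
    return rx @ ry @ rz

def project(point, cam_pos, cam_angles, fov_deg):
    """Project a world-space point onto the 2D display surface.

    The "viewer" e is the display surface's position relative to the camera
    pinhole; its z component is where the field of view enters the math.
    """
    # Field of view -> viewer's distance from the display surface.
    e_z = 1.0 / math.tan(math.radians(fov_deg) / 2.0)
    e = np.array([0.0, 0.0, e_z])  # viewer centered on the display

    # Camera position and orientation: d = R (a - c)
    d = rotation_matrix(*cam_angles) @ (np.asarray(point, dtype=float)
                                        - np.asarray(cam_pos, dtype=float))
    if d[2] <= 0:
        return None  # the point is behind the camera

    # Perspective divide onto the display plane.
    b_x = (e[2] / d[2]) * d[0] + e[0]
    b_y = (e[2] / d[2]) * d[1] + e[1]
    return b_x, b_y  # roughly in [-1, 1] when the point is inside the view

# Widening the field of view pushes the same point toward the screen center.
print(project((1.0, 0.0, 5.0), cam_pos=(0, 0, 0), cam_angles=(0, 0, 0), fov_deg=60))
print(project((1.0, 0.0, 5.0), cam_pos=(0, 0, 0), cam_angles=(0, 0, 0), fov_deg=120))
```

Changing cam_pos or cam_angles only affects the camera transform d = R(a - c); changing fov_deg only rescales e_z, and that is the knob the question is asking about.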