I'm having trouble going from spherical coordinates (as an approximation of the surface of the Earth) \begin{align} x&=r\sin\left(\phi\right)\cos\left(\theta\right),\\ y&=r\sin\left(\phi\right)\sin\left(\theta\right),\\ z&=r\cos\left(\phi\right), \end{align} which parameterize the sphere from its center (via $r$), to a parameterization of the sphere with respect to an arbitrary point in space above the surface of the Earth.
The reason for this question is that I'm writing a program to estimate the surface area of the Earth within the field of view of a camera on a drone flying at altitude $R_0-r$ above the Earth's surface (where $r$ is the sea-level radius and $R_0$ is the camera's distance from the Earth's center).
So basically, what I'm trying to do is parameterize the sphere with respect to angle within the camera's field of view, such that integrating over the field of view yields the corresponding area on the ground. This is, in a way, the parameterization of the sphere with respect to the camera's FOV (I think).
If the camera is far enough away from the Earth's surface that the whole Earth resides in the FOV, then we should expect, crazily enough, $\infty$ area (which doesn't make sense, but it does in the context of this problem).
To paint a better picture, imagine putting a camera at some arbitrary altitude, say 100 meters above the Earth's surface. Pointing the camera straight down simplifies the problem greatly because the footprint below that vantage is roughly rectangular. However, the ground scale of individual pixels varies over that FOV, especially in the corners of the frame (even if we approximate the surface of the Earth as a plane in that case). If we pitch the camera up by some arbitrary angle $\psi$ from nadir, then some pixels in the frame will be affected a lot, while others may still point straight down. For pixels that represent area very close to the horizon, the area they cover $\rightarrow\infty$.
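To make the flat-ground case above concrete: the ground area seen per unit solid angle by a ray tilted $\psi$ from nadir grows like $h^2/\cos^3\psi$ (the range $h/\cos\psi$ squared, divided by another factor of $\cos\psi$ for the obliquity of the ground relative to the ray). A minimal sketch, assuming a flat ground plane and an illustrative function name:

```python
import math

def ground_area_per_solid_angle(h, psi):
    """Ground area (m^2) seen per steradian by a ray tilted psi radians
    from nadir, assuming flat ground a height h below the camera.
    The ray travels d = h / cos(psi) to reach the ground, and the ground
    is inclined by psi relative to the ray, so dA/dOmega = d**2 / cos(psi),
    i.e. h**2 / cos(psi)**3."""
    return h**2 / math.cos(psi)**3

h = 100.0  # example altitude from the text, in meters
for deg in (0, 30, 60, 80, 89):
    psi = math.radians(deg)
    print(f"{deg:2d} deg off nadir: {ground_area_per_solid_angle(h, psi):14.1f} m^2/sr")
```

The blow-up as $\psi \rightarrow 90^\circ$ is exactly the divergence near the horizon described above.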
Again, what I would like to do is parameterize the Earth's surface with respect to the camera's field of view so that integrating over the camera's field of view results in the area on the ground.
How can I set this up?
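One way to set this up numerically (a sketch under assumed simplifications: a pinhole camera at distance $R_0$ from the sphere's center, pointed at nadir, square FOV; the function name and sampling grid are illustrative, not part of the question): sample ray directions across the FOV, intersect each ray with the sphere, and weight each sample's solid angle $d\Omega$ by $d^2/\cos i$, where $d$ is the range to the surface and $i$ is the incidence angle between the ray and the local surface normal.

```python
import math

def visible_area_in_fov(r, R0, half_fov, n=400):
    """Estimate ground area on a sphere of radius r inside a square FOV of
    half-angle half_fov, for a nadir-pointed camera at distance R0 from the
    sphere's center. Each ray is intersected with the sphere; the patch it
    sees has area d**2 / cos(i) * dOmega, where d is the range and i is the
    incidence angle between the ray and the outward surface normal."""
    cam_z = R0                      # camera on the +z axis, looking down -z
    total = 0.0
    du = 2 * math.tan(half_fov) / n  # pixel pitch on the unit tangent plane
    for ix in range(n):
        for iy in range(n):
            # pixel center on the tangent plane at unit distance
            u = math.tan(half_fov) * (2 * (ix + 0.5) / n - 1)
            v = math.tan(half_fov) * (2 * (iy + 0.5) / n - 1)
            s = math.sqrt(u * u + v * v + 1)
            d_dir = (u / s, v / s, -1 / s)       # unit ray direction
            dOmega = du * du / s**3              # pinhole pixel solid angle
            # ray-sphere intersection: |cam + t*dir| = r
            b = cam_z * d_dir[2]                 # dot(cam, dir)
            disc = b * b - (R0 * R0 - r * r)
            if disc < 0:
                continue                         # ray misses the sphere
            t = -b - math.sqrt(disc)             # nearer hit
            p = (t * d_dir[0], t * d_dir[1], cam_z + t * d_dir[2])
            # cos of incidence angle: -dir dotted with outward normal p/r
            cos_i = -(d_dir[0] * p[0] + d_dir[1] * p[1] + d_dir[2] * p[2]) / r
            if cos_i <= 0:
                continue                         # backface / past grazing
            total += t * t / cos_i * dOmega
    return total
```

A tilted camera only changes the sampled ray directions; the horizon divergence shows up as $\cos i \to 0$ for near-grazing rays.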
If this question isn't totally clear, please let me know.
$$x = -r\sin(\phi)\cos(\theta) + R_0$$
You'll want to cut off $\theta$ so you don't count backfaces; it should vary from $0$ to $\arccos\left(\frac{r}{R_0}\right)$, which is the angular radius, measured at the Earth's center, of the cap visible from the camera; beyond that angle the surface lies past the horizon.
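For what it's worth, a small sketch of that cutoff (function names are illustrative): $\arccos(r/R_0)$ is the angular radius of the visible spherical cap, and the cap's area is $2\pi r^2\left(1 - \frac{r}{R_0}\right)$, which tends to half the sphere's area as $R_0 \to \infty$.

```python
import math

def horizon_angle(r, R0):
    """Angular radius (radians, measured at the sphere's center) of the cap
    visible from a camera at distance R0 >= r from the center."""
    return math.acos(r / R0)

def visible_cap_area(r, R0):
    """Area of that visible spherical cap: 2*pi*r**2*(1 - cos(alpha)) with
    cos(alpha) = r/R0. Approaches half the sphere as R0 grows."""
    return 2 * math.pi * r**2 * (1 - r / R0)
```

For example, at 100 m altitude (`r = 6371000.0`, `R0 = 6371100.0`) the horizon sits only a few milliradians away as seen from the Earth's center.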