An ellipse is given in parametric form as follows
$P(t) = C + E_1 \cos(t) + E_2 \sin(t) $
where $C, E_1, E_2$ are $2$- or $3$-dimensional vectors, and $t \in [0, 2\pi)$. I would like to find the points on the ellipse that are nearest and farthest from a straight line given in parametric form as
$L(t) = Q_0 + t V $
with $t \in \mathbb{R}$, where $Q_0$ and $V$ are $2$- or $3$-dimensional vectors.
This problem is an exercise in constrained optimization, and that is its context.
The way I would probably approach this normally is by changing coordinates so that the line becomes an axis and then exploiting the geometry of the ellipse directly.
As a constrained optimization problem
Since you are expected to solve this as constrained optimization, there are two other ways you could approach it: 1) direct optimization over the parametric forms, or 2) optimization of the distance to the line of a point whose total distance to the foci of the ellipse is constrained. I will also go over (1) briefly.

To find the minimal distance, we seek $$\min\limits_{t}\min\limits_{u}||P(u)-L(t)||,$$ and to find the maximal distance, we seek $$\min\limits_{t}\max\limits_{u}||P(u)-L(t)||.$$ The outer optimization is a min in both cases because the distance from a line to a point is defined as the minimal distance from a point on the line to that point. Once again, the easiest way to solve this is by changing coordinates and then exploiting properties of the ellipse, but we will go over how to do it from the definition. It is pretty much impossible to do this without rotating, though; just look at the equations.
To work from the definition:
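The inner objective, squared to drop the square root and using nothing beyond the definitions above, is
$$||P(u)-L(t)||^2 = \sum_{i}\bigl(C_i - Q_{0,i} + E_{1,i}\cos(u) + E_{2,i}\sin(u) - t\,V_i\bigr)^2,$$
where $i$ runs over the coordinates. Every component couples the free parameter $t$ with $\cos(u)$ and $\sin(u)$, so in these generic coordinates the stationarity conditions in $t$ and $u$ do not decouple.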
How can we solve the last part? We could solve it numerically, since $u$ is restricted to one period of sine/cosine, but solving it explicitly is quite a mess; and since $t$ is a free variable in the inner optimization, we cannot solve this numerically as a one-dimensional optimization problem. We could, however, solve numerically in $u$ and $t$ at the same time, as sketched below.
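For the minimal distance, this joint problem in $(u, t)$ is a smooth, unconstrained minimization, so a general-purpose optimizer handles it directly (the maximal distance is a minimax problem and is not covered by this). Here is a minimal sketch, assuming NumPy/SciPy are available; the specific vectors $C, E_1, E_2, Q_0, V$ are made up purely for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical 2D example data; these particular vectors are made up for illustration.
C  = np.array([1.0, 2.0])
E1 = np.array([3.0, 0.5])
E2 = np.array([-0.5, 1.5])
Q0 = np.array([6.0, -1.0])
V  = np.array([1.0, 2.0])

def P(u):
    """Point on the ellipse at parameter u."""
    return C + E1 * np.cos(u) + E2 * np.sin(u)

def L(t):
    """Point on the line at parameter t."""
    return Q0 + t * V

def sq_dist(x):
    """Squared distance between P(u) and L(t); it has the same minimizers as the distance."""
    u, t = x
    d = P(u) - L(t)
    return d @ d

# Joint minimization over (u, t), restarted from several values of u to dodge local minima.
starts = [(u0, 0.0) for u0 in np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)]
best = min((minimize(sq_dist, x0) for x0 in starts), key=lambda r: r.fun)
u_star, t_star = best.x
print("nearest ellipse point:", P(u_star), "distance:", np.sqrt(best.fun))
```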
Actually Solving
This is why rotating the problem is so important. If we rotate to coordinates $(x', y')$ with $x'$ along the line and $y'$ perpendicular to it, then we end up optimizing the sum of the squared coordinate differences in $x'$ and $y'$ instead of in $x$ and $y$.
We use the same formulas for $x'$ and $y'$ as in the first approach, but to be explicit, $m=\frac{V_y}{V_x}$. Components like $E_{1,x'}$ can be found by projecting $E_1$ onto the unit vector in the $x'$ direction, in other words, taking the dot product.
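For concreteness, in the $2$-dimensional case one convenient (but not unique) choice of axes and projections is
$$\hat{x}' = \frac{V}{||V||}, \qquad \hat{y}' = \frac{(-V_y,\, V_x)}{||V||}, \qquad E_{1,y'} = E_1 \cdot \hat{y}', \quad E_{2,y'} = E_2 \cdot \hat{y}', \quad C_{y'} - Q_{0,y'} = (C - Q_0)\cdot\hat{y}'.$$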
Then, for any $u$, there will be some $t$ such that $L(t)$ and $P(u)$ have the same $x'$ coordinate, so the outer optimization can always drive the $x'$ difference to $0$. Additionally, the $y'$ difference is independent of $t$, so the inner optimization only has to deal with the $y'$ difference (minimizing it for the nearest point, maximizing it for the farthest).
This is fantastic because it reduces the inner problem to optimizing $C_{y'}-Q_{0,y'}+E_{1,y'}\cos(u)+E_{2,y'}\sin(u)$ over $u$ ($V_{y'}$ is $0$ by construction, so the line drops out entirely).
This is easy to solve using the identity $a\cos(x)+b\sin(x) = c\cos(x+\phi)$, where $c=\operatorname{sgn}(a)\sqrt{a^2+b^2}$ and $\phi=\arctan\left(-\frac{b}{a}\right)$ (this is pretty easy to prove from the fact that a linear combination of sine and cosine with the same period is just a scaled and shifted cosine with that period). For us, only the amplitude matters, because it gives the extremal signed $y'$ offsets as $$C_{y'}-Q_{0,y'} \pm \sqrt{E_{1,y'}^2+E_{2,y'}^2}.$$ The nearest and farthest distances are the absolute values of these two offsets (if they have opposite signs, the line crosses the ellipse and the minimal distance is $0$), and the nearest and farthest points are the corresponding $P(u)$ at the two values of $u$ where $c\cos(u+\phi)$ attains $\pm\sqrt{a^2+b^2}$.
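As a sanity check on this closed form, here is a small numerical sketch (2D case, assuming NumPy, with the same made-up vectors as in the earlier sketch). It computes $E_{1,y'}$, $E_{2,y'}$, and $C_{y'}-Q_{0,y'}$ by projection onto the unit normal of $V$, evaluates the two extremal offsets, and compares them against a brute-force sweep over $u$.

```python
import numpy as np

# Same hypothetical 2D data as in the earlier sketch (all values made up for illustration).
C  = np.array([1.0, 2.0])
E1 = np.array([3.0, 0.5])
E2 = np.array([-0.5, 1.5])
Q0 = np.array([6.0, -1.0])
V  = np.array([1.0, 2.0])

x_hat = V / np.linalg.norm(V)            # x' axis: unit vector along the line
y_hat = np.array([-x_hat[1], x_hat[0]])  # y' axis: unit vector perpendicular to the line

a  = E1 @ y_hat                          # E_{1,y'}
b  = E2 @ y_hat                          # E_{2,y'}
d0 = (C - Q0) @ y_hat                    # C_{y'} - Q_{0,y'}

# a*cos(u) + b*sin(u) = sqrt(a^2 + b^2) * cos(u - phi) with phi = atan2(b, a),
# so the extremal offsets d0 +/- sqrt(a^2 + b^2) occur at u = phi and u = phi + pi.
phi = np.arctan2(b, a)
for u in (phi, phi + np.pi):
    offset = d0 + a * np.cos(u) + b * np.sin(u)
    point = C + E1 * np.cos(u) + E2 * np.sin(u)
    print(f"u = {u: .4f}  point = {point}  signed y' offset = {offset:+.4f}  distance = {abs(offset):.4f}")

# Brute-force check: the distance from P(u) to the line is |y' offset|.
# (If the two extremal offsets had opposite signs, the line would cut the ellipse
#  and the minimal distance would be 0 rather than the smaller |offset|.)
u_grid  = np.linspace(0.0, 2.0 * np.pi, 200_001)
offsets = d0 + a * np.cos(u_grid) + b * np.sin(u_grid)
print("grid min/max distance:", np.abs(offsets).min(), np.abs(offsets).max())
```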