Given a (non-convex) polygon $P$ and a straight line segment $S$ in $\mathbb{R}^2$, where $S$ and $P$ are disjoint, I am looking for an efficient algorithm to find a point $s$ on $S$ that has the maximum distance from $P$ among all points on $S$ (the actual distance then follows trivially, of course).
To be clear, the distance $dist(s,P)$ between a point $s$ and $P$ is the minimum distance between $s$ and all points in $P$ (which I can calculate by a standard point-to-polygon algorithm). So another way to state this formally is:
"Determine $\max\limits_{s \in S} \min\limits_{p \in P} |p-s|$ (and the points realizing it)."
Note that the point of maximum distance can be somewhere in the middle of $S$.
I have checked some standard resources (including this site) and books from computational geometry, but had no luck so far. If $S$ is described as a parametric curve $c(t)$ where $t \in [0,1]$, this is the problem of finding the global maximum of $dist(c(t),P)$ in $[0,1]$, so it would even help me if someone has an idea how to split $[0,1]$ into subintervals where each subinterval contains only a local maximum. That would allow me to apply a standard numerical maximum search algorithm to each of the subintervals.
Any ideas?


After thinking a while about the problem, it seems I have found an answer to my own question. For $S$ with endpoints $s_1$ and $s_2$, let
$f(t) := dist(c(t),P)$ for $t \in [0,1]$, where $c(t) = s_1 + t\,(s_2-s_1)$.
As written in the question, the task is to find the global maximum of $f$. Furthermore, let $E_1, \dots, E_n$ be the edges of $P$, and $f_i(t) := dist(c(t),E_i)$ the distance function corresponding to the edge $E_i$. Since $S$ and $P$ are disjoint, $f$ can be written as
$f(t) = \min\{f_i(t) :i=1...n\}$
Since $E_i$ and $S$ are line segments, each $f_i$ is either monotone or unimodal, so except for the edge case where $E_i$ and $S$ are parallel, it has exactly one local minimum $t_{m,i} \in [0,1]$. This minimum is easy to calculate, since it is always attained at an endpoint of either $E_i$ or $S$ — that is, at $t=0$, $t=1$, or at the clamped projection of an endpoint of $E_i$ onto $S$. (In the parallel case, $f_i$ attains its minimum on an interval $[t_{m1,i}, t_{m2,i}] \subset [0,1]$, and one can replace $t_{m,i}$ by the two points $t_{m1,i}, t_{m2,i}$ in what follows.)
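In code, collecting the candidate minima per edge amounts to clamping the projections of the edge endpoints onto $S$. A minimal sketch in Python (the function names are my own):

```python
import math

def point_seg_dist(p, a, b):
    """Distance from point p to segment [a, b]; all arguments are 2D tuples."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    len2 = dx * dx + dy * dy
    if len2 == 0.0:                                  # degenerate edge
        return math.hypot(p[0] - a[0], p[1] - a[1])
    # parameter of the orthogonal projection, clamped to the segment
    u = max(0.0, min(1.0, ((p[0] - a[0]) * dx + (p[1] - a[1]) * dy) / len2))
    return math.hypot(p[0] - (a[0] + u * dx), p[1] - (a[1] + u * dy))

def clamped_projection(p, s1, s2):
    """Parameter t in [0,1] of the point on S = [s1, s2] closest to p."""
    dx, dy = s2[0] - s1[0], s2[1] - s1[1]
    len2 = dx * dx + dy * dy
    u = ((p[0] - s1[0]) * dx + (p[1] - s1[1]) * dy) / len2
    return max(0.0, min(1.0, u))

def edge_min_candidates(s1, s2, e1, e2):
    """Candidate parameters where f_i(t) = dist(c(t), E_i) can attain its minimum.

    For disjoint segments the closest pair involves an endpoint of S or of E_i,
    so it suffices to check t = 0, t = 1 and the clamped projections of the edge
    endpoints onto S; in the parallel case the two projections are exactly the
    ends t_{m1,i}, t_{m2,i} of the minimizing interval."""
    return {0.0, 1.0, clamped_projection(e1, s1, s2), clamped_projection(e2, s1, s2)}
```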
Now, if $f$ has a local minimum in $[0,1]$, it must also be a local minimum of $f_i$ for at least one $i \in \{1, \dots, n\}$. So the set $M := \{t_{m,i} \mid i = 1, \dots, n\}$ is a superset of the local minima of $f$. And since each local maximum of a continuous function on $\mathbb{R}$ is separated from the previous and the next local maximum by a local minimum between them, one can sort $M$ in ascending order and use the result to split the interval $[0,1]$ into subintervals, each of which contains at most one local maximum of $f$. Each subinterval can then be searched for its local maximizer of $f$ by a golden-section search. The final result is the maximizer with the largest value of $f$.
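Putting the pieces together, the whole procedure might look like the following sketch (Python; it assumes the polygon is given as an ordered vertex list, and `golden_max` and the other names are my own):

```python
import math

def point_seg_dist(p, a, b):
    # standard point-to-segment distance in 2D
    dx, dy = b[0] - a[0], b[1] - a[1]
    len2 = dx * dx + dy * dy
    u = 0.0 if len2 == 0 else max(0.0, min(1.0, ((p[0]-a[0])*dx + (p[1]-a[1])*dy) / len2))
    return math.hypot(p[0] - (a[0] + u * dx), p[1] - (a[1] + u * dy))

def golden_max(f, a, b, tol=1e-9):
    """Golden-section search for the maximum of a unimodal f on [a, b]."""
    invphi = (math.sqrt(5) - 1) / 2
    x1, x2 = b - invphi * (b - a), a + invphi * (b - a)
    f1, f2 = f(x1), f(x2)
    while b - a > tol:
        if f1 < f2:                      # maximum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + invphi * (b - a)
            f2 = f(x2)
        else:                            # maximum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = b - invphi * (b - a)
            f1 = f(x1)
    return (a + b) / 2

def farthest_point_on_segment(s1, s2, polygon):
    """Return (t, distance) maximizing f(t) = dist(c(t), P), c(t) = s1 + t*(s2-s1)."""
    edges = list(zip(polygon, polygon[1:] + polygon[:1]))
    c = lambda t: (s1[0] + t * (s2[0] - s1[0]), s1[1] + t * (s2[1] - s1[1]))
    f = lambda t: min(point_seg_dist(c(t), a, b) for a, b in edges)

    # candidate local minima of f: clamped projections of the polygon
    # vertices onto S, plus the interval ends 0 and 1
    dx, dy = s2[0] - s1[0], s2[1] - s1[1]
    len2 = dx * dx + dy * dy
    proj = lambda p: max(0.0, min(1.0, ((p[0]-s1[0])*dx + (p[1]-s1[1])*dy) / len2))
    breaks = sorted({0.0, 1.0} | {proj(p) for p in polygon})

    # each subinterval contains at most one local maximum of f
    best_t = max((golden_max(f, a, b) for a, b in zip(breaks, breaks[1:])), key=f)
    best_t = max([best_t] + breaks, key=f)   # the breakpoints are candidates too
    return best_t, f(best_t)
```

For example, for a U-shaped polygon with a segment running through the cavity, the maximizing point is in the interior of $S$, as noted in the question.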
Since the numerical search still involves lots of calculations of $dist(p,P)$ for several points $p \in S$, a further speed-up can be achieved by presorting the edges $E_1, \dots, E_n$ by their distance from $S$. Then, when $d_k = \min(dist(p,E_1), dist(p,E_2), \dots, dist(p,E_k))$ has already been calculated for some $k$ and $dist(S,E_{k+1})$ is bigger than $d_k$, the minimum distance calculation for $dist(p,P)$ can be stopped, since the remaining edge distances cannot change $d_k$ any more.
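The pruning step can be sketched like this (again Python with hypothetical helper names; for non-intersecting segments in the plane, the segment-to-segment distance reduces to four point-to-segment distances):

```python
import math

def point_seg_dist(p, a, b):
    # standard point-to-segment distance in 2D
    dx, dy = b[0] - a[0], b[1] - a[1]
    len2 = dx * dx + dy * dy
    u = 0.0 if len2 == 0 else max(0.0, min(1.0, ((p[0]-a[0])*dx + (p[1]-a[1])*dy) / len2))
    return math.hypot(p[0] - (a[0] + u * dx), p[1] - (a[1] + u * dy))

def seg_seg_dist(a1, a2, b1, b2):
    # for non-intersecting segments the minimum involves an endpoint
    return min(point_seg_dist(a1, b1, b2), point_seg_dist(a2, b1, b2),
               point_seg_dist(b1, a1, a2), point_seg_dist(b2, a1, a2))

def presort_edges(s1, s2, polygon):
    """Edges of P paired with their distance to S, sorted by that distance."""
    edges = zip(polygon, polygon[1:] + polygon[:1])
    return sorted((seg_seg_dist(s1, s2, a, b), a, b) for a, b in edges)

def pruned_dist(p, sorted_edges):
    """dist(p, P) for a point p on S; stops as soon as the remaining edges
    are provably farther from all of S than the current minimum d_k."""
    best = math.inf
    for d_edge, a, b in sorted_edges:
        if d_edge > best:        # dist(S, E_{k+1}) > d_k: no later edge can win
            break
        best = min(best, point_seg_dist(p, a, b))
    return best
```

The sorted list is computed once per query segment and reused for every evaluation of $f$ during the golden-section searches, which is where the savings come from.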