I am reading Süli and Mayers' book "An Introduction to Numerical Analysis" and I think I found an error (page 234, second paragraph for reference). I couldn't find it back in the errata, however I still think it's a mistake, and I am now wondering what it should have said instead.
They were proving that the minimax polynomial of degree $n$ on an interval $[a,b]$ must be a polynomial $r$ for which the error $\|f-r\|_{\infty}$ equals $|f(x_{i})-r(x_{i})|$ at $n+2$ points $x_{0},\dots,x_{n+1}\in [a,b]$, where $f(x_{i})-r(x_{i})=-[f(x_{i+1})-r(x_{i+1})]$ for all $0\leq i \leq n$.
Now my question concerns the proof of the existence of the first (leftmost) point $x_{0}$. Using that $|f(x)-r(x)|$ is continuous on $[a,b]$, they show that $|f(x)-r(x)|$ attains its maximum $L$ on $[a,b]$. They let $$x_{0}=\min\{x\in[a,b] \mid |f(x)-r(x)|=L\}.$$
Then, they say, and I quote: "Now, $x_{0}=b$ would imply that $|f(x)-r(x)| = L$ for all $x \in [a, b]$."
I imagine it's clear where the mistake is. They use this to prove that $x_{0}<b$, so I'm assuming $x_{0}$ is ill-defined. What do you think should be there instead? I was thinking of something like $$x_{0}=\min\{x\in[a,b] \mid |f(y)-r(y)|=L \ \forall y \in [a,x] \},$$ but I believe this is still not sufficient. Any help is greatly appreciated.
No, the definition of $x_0$ as cited is correct: among all the points where the error is maximal, you take the leftmost one (in the standard orientation).
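A quick numerical sketch may help (with a toy $f$ and $r$ of my own choosing, not from the book): the set of maximizers of $|f-r|$ can contain several points, and the definition simply picks the leftmost one, which here is strictly less than $b$.

```python
import numpy as np

# Toy example (my own choice, not from the book):
# f(x) = sin(2*pi*x) on [0, 1], r = 0.  Then |f - r| attains its
# maximum L = 1 at two points, x = 0.25 and x = 0.75; the cited
# definition picks the leftmost one, so x_0 = 0.25 < b = 1.
a, b = 0.0, 1.0
x = np.linspace(a, b, 100001)
err = np.abs(np.sin(2 * np.pi * x) - 0.0)

L = err.max()
maximizers = x[err >= L - 1e-9]  # all grid points achieving the maximum
x0 = maximizers.min()            # the leftmost one
print(L, x0)  # ~1.0, ~0.25
```

So $x_0$ is perfectly well-defined: the set $\{x : |f(x)-r(x)| = L\}$ is nonempty (continuity on a compact interval) and closed, hence its infimum is attained.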
Now assume that $x_0=b$. Then $b$ is the only point where the error attains the value $L$, so all other local extrema of the error have a strictly smaller absolute value. Thus you can shift the approximation by a small constant $\delta$ (with the same sign as $f(b)-r(b)$ and $|\delta|$ small enough) so that the maximal error becomes smaller: $$ |f(b)-r(b)-\delta|=L-|\delta| $$ and $$ |f(x)-r(x)-\delta|\le|f(x)-r(x)|+|\delta|<L-|\delta| $$ at all other local extrema. That contradicts the assumption that $r$ was chosen as the optimal approximation of its class.
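Here is a minimal numerical illustration of that shift argument, again with a toy $f$ and $r$ that I picked so that the error peaks only at $b$ (these are my own choices, not from the book):

```python
import numpy as np

# Toy example: f(x) = exp(x) on [0, 1], r(x) = 1 + x (truncated Taylor
# series).  The error f - r is increasing on [0, 1], so |f - r| attains
# its maximum L = e - 2 only at the right endpoint, i.e. x_0 = b = 1.
x = np.linspace(0.0, 1.0, 10001)
err = np.exp(x) - (1.0 + x)

L = np.max(np.abs(err))
x0 = x[np.argmax(np.abs(err) >= L - 1e-12)]  # leftmost grid maximizer

# Shift r by a small delta with the same sign as f(b) - r(b):
delta = 0.1
shifted_err = err - delta
L_shifted = np.max(np.abs(shifted_err))

print(L, x0, L_shifted)  # L_shifted < L, so r was not optimal
```

With $\delta$ of the right sign, the peak at $b$ drops by $|\delta|$ while the error elsewhere stays below the new maximum, so the shifted polynomial $r+\delta$ beats $r$, which is the contradiction.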