Understanding where a recurrence of polynomials comes from


In Chapter $7$ of Principles of Mathematical Analysis ($3$rd edition) by Walter Rudin, Exercise $23$ says:

Put $P_0=0$, and define, for $n=0,1,2,\dots,$ $$P_{n+1}(x)=P_{n}(x)+\frac{x^2-P_{n}^2(x)}{2}.$$ Prove that $$\lim_{n\to \infty}P_{n}(x)=|x|,$$ uniformly on $[-1,1]$.

I am interested here in the origin of the polynomials (i.e. how Rudin came up with them) rather than a solution to the exercise. My intuition is that the polynomials arise as some sort of approximation of $\sqrt{x^2}$, but I am unsure how an approximation scheme for the square root function leads to the recurrence $P_0(x) = 0$ and $$P_{n+1}(x) = P_n(x) + \frac {x^2 - P_n^2(x)} 2.$$
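For concreteness, here is a quick numerical sketch in Python (the function name is my own) of the recurrence, checking that $P_n(x)$ does approach $|x|$:

```python
# Sketch of Rudin's recurrence: P_0 = 0,
# P_{n+1}(x) = P_n(x) + (x^2 - P_n(x)^2) / 2.
# Iterating many times should approach |x| on [-1, 1].

def P(n, x):
    p = 0.0
    for _ in range(n):
        p = p + (x * x - p * p) / 2
    return p

for x in [-1.0, -0.5, -0.1, 0.0, 0.3, 1.0]:
    print(f"x = {x:+.1f}   P_200(x) = {P(200, x):.6f}   |x| = {abs(x):.6f}")
```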

Best answer:

At the risk of merely labouring the obvious:

Imagine that we wanted a method for approximating $\sqrt{a}$ for any $a \geqslant 0$ with an increasing sequence $(b_n)_{n \geqslant 0}$ whose limit is $\sqrt{a}$ and which is defined by $$ b_0 = 0, \ \, b_{n+1} = f(b_n) \ \, (n = 0, 1, 2, \ldots) $$ for some "simple" function $f$ defined on $[0, \sqrt{a}].$

Perhaps the "simplest" function we might think of that could work is $$ f(x) = x + \frac{a - x^2}{2}. $$

In order to suit our purpose, however, $f$ must satisfy the condition $f(x) \leqslant \sqrt{a}$ for all $x \in [0, \sqrt{a}].$

Adapting the hint given in Rudin's question, we have $$ \sqrt{a} - f(x) = (\sqrt{a} - x)\left(1 - \frac{\sqrt{a} + x}{2}\right) $$ so $f$ will be usable iff $(\sqrt{a} + x)/2 \leqslant 1$ for all $x \in [0, \sqrt{a}].$ This is equivalent to $\sqrt{a} \leqslant 1,$ that is, to $a \leqslant 1.$

If we're stubborn, we can use our method to approximate $\sqrt{a}$ for all $a \geqslant 0,$ by finding an integer $N$ such that $N^2 \geqslant a,$ and calculating: $$ \sqrt{a} = N\sqrt{\frac{a}{N^2}}. $$
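The rescaling trick can be sketched as follows (a Python illustration only, not a serious square-root routine; `sqrt_scaled` and `n_iter` are my own names):

```python
# Extend the [0, 1]-only iteration b_{k+1} = b_k + (a - b_k^2)/2
# to any a >= 0 via sqrt(a) = N * sqrt(a / N^2) with N^2 >= a.
import math

def sqrt_scaled(a, n_iter=400):
    N = 1
    while N * N < a:          # find an integer N with N^2 >= a
        N += 1
    t = a / (N * N)           # now t lies in [0, 1]
    b = 0.0
    for _ in range(n_iter):
        b = b + (t - b * b) / 2
    return N * b

print(sqrt_scaled(2.0), math.sqrt(2.0))
```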

Thus we can make do, at a pinch, with this crackpot method that only works for $a \in [0, 1].$

Starting with $b_0 = 0,$ it is clear, much as in Rudin's hint, that for all $n$ we have $0 \leqslant b_n \leqslant b_{n+1} \leqslant \sqrt{a}$ and: $$ \sqrt{a} - b_n \leqslant \sqrt{a}\left(1 - \frac{\sqrt{a}}{2}\right)^n. $$ So, indeed, $$ \lim_{n \to \infty}b_n = \sqrt{a}. $$
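A small numerical check of this error bound (again a Python sketch, names mine):

```python
# Check that sqrt(a) - b_n <= sqrt(a) * (1 - sqrt(a)/2)^n
# for the iteration b_0 = 0, b_{k+1} = b_k + (a - b_k^2)/2.
import math

def b(n, a):
    v = 0.0
    for _ in range(n):
        v = v + (a - v * v) / 2
    return v

for a in [0.1, 0.5, 0.9, 1.0]:
    r = math.sqrt(a)
    for n in [1, 5, 20, 100]:
        err = r - b(n, a)
        bound = r * (1 - r / 2) ** n
        print(f"a = {a}  n = {n:3d}  error = {err:.3e}  bound = {bound:.3e}")
```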

Convergence is stupidly slow, so it was never a practical scheme for numerical computation. It suits Rudin's theoretical purpose, though, because he can use the inequality $$ x \left(1 - \frac{x}2\right)^n < \frac2{n+1} \quad (0 \leqslant x \leqslant 1; \ n = 0, 1, 2, \ldots) $$ (I proved this by differentiating, but perhaps Rudin has a neat trick in mind?) to infer that the convergence of the sequence $(b_n)$ to the limit $\sqrt{a}$ is uniform for all $a \in [0, 1].$
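For what it's worth, the differentiation argument can be spelled out (my own sketch, not necessarily the trick Rudin had in mind). For $n \geqslant 1,$ let $g(x) = x\left(1 - \frac{x}{2}\right)^n.$ Then $$ g'(x) = \left(1 - \frac{x}{2}\right)^{n-1}\left(1 - \frac{(n+1)x}{2}\right), $$ so on $[0, 1]$ the maximum of $g$ occurs at $x = \frac{2}{n+1},$ where $$ g\!\left(\frac{2}{n+1}\right) = \frac{2}{n+1}\left(\frac{n}{n+1}\right)^n < \frac{2}{n+1}. $$ (For $n = 0,$ simply $g(x) = x \leqslant 1 < 2.$)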

Clearly, $b_n$ is a polynomial in $a$ (of degree $2^{n-1},$ for $n \geqslant 1$).
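The degree claim can be checked with exact polynomial arithmetic (a Python sketch using coefficient lists, index = power of $a$; all names are mine):

```python
# Represent b_n as an exact polynomial in a and check that
# its degree is 2^(n-1) for n >= 1.
from fractions import Fraction

def poly_mul(p, q):
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

def poly_add(p, q):
    m = max(len(p), len(q))
    p = p + [Fraction(0)] * (m - len(p))
    q = q + [Fraction(0)] * (m - len(q))
    return [pi + qi for pi, qi in zip(p, q)]

def b_poly(n):
    b = [Fraction(0)]                             # b_0 = 0
    a = [Fraction(0), Fraction(1)]                # the polynomial "a"
    for _ in range(n):
        sq = poly_mul(b, b)
        delta = poly_add(a, [-c for c in sq])     # a - b^2
        b = poly_add(b, [c / 2 for c in delta])   # b + (a - b^2)/2
    return b

for n in range(1, 6):
    print(n, len(b_poly(n)) - 1)                  # expect degree 2^(n-1)
```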

Writing $a = x^2,$ we thus have a sequence of polynomials in $x$ that converges uniformly to $\left\lvert x \right\rvert$ for $x \in [-1, 1].$

I'll get me coat!