Continued fractions for $\sqrt{x} $ and beyond, valid formula?


For $x > 0$, is this trick valid?

I use $$ ( \sqrt{x}-1)(\sqrt{x}+1)=x-1 $$

then $$ \sqrt{x}+1 = \frac{x-1}{\sqrt{x}+1-2} $$

so I can use iterations to get the rational approximant

$$ \sqrt{x} \approx \frac{a(0)+a(1)x+...a(n)x^{n}}{b(0)+b(1)x+..+b(m)x^{m}} $$

By repeating this procedure I can use this method to get a rational approximant to $ x^{2^{-n}} $ for integer $n$ valid for positive $x$.
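To see what the truncations actually produce, here is a quick numerical sketch (illustrative, not from the question) that evaluates the truncated fraction in exact rational arithmetic by iterating the bottom-up map $s \mapsto \frac{x-1}{s-2}$ for $s = \sqrt{x}+1$. Note that, as the answers below explain, the truncations drift toward the negative branch $1-\sqrt{x}$:

```python
from fractions import Fraction

def cf_truncation(x, depth, seed=Fraction(0)):
    """Truncated continued fraction for s = sqrt(x)+1,
    obtained by iterating s -> (x-1)/(s-2) from `seed`."""
    s = seed
    for _ in range(depth):
        s = (x - 1) / (s - 2)
    return s  # a rational number whenever x and seed are rational

x = Fraction(2)
s = cf_truncation(x, 30)
print(float(s - 1))  # close to -sqrt(2): the truncations pick the negative root
```

So a rational approximant does come out, but which root it approximates is exactly the subtlety discussed below.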

3 Answers

BEST ANSWER

Background (you can skip this if you already know it).

Recall that simple continued fractions look like this $$ a_0 + \cfrac{1}{a_1 + \cfrac{1}{a_2 + \cfrac{1}{a_3 + \cfrac{1}{a_4 + \ddots}}}} $$ with the $a_i$ positive; these always converge and always have a unique value assigned to them. The reason is that we have $a_i \ge 1$. You will also be familiar with the fact that we can truncate a continued fraction to get a good approximation. Moreover, if I set $\alpha$ to any number from $0$ to $\infty$, then $$ a_0 + \cfrac{1}{a_1 + \cfrac{1}{a_2 + \alpha}}$$ still approximates the full value very well.
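This insensitivity to the seed is easy to check numerically. A small sketch (my own, with arbitrarily chosen seeds), using the all-ones continued fraction whose value is the golden ratio $\varphi = (1+\sqrt{5})/2$:

```python
import math

def cf_with_seed(coeffs, alpha):
    """Evaluate a_0 + 1/(a_1 + 1/(... + 1/(a_n + alpha))) bottom-up."""
    t = alpha
    for a in reversed(coeffs):
        t = a + 1 / t
    return t

phi = (1 + math.sqrt(5)) / 2
# All partial quotients equal to 1: whatever the seed, the truncation
# lands very close to phi.
for alpha in (0.1, 1.0, 10.0):
    print(alpha, cf_with_seed([1] * 25, alpha))
```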

Call $\alpha$ the seed; if we pick $\alpha$ correctly we get the exact value rather than an approximation.


Any finite number of iterations of $$\sqrt{x}+1 = \cfrac{x-1}{-2+(\sqrt{x}+1)}$$ is valid; the problem comes when you go infinite, or equivalently throw away the seed (defined earlier). The reason is that the continued fraction $$\sqrt{x}+1 = \cfrac{x-1}{-2+\cfrac{x-1}{-2+\cfrac{x-1}{-2+\cfrac{x-1}{-2+\ddots}}}}$$ is not simple, so we don't know a priori that it will converge or have a uniquely assigned value.

We can already see a lot of trouble with that continued fraction: its "$a_i$" are negative, and instead of numerator $1$ it has $x-1$, which dominates $2$ in absolute value when $x < -1$ or $x > 3$. The algebraic manipulation that produced it has no say in which branch of the square root we pick, so applying it to different seed values will give different results - we don't have that nice gravity towards a unique value as in the simple case. And for $x < 0$ any truncation of the continued fraction has an imaginary LHS but a real RHS, so we expect it to diverge (indeed, look at what happens when you take successive convergents for $x=-3$: the values seem to jump around randomly).

The question then would be how to determine seed values lying in a stable neighborhood, so that we can be sure the iteration converges to the square root we are actually interested in (it's not even clear that using the exact value we want as the seed will be numerically stable) - or else how to justify throwing the seed away.
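One way to see which value a given seed is drawn to is to look at the two fixed points of the bottom-up map $g(s) = \frac{x-1}{s-2}$, namely $s = 1 \pm \sqrt{x}$, and the size of $|g'|$ there. A sketch (my own framing, not part of the original answer) for $x = 7$:

```python
import math

x = 7.0
g = lambda s: (x - 1) / (s - 2)           # bottom-up map of the continued fraction
dg = lambda s: -(x - 1) / (s - 2) ** 2    # its derivative

s_plus = 1 + math.sqrt(x)                 # the root we want, sqrt(x)+1
s_minus = 1 - math.sqrt(x)                # the other branch
print(abs(dg(s_plus)))    # > 1: the wanted fixed point is repelling
print(abs(dg(s_minus)))   # < 1: the negative branch is attracting
```

So any seed that is even slightly off gets pulled toward $1-\sqrt{x}$, which matches the numerical example below.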


Here is a numerical example (in PARI/GP) showing the sensitivity to initial conditions:

? 6/(-2+6/(-2+6/(-2+6/(...(-2+6/(-2+(sqrt(7)+1-0.1)))...))))
% = -1.6457173944347541959729764099899018936
? 6/(-2+6/(-2+6/(-2+6/(...(-2+6/(-2+(sqrt(7)+1+0.0)))...))))
% = 3.6457513110645905905016157536392362230
? 6/(-2+6/(-2+6/(-2+6/(...(-2+6/(-2+(sqrt(7)+1+0.1)))...))))
% = -1.6457865347755941001945839627170441275

ANSWER

Convergence would be easier to prove if the other factor had been chosen as denominator:

$$ \sqrt{x} - 1 = \frac{x-1}{\sqrt{x} + 1} = \frac{x-1}{2 + (\sqrt{x} - 1)} $$

which upon repeated substitution gives effectively:

$$ \sqrt{x} = 1 + \cfrac{x-1}{2 + \cfrac{x-1}{2 + \cfrac{x-1}{2 + \cfrac{x-1}{2 + \ddots}}}} $$

If we assume $x \gt 1$ ($x = 1$ is trivial), then this continued fraction converges (and actually the finite truncations give alternating upper and lower bounds).

In this case the convergents of the continued fraction correspond to fixed-point iterates:

$$ y_0 = 1 $$

$$ y_{k+1} = f(y_k) = 1 + \frac{x-1}{1+y_k} $$

Here $y_0 = 1$ is a "seed" as referred to in user58512's answer, and we want to show that the sequence $\{y_k\}$ converges to $\sqrt{x}$, assuming $x \gt 1$. [We mentioned already the triviality of the case $x=1$, and as a side note the cases $0 \lt x \lt 1$ are covered by the Śleszyński–Pringsheim theorem.]
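A short numerical check of this iteration (a sketch; any $x \gt 1$ works, here $x = 7$):

```python
import math

x = 7.0
r = math.sqrt(x)
y, iterates = 1.0, []          # seed y_0 = 1
for k in range(60):
    y = 1 + (x - 1) / (1 + y)  # y_{k+1} = f(y_k)
    iterates.append(y)

print(iterates[-1])            # converges to sqrt(7) = 2.6457513...
# the convergents straddle sqrt(x): the errors alternate in sign
print([y_k > r for y_k in iterates[:6]])
```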

The proof of convergence mixes a bit of global and local analysis of the iteration. First a global note: the mapping $f$ sends $y_0 = 1$ to a positive value, and thereafter sends positive $y_k$ to positive $y_{k+1}$. It is natural to ask what fixed points $y = f(y)$ are possible, and the answer is that on $\mathbb{R}^+$ the only one is $y = \sqrt{x}$.

Next a piece of local analysis. Consider the "errors" $\epsilon_k = y_k - \sqrt{x}$. Since $y_k = \sqrt{x} + \epsilon_k$, these must satisfy:

$$ \epsilon_{k+1} = 1 + \frac{x-1}{1+\sqrt{x}+\epsilon_k} - \sqrt{x} = - \frac{\epsilon_k (\sqrt{x} - 1)}{\sqrt{x} + 1 + \epsilon_k} $$

This doesn't quite get us to the contraction-mapping conclusion, but it does two good things. Because $\epsilon_k = y_k - \sqrt{x}$ and $y_k$ is positive, the formula above demonstrates our earlier claim that the convergents alternate between upper and lower bounds on $\sqrt{x}$, i.e. that the $\epsilon_k$ alternate in sign (as long as they are nonzero). It also shows that once $\epsilon_k \gt -2$, the errors really do begin contracting. However at the start $\epsilon_0 = 1-\sqrt{x}$, which need not satisfy this, so we proceed with further analysis.

The fact that the convergents switch back and forth across $\sqrt{x}$ suggests we should look at taking two steps, mapping $\epsilon_k$ to $\epsilon_{k+2}$. After some algebra we find:

$$ \epsilon_{k+2} = \frac{\epsilon_k (\sqrt{x} - 1)^2}{(\sqrt{x} + 1)^2 + 2\epsilon_k} $$

and this is enough: since $y_k \gt 0$ forces $\epsilon_k \gt -\sqrt{x}$, the denominator exceeds $(\sqrt{x}+1)^2 - 2\sqrt{x} = x+1$, so $$ |\epsilon_{k+2}| \le \frac{(\sqrt{x}-1)^2}{x+1}\,|\epsilon_k| $$ and the factor is strictly less than $1$ for every $x \gt 1$. This gives a strict contraction every two steps. QED
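Both error recursions are easy to sanity-check numerically; a small sketch (the iterate $y = 2.5$ is chosen arbitrarily):

```python
import math

x = 7.0
r = math.sqrt(x)
f = lambda y: 1 + (x - 1) / (1 + y)

e0 = 2.5 - r           # error of an arbitrary iterate y = 2.5
e1 = f(2.5) - r
e2 = f(f(2.5)) - r

# one-step recursion: e_{k+1} = -e_k (sqrt(x)-1) / (sqrt(x)+1+e_k)
print(e1, -e0 * (r - 1) / (r + 1 + e0))
# two-step recursion: e_{k+2} = e_k (sqrt(x)-1)^2 / ((sqrt(x)+1)^2 + 2 e_k)
print(e2, e0 * (r - 1) ** 2 / ((r + 1) ** 2 + 2 * e0))
```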

ANSWER

Regarding @Hardmath's answer: here is the same numerical experiment I showed before, except this continued fraction picks out the positive root rather than the negative one:

? f(x,a) = 1 + (x-1)/(2+(x-1)/(2+(x-1)/(2+(x-1)/(2+(x-1)/(2+(x-1)/(2+(x-1)/(2+(x-1)/(2+(x-1)/(2+(x-1)/(2+(x-1)/(2+(x-1)/(2+(x-1)/(2+(x-1)/(2+(x-1)/(2+(x-1)/(2+a))))))))))))))))
? f(4,-3.0)
% = -2.0000000000000000000000000000000000000
? f(4,-3.0-0.01)
% = 2.0000372621864075721223091776486352376
? f(4,-3.0+0.01)
% = 1.9999629243489459922057503498491885171
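The same experiment in Python (a sketch mirroring the gp code above, with the inner map iterated 16 times):

```python
def f(x, a, depth=16):
    """1 + (x-1)/(2 + (x-1)/(2 + ... (2 + a) ...)) with `depth` levels."""
    for _ in range(depth):
        a = (x - 1) / (2 + a)
    return 1 + a

print(f(4, -3.0))          # stays on the repelling fixed point: exactly -2.0
print(f(4, -3.0 - 0.01))   # ~ 2.000037..., drawn to the positive root
print(f(4, -3.0 + 0.01))   # ~ 1.999963...
```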

Setting aside the original focus on continued fractions, if our goal were simply finding rational approximants of $\sqrt{x}$, then a Newton iteration would give faster convergence:

$$ y \mapsto (y + x/y)/2 $$

say with starting value $y = 1$ for the positive root (or $y = -1$ for the negative root). The 16 iterations used in the continued fraction above would, with a comparable number of operations, give approximately $2^{15}$ digits correct for $\sqrt{4}$.
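For comparison, a sketch of the Newton iteration in exact rational arithmetic, which produces rational approximants of $\sqrt{x}$ directly:

```python
from fractions import Fraction

def newton_sqrt(x, iterations):
    """Newton's method y -> (y + x/y)/2 for sqrt(x), starting from y = 1."""
    x = Fraction(x)
    y = Fraction(1)
    for _ in range(iterations):
        y = (y + x / y) / 2
    return y  # a rational approximant of sqrt(x)

y = newton_sqrt(2, 6)
print(float(y))  # already agrees with sqrt(2) to double precision
```

The number of correct digits roughly doubles with each step, versus the fixed contraction factor per step of the continued fraction.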