Convergence to $\sqrt{2}$


Here is a very good way to approximate $\sqrt{2}$:

Let $D_{k}$ and $N_{k}$ be the denominator and the numerator of the $k$th term, respectively.

Let $D_1=2$ and $N_1=3$, and for $k\geq2$, $D_k=D_{k-1}+N_{k-1}$, and $N_{k}=D_{k-1}+D_{k}$.

To clarify, here is the sequence:

$\frac{3}{2},\frac{2+2+3}{2+3},\frac{2+3+2+3+2+2+3}{2+3+2+2+3},\cdots$ that is $\frac{3}{2},\frac{7}{5},\frac{17}{12},\cdots$

  • Why does the sequence converge to $\sqrt{2}$ even if the initial numerator and denominator are other positive integers?

  • How can one find the $j$th term (i.e. $\frac{N_{j}}{D_{j}}$), say the $45$th term, without computing the preceding terms?

  • Is there a similar way to approximate the square root of any other positive integer, say $\sqrt{3}$?

I do know many ways to approximate square roots, such as Newton's method, but I am asking about methods similar to this one.


There are 4 solutions below.


We have $$x_k=\frac{N_k}{D_k}=\frac{2D_{k-1}+N_{k-1}}{D_{k-1}+N_{k-1}}=\frac{2+\frac{N_{k-1}}{D_{k-1}}}{1+\frac{N_{k-1}}{D_{k-1}}} =\frac{x_{k-1}+2}{x_{k-1}+1}=1+\frac1{x_{k-1}+1}$$ If the sequence $x_k$ converges, its limit obeys $$ x=1+\frac1{x+1},$$ that is, $x(x+1)=(x+1)+1$, so $x^2=2$.
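As a quick numerical sanity check, here is a minimal Python sketch iterating $x_k=\frac{x_{k-1}+2}{x_{k-1}+1}$, which follows directly from the recurrence for $N_k$ and $D_k$; the starting value $x_1=3/2$ is the question's, and the loop count is an arbitrary choice:

```python
import math

# Iterate x_k = (x_{k-1} + 2) / (x_{k-1} + 1), starting from
# x_1 = N_1 / D_1 = 3/2 as in the question.
x = 1.5
for _ in range(20):
    x = (x + 2) / (x + 1)

print(x)  # close to sqrt(2) = 1.41421356...
```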


You have a pair of linear recurrence relations, which you can write as $$\begin {pmatrix} N_k\\D_k \end {pmatrix}=\begin {pmatrix} 1&2\\1&1 \end {pmatrix}\begin {pmatrix} N_{k-1}\\D_{k-1} \end {pmatrix}$$ Now find the eigenvalues and eigenvectors of the matrix. The eigenvector $\begin {pmatrix} \sqrt 2\\1 \end {pmatrix}$ corresponding to the eigenvalue $\sqrt 2 +1$ will dominate, so the ratio $N_k/D_k$ converges to $\sqrt 2$ for any starting condition whose coefficient on this eigenvector is nonzero; this is automatic when the starting numerator and denominator are both positive, which answers the first question.
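This also answers the second question: the $j$th term can be computed directly by raising the matrix to the $(j-1)$th power. A Python sketch (helper names are mine) using exact integer arithmetic and exponentiation by squaring, so the $45$th term costs only $O(\log j)$ matrix multiplications:

```python
def mat_mul(A, B):
    # 2x2 integer matrix product.
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def mat_pow(A, k):
    # Exponentiation by squaring: O(log k) multiplications.
    R = [[1, 0], [0, 1]]
    while k:
        if k & 1:
            R = mat_mul(R, A)
        A = mat_mul(A, A)
        k >>= 1
    return R

def term(j):
    # (N_j, D_j) = M^(j-1) applied to (N_1, D_1) = (3, 2).
    M = mat_pow([[1, 2], [1, 1]], j - 1)
    return M[0][0]*3 + M[0][1]*2, M[1][0]*3 + M[1][1]*2

print(term(3))   # (17, 12), matching the sequence in the question
```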


This approximation seems analogous to the continued fraction expansion of $\sqrt{2}$ - specifically, each member of the sequence is a sort of "partial sum" of the infinite continued fraction for $\sqrt{2}$. That infinite fraction is given by

$$\sqrt{2} = 1 + \frac{1}{2 + \frac{1}{2 + \frac{1}{2 + \frac{1}{2 + \frac{1}{2 + ...}}}}}$$

and the "partial sum" is taken, in the context of continued fractions, by truncating everything after a plus sign. (I know "partial sum" isn't the right word, I just can't think of any other analogous terms.) Then after the truncation, you just simplify what is now a finite fraction, and boom, you've got a member of your sequence.

The construction of continued fractions may be helpful in figuring out why this converges to $\sqrt{2}$. Mathologer has a nice video on the subject here: https://www.youtube.com/watch?v=CaasbfdJdJg


Given the simplicity of both the continued fraction and the pattern, finding the $n$-th term of your sequence should be trivial. Try a few small example truncations (at the $n$-th plus sign, letting the one outside the fraction - the "first" one - be $n = 0$) and try to find a pattern in them. I will tell you right now that the sequence is given recursively by

$$\frac{N_n}{D_n} = \frac{N_{n-1} + 2 \cdot D_{n-1}}{N_{n-1} + D_{n-1}}$$

with initial value $\frac{N_0}{D_0} = \frac{1}{1}$.

This is in fact the same recursion you gave (your $N_k=D_{k-1}+D_k$ unpacks to $N_k=2D_{k-1}+N_{k-1}$), just started from $\frac{N_0}{D_0}=\frac{1}{1}$ instead of $\frac32$. I don't really know how to turn recursions into explicit definitions, but it looks like others have already solved that matter.
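The truncation idea above can be sketched in a few lines of Python (my own helper, using `Fraction` for exact arithmetic): evaluate the continued fraction cut off at the $n$-th plus sign, innermost $2$ first, then wrap outward.

```python
from fractions import Fraction

def truncation(n):
    # Truncate 1 + 1/(2 + 1/(2 + ...)) at the n-th plus sign:
    # innermost term is 2, then n-1 layers of 2 + 1/(...), then add 1.
    x = Fraction(2)
    for _ in range(n - 1):
        x = 2 + 1 / x
    return 1 + 1 / x

print([truncation(n) for n in range(1, 4)])  # [3/2, 7/5, 17/12]
```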


Finally, on the matter of whether such a method exists for all other roots of integers - it does! Every real number has a continued fraction expansion, and a sequence of approximations can likewise be created from it. (Not all of them will have nice patterns, though.) Some further reading on the matter:

https://en.wikipedia.org/wiki/Continued_fraction

One thing worth noting is that not all continued fractions are infinite. Specifically, a number is rational exactly when it has a finite continued fraction. Ergo, most square roots of integers will produce infinite continued fractions - infinite sequences in your case - but the square roots of perfect squares ($1, 4, 9, 16, 25, \ldots$) or of ratios of perfect squares ($1/4, 25/36, 81/100, 121/9, \ldots$) will have finite continued fractions, and hence finite, not infinite, sequences.
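To make the $\sqrt{3}$ case concrete, here is a Python sketch (function names are mine) of the standard algorithm for the continued fraction of $\sqrt{n}$, together with its convergents; non-square $n$ give an infinite periodic expansion, while perfect squares terminate immediately.

```python
import math

def sqrt_cf_terms(n, count):
    # Standard algorithm for the continued fraction of sqrt(n):
    # m, d, a are updated so a is the next partial quotient.
    a0 = math.isqrt(n)
    if a0 * a0 == n:
        return [a0]              # perfect square: finite expansion
    terms, m, d, a = [a0], 0, 1, a0
    for _ in range(count - 1):
        m = d * a - m
        d = (n - m * m) // d
        a = (a0 + m) // d
        terms.append(a)
    return terms

def convergents(terms):
    # h_k/k_k via h_k = a_k h_{k-1} + h_{k-2}, same for denominators.
    h0, h1, k0, k1 = 1, terms[0], 0, 1
    out = [(h1, k1)]
    for a in terms[1:]:
        h0, h1 = h1, a * h1 + h0
        k0, k1 = k1, a * k1 + k0
        out.append((h1, k1))
    return out

print(sqrt_cf_terms(3, 6))               # [1, 1, 2, 1, 2, 1]
print(convergents(sqrt_cf_terms(3, 6)))  # ends with (26, 15): 26/15 ~ sqrt(3)
```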


Suppose $2d^2=n^2\pm 1$. Then $$2(n+d)^2=2n^2+4nd+2d^2=(n+2d)^2+n^2-2d^2=(n+2d)^2\mp 1$$

Put $D=n+d$ and $N=n+2d$. The first equation gives $$2=\left(\frac nd\right)^2\pm \frac 1{d^2}$$ and the second gives $$2=\left(\frac ND\right)^2\mp \frac 1{D^2}$$ and you will see that the error changes sign and shrinks in magnitude (since $D>d$). So that is a simple way to see that this converges. In fact, if you substitute an error term $e$ for $1$, you still get convergence when you start with estimates which are not so good.
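The sign flip is easy to observe numerically. A sketch (the start $n=3$, $d=2$ is my choice; it satisfies $2d^2=n^2-1$), tracking the error $n^2-2d^2$ as the step $(n,d)\mapsto(n+2d,\,n+d)$ is applied:

```python
# Track the error n^2 - 2 d^2 under the step (n, d) -> (n + 2d, n + d).
n, d = 3, 2            # 2*2^2 = 3^2 - 1, so the invariant starts at +1
errors = []
for _ in range(5):
    errors.append(n * n - 2 * d * d)
    n, d = n + 2 * d, n + d

print(errors)  # [1, -1, 1, -1, 1]: the error alternates in sign
```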


Suppose we wanted to do the same for $\sqrt n$; how would we start? A good place is (a version of) Pell's equation $$x^2-ny^2=\pm 1$$ which has known solutions via continued fractions.

Suppose we have two solutions to this equation, $(x,y)=(a,b)$ and $(x,y)=(p,q)$ (which may be the same).

We have then that $(a+\sqrt n b)(a-\sqrt n b)=\pm 1$ and $(p+\sqrt n q)(p-\sqrt n q)=\pm 1$ (though the signs need not be the same). This gives us $$\pm 1=(a+\sqrt n b)(a-\sqrt n b)(p+\sqrt n q)(p-\sqrt n q)=(a+\sqrt n b)(p+\sqrt n q)\,(a-\sqrt n b)(p-\sqrt n q)=[(ap+nbq)+(aq+bp)\sqrt n][(ap+nbq)-(aq+bp)\sqrt n]=(ap+nbq)^2-n(aq+bp)^2$$ and we find that $(x,y)=(ap+nbq,\ aq+bp)$ is a further solution. So once we have one non-trivial solution, we can generate infinitely many.
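A small Python sketch of this composition step for $n=3$ (the seed solution $(2,1)$ of $x^2-3y^2=1$ and the helper name are my choices):

```python
def compose(n, s1, s2):
    # From solutions (a, b) and (p, q) of x^2 - n y^2 = +/-1,
    # form the new solution (a p + n b q, a q + b p).
    (a, b), (p, q) = s1, s2
    return (a * p + n * b * q, a * q + b * p)

n = 3
seed = (2, 1)            # 2^2 - 3*1^2 = 1
sols = [seed]
for _ in range(3):
    sols.append(compose(n, sols[-1], seed))

print(sols)   # [(2, 1), (7, 4), (26, 15), (97, 56)]
for x, y in sols:
    assert x * x - n * y * y == 1   # each composed pair still solves Pell
```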