When does $\sqrt{x+\sqrt{x+1+\sqrt{x+2+...}}}=0$?


Consider the function $f$ defined as the limit of the functions $$f_0(x)=\sqrt{x}$$ $$f_1(x)=\sqrt{x+\sqrt{x+1}}$$ $$f_2(x)=\sqrt{x+\sqrt{x+1+\sqrt{x+2}}}$$ $$...$$ so that $f(x)$ is defined iff $f_n(x)$ is defined for some $n$. The unique root $x_0$ of the function $f$ satisfies $f(x_0)=0$, and it can be alternatively expressed as the limit of the roots of $f_0, f_1, f_2, ...$. See the graphs below:

[Graphs of $f_0, f_1, f_2, \dots$ and their roots]

Can anyone find an expression equal to this limit? I realize that the chances of something nice and closed-form are slim - can we find a series, integral, or even nested radical representation of the real root of $f(x)$?

There are 7 answers below.


To address how I performed the computation, this was done in Mathematica using the following code:

F[x_, n_] := Fold[Sqrt[x + #1 + #2] &, 0, Reverse[Range[n + 1] - 1]]
FindRoot[F[x, 500] == 0, {x, -1.2}, WorkingPrecision -> 50]

The first command defines $f_n(x)$; the second chooses $n = 500$, which, due to the extremely rapid convergence of $\{f_n\}_{n \ge 1}$, is more than sufficient to reach the desired precision in a short computation time. You can check that the result is accurate by choosing $n = 100$ and seeing that the result is unchanged to $50$ digits of precision; indeed, even to $100$ digits. I would bound the error by roughly $10^{-n}$ when using $f_n$ in place of $f$.
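For readers without Mathematica, the same construction translates directly to Python (a sketch; `F` mirrors the `Fold`, building $f_n$ from the innermost radical outward, and the printed difference illustrates the rapid convergence in $n$):

```python
import math

def F(x, n):
    # Build f_n(x) from the innermost radical outward, like the Fold above
    acc = 0.0
    for k in range(n, -1, -1):
        acc = math.sqrt(x + k + acc)
    return acc

# The convergence in n is extremely rapid: deepening the radical
# barely changes the value once n is moderately large.
print(abs(F(0.5, 20) - F(0.5, 40)))
```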


Since there is apparently no representation of the root of $f(x)$ as a series, integral, or nested radical, this answer explains how to find the root by numerically solving $f(x)=0.$ Of course, if you have Mathematica, you could use the Mathematica code in heropup's answer, but that code doesn't explain exactly how the root is computed. Since the derivative of $f(x)$ is infinite at the root $x=r$ and $f$ is undefined for $x\lt r,$ neither Newton's method nor the secant method works. But the bisection method will work if you make a slight change in the definition of $f(x)$: if any radicand encountered during the computation of $f(c)$ is negative, define the value of $f(c)$ to be $-1$.

The function $f(x)$ can be generalized by replacing the sequence $1, 2, 3, \dots$ with the more general arithmetic sequence $C, 2C, 3C, \dots$ (where $C$ is some non-negative real number). The following C code computes the root of $f_N(x,C)$ in quad precision arithmetic. It computes the roots of $f_{50}(x,1)$, $f_{50}(x,2)$, and $f_{50}(x,3)$ as $-1.21103728351247151496696981615$, $-1.62419058855232395793631994913$, and $-1.94153569702169327219376232801$.

#include <stdio.h>
#include <stdlib.h>
#include <quadmath.h>  // link with -lquadmath

// Compute f(x,N,C).  
//
// f(x,0,C) = sqrt(x),
// f(x,1,C) = sqrt(x + sqrt(x+C)),
// f(x,2,C) = sqrt(x + sqrt(x+C + sqrt(x+2C))), ...
//
// Return -1 if any argument of sqrt is negative.
//
__float128 f(__float128 x, int N, __float128 C)
{
  __float128 fval=0;
  for (int i=0; i<=N; i++) {
    fval = x + C*(N-i) + fval;
    if (fval < 0) return -1;
    fval = sqrtq(fval);
  }

  return fval;
}

int main(int argc, char **argv)
{
  char f128_s[256];

  if (argc < 3) {
    printf("Usage: (N) (C)\n");
    return 1;
  }
  int N = atoi(argv[1]);
  __float128 C = strtoflt128(argv[2], NULL);
  if (N < 2) {
    printf("N must be at least 2\n");
    return 1;
  }

  // Solve f(x,N,C) = 0 for x using bisection method.  Exit bisection
  // method loop when maximum number of iterations is exceeded or
  // length of interval containing root is smaller than specified
  // tolerance.
  __float128 x0=-100, x1=0;
  int maxit=150, numit=1; 
  __float128 min_interval_len=2e-32, root;
  while ( 1 ) {
    if (numit >= maxit) break;

    root = (x0 + x1)*0.5;
    if (x1-x0 < min_interval_len) break;

    __float128 fval = f(root,N,C);

    if (fval > 0) x1 = root;
    else x0 = root;

    numit++;
  }

  quadmath_snprintf(f128_s, 256, "%.30Qg", root);
  printf("root=%s, numit=%d\n", f128_s, numit);
  return 0;
}

Not sure if this is helpful, but it may be fruitful to consider a sequence of functions $g_n(x)$ where $g_0(x) = x$ and

$$g_n(x) = [g_{n-1}(x)]^2 - x - n$$

In particular,

$$\lim_{n \to \infty} g_n(x_0) = 0.$$

I found this by setting

$$f_n(x) = 0$$

and moving all roots to the other side. For example, with $n = 2$,

$$\begin{align*} 0 &= \sqrt{x + \sqrt{x + 1 + \sqrt{x + 2 + \sqrt{x+3}}}}, \\ -x &= \sqrt{x + 1 + \sqrt{x + 2 + \sqrt{x+3}}}, \\ x^2 - x - 1 &= \sqrt{x+2 + \sqrt{x+3}}, \\ (x^2 - x - 1)^2 - x - 2 &= \sqrt{x+3}, \\ ((x^2 - x - 1)^2 - x - 2)^2 - x - 3 &= 0. \end{align*}$$

This makes it clear that if $a$ satisfies $f_n(a) = 0$, then $g_n(a) = 0$. I suppose, then, that the answer to the question depends on whether the sequence $g_n$ defined by $g_0 = x$ and

$$g_n = g_{n-1}^2 - x - n$$

has an explicit form.
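One can at least carry this out numerically: representing $g_n$ as a coefficient array and squaring by convolution, the relevant real root of $g_n$ converges to the root of $f$. A numpy sketch; the selection rule (take the leftmost real root, since the spurious roots introduced by squaring appear to lie to its right) is an assumption based on the small cases checked in the other answers:

```python
import numpy as np

def g_poly(n):
    # Coefficients of g_n (highest degree first): g_0 = x, g_k = g_{k-1}^2 - x - k
    g = np.array([1.0, 0.0])
    for k in range(1, n + 1):
        g = np.convolve(g, g)   # square the polynomial
        g[-2] -= 1.0            # subtract x
        g[-1] -= k              # subtract k
    return g

def leftmost_real_root(n):
    # Assumption: the leftmost real root is the non-spurious one
    r = np.roots(g_poly(n))
    real = r[np.abs(r.imag) < 1e-8].real
    return real.min()

for n in (2, 3, 4):
    print(n, leftmost_real_root(n))
```

For $n=2$ this recovers the exact root $-1$, for $n=3$ the value $\approx -1.153761$ quoted in another answer, and the values drift toward $-1.211\dots$ as $n$ grows.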


This problem is equivalent to proving that there exists some real number $r$ such that \begin{equation} \lim_{n \to \infty} f_n(r) = 0 \end{equation} where $f_n = g_n^0(x)$ and $g^k_n$ is defined by the recurrence \begin{equation} g_n^k(x) = \begin{cases} \sqrt {x+k} & \text{if }n=1 \\ \sqrt{x + k + g^{k+1}_{n-1}(x)} & \text{o.w.} \\ \end{cases}. \end{equation} Therefore, by definition, we have \begin{equation} f_n(x) = \begin{cases} \sqrt {x} & \text{if }n=1 \\ \sqrt{x + g^{1}_{n-1}(x)} & \text{o.w.} \\ \end{cases}, \end{equation} which after proving the following: \begin{equation} g_n^1(x) = g_n^0(x+1) , \end{equation} by a simple induction, gives us that \begin{equation} f_n(x) = \begin{cases} \sqrt {x} & \text{if }n=1 \\ \sqrt{x + f_{n-1}(x+1)} & \text{o.w.} \\ \end{cases}. \end{equation}

Let us suppose that $r_n - f_{n-1}(r_n+1) = \epsilon_n$ and that $\lim_{n \to \infty} \epsilon_n = 0 $ then we have that \begin{equation} \lim_{n \to \infty} f_{n}(r_n) = 0. \end{equation}

One possible route to find this is to consider the growth of the sequence \begin{equation} \delta_n(x) = f_n(x+1) - f_n(x). \end{equation}

Therefore, given a good approximation to $r_n$, an educated guess for $r_{n+1}$ should be some \begin{equation} r_{n+1} \in [r_{n} - 1, r_{n}]. \end{equation} This is because \begin{equation} f_{n+1}^2(r_{n} - \epsilon) = r_{n} - \epsilon + f_n(r_{n} - \epsilon) +\delta_n(r_{n} - \epsilon) \end{equation} and \begin{equation} \delta_n(r_{n} - \epsilon) = f_n(r_{n} +1- \epsilon) - f_n(r_{n} - \epsilon) = f_n(r_{n} +(1-\epsilon)) - f_n(r_{n} - \epsilon) \end{equation} give us that we should be looking for some \begin{equation} -\delta_n(r_{n} - \epsilon) \approx r_{n} - \epsilon + f_n(r_{n} - \epsilon), \end{equation} which is equivalent to saying \begin{equation} -\delta_n(r_{n} - \epsilon) - f_n(r_{n} - \epsilon) \approx r_{n} - \epsilon, \end{equation} or \begin{equation} -f_n(r_{n} + (1- \epsilon)) \approx r_{n} - \epsilon, \end{equation} or \begin{equation} f_n(r_{n} + (1- \epsilon)) \approx \epsilon - r_n . \end{equation}

which is a "sort-of" fixed point equation. Now we are getting somewhere!

We now have a method to recursively approximate solutions from previous solutions; i.e. \begin{equation} f_n(r_{n} + (1- \epsilon)) = \epsilon -r_{n} \implies f_{n+1}(r_{n} - \epsilon) = 0 . \end{equation}

Let us compute $r_n$ for small values to verify the solution.

  1. $f_1(x) = \sqrt x $ is the base case and it has the obvious solution $r_1 = 0$.
  2. Since $f_1(0) = 0$ and $\sqrt{1- \frac{\sqrt5 -1}{2}} = \frac{\sqrt5 -1}{2} $ we have an exact solution in the form of $r_1 = 0 $, $\epsilon = \frac{\sqrt5 -1}{2} $, and $r_2 = r_1 - \epsilon $; i.e. $f_2 (r_1 - \epsilon) = f_2\left(\frac{ 1-\sqrt5 }{2}\right) = 0$.
  3. Let $\epsilon = \frac{1}{2} (3 - \sqrt{5})$; then one can easily check that both $f_2(r_2 +1 - \epsilon) = \epsilon - r_2 $ and $f_3(r_2 - \epsilon) = f_3(-1) = 0 . $
  4. Let $\epsilon = 0.153761 $; then one can easily check that both $f_3(r_3 + 1 - \epsilon) \approx \epsilon - r_3 $ and $f_4(r_3 - \epsilon) = f_4(-1.153761) \approx 0 . $

In conclusion, we have the following recursion relation for the roots

\begin{equation} r_1 = 0, \qquad r_{n+1} = r_n - \epsilon \quad \text{where } \epsilon \text{ solves } f_n(r_n + 1 - \epsilon) = \epsilon - r_n ; \end{equation}

however, it is possible that a recursive analysis of the following two functions

  1. $f'_n(x)$

  2. $\delta_n(x)$

could give a better recursive bound $b_n < b_{n-1} < 1$ on the intervals

\begin{equation} r_{n+1} \in [r_n - b_n, r_n]. \end{equation}
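As a sanity check, the interval $r_{n+1} \in [r_n - 1, r_n]$ can be combined with bisection to track the roots numerically. A Python sketch using this answer's indexing ($f_1 = \sqrt{x}$), treating any point where a radicand goes negative as lying to the left of the root:

```python
import math

def f(x, n):
    # f_n(x) with f_1 = sqrt(x): the innermost term is x + (n - 1)
    acc = 0.0
    for k in range(n - 1, -1, -1):
        v = x + k + acc
        if v < 0:
            return None      # outside the domain, hence left of the root
        acc = math.sqrt(v)
    return acc

r = 0.0                      # r_1 = 0
for n in range(2, 60):
    lo, hi = r - 1.0, r      # r_n lies in [r_{n-1} - 1, r_{n-1}]
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if f(mid, n) is None:
            lo = mid         # undefined: mid is left of the root
        else:
            hi = mid         # defined (hence >= 0): mid is right of the root
    r = hi
print(r)
```

The successive values pass through $r_2 = \frac{1-\sqrt5}{2}$ and $r_3 = -1$ and settle near $-1.2110372835\dots$, matching the other answers.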


First of all, here is a quick experiment in pari/gp (known for fast, high-precision computation), using a precision of 1000 decimals, to locate the zero of (the square of) the function defined in the OP:

? \p 1000
? N = 10000.;
? { f(x) = a = x+N+1; for(k=0, N, a = x + N - k + sqrt(a)); a; }
? solve( x=-2, -1, f(x) )
%15 = -1.21103728351247151496696981615341479894622010332776604004744966664016950841178...

and the next (nine hundred or so) decimals were omitted.
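The experiment can also be reproduced without pari, e.g. with Python's standard `decimal` module (a sketch, sharing the trick used in the gp code above: we bisect $h(x)=x+\sqrt{x+1+\sqrt{x+2+\dots}}$, which, unlike $f=\sqrt h$, is smooth and changes sign at the root):

```python
from decimal import Decimal, getcontext

getcontext().prec = 50

def h(x, depth=200):
    # h(x) = x + sqrt(x+1 + sqrt(x+2 + ...)), truncated at x + depth
    a = Decimal(0)
    for k in range(depth, 0, -1):
        a = (x + k + a).sqrt()
    return x + a

lo, hi = Decimal(-2), Decimal(-1)   # h(-2) < 0 < h(-1)
for _ in range(170):                # interval shrinks to ~2^-170
    mid = (lo + hi) / 2
    if h(mid) > 0:
        hi = mid
    else:
        lo = mid
print(mid)   # -1.21103728351247151496696981615...
```

All radicands stay non-negative on $(-2,-1)$, since $h$ is defined there, so `Decimal.sqrt` never raises.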

Let us take a look now from the mathematical point of view.


Let $h_n:(-2,\infty)$ be the function $$ h_n(x)=x+\sqrt{x+1+\sqrt{x+2+\sqrt{\dots +\sqrt{x+n}}}}\ . $$ Let us show first by induction the double inequality $$ x\le h_n(x)\le 2x\ , \qquad x\ge 3\ , $$ The left inequality is valid for all $x\ge -1$. For the right inequality we have in the case $n=1$ the inequality $x+\sqrt{x+1}\le 2x$, i.e. $x+1\le x^2$, which is true for $x\ge 2$, and inductively, $$ h_{n+1}(x)=x+\sqrt{h_n(x+1)}\le x+\sqrt{2(x+1)}\le 2x\ ,\qquad x\ge 3\ . $$ One can use this to show the existence of the limit $$ h(x)=\lim_{n\to\infty}h_n(x) \ , $$ so $h$ is increasing, as a limit of increasing functions, and we can use the two inequalities $$ \begin{aligned} h(x)&\ge x+\sqrt{x+1+\sqrt{x+2+\sqrt{x+3+\sqrt{x+4+\sqrt{x+5}}}}} \ , \\ h(x)&\le x+\sqrt{x+1+\sqrt{x+2+\sqrt{x+3+\sqrt{x+4+\sqrt{2(x+5)}}}}} \ , \end{aligned} $$ to find a first location of the zero of $h$ between $-2$ and $-1$. On the RHS of the above inequalities we have monotone (increasing) functions, and the corresponding equations $$ \begin{aligned} 0&= x+\sqrt{x+1+\sqrt{x+2+\sqrt{x+3+\sqrt{x+4+\sqrt{x+5}}}}} \ , \\ 0&= x+\sqrt{x+1+\sqrt{x+2+\sqrt{x+3+\sqrt{x+4+\sqrt{2(x+5)}}}}} \ , \end{aligned} $$ can be used to obtain lower and upper bounds for the zero of $h$ in the interval $(-2,-1)$.


The above equations are equivalent to $$ \begin{aligned} (((((x-0)^2-(x+1))^2-(x+2))^2-(x+3))^2-(x+4))^2&-(x+5)&&= 0 \ , \\ (((((x-0)^2-(x+1))^2-(x+2))^2-(x+3))^2-(x+4))^2&-2(x+5)&&= 0 \ . \end{aligned} $$ My choice of weapons is now sage, and the real roots of the above polynomials which live in the interval $(-2,-1)$ are...

P = (((((x-0)^2-(x+1))^2-(x+2))^2-(x+3))^2-(x+4))^2-(x+5)
Q = (((((x-0)^2-(x+1))^2-(x+2))^2-(x+3))^2-(x+4))^2-2*(x+5)

roots_P = [ r for r in P.roots(ring=QQbar, multiplicities=False)
            if -2 < r and r < -1 and r.imag() == 0 ]
roots_Q = [ r for r in Q.roots(ring=QQbar, multiplicities=False)
            if -2 < r and r < -1 and r.imag() == 0 ]

This gives the exact placement of the roots:

sage: roots_P
[-1.208350985854702?, -1.180497738787836?, -1.115402635528049?]
sage: roots_Q
[-1.211810735228575?, -1.161954289026451?, -1.144052873349833?]

So far we have the lower bound -1.211810735228575? for the root of $h$ in $(-2,-1)$, coming from $Q$, but we do not yet have a definitive upper bound. (The candidate so far is -1.115402635528049?, since we are not yet in a position to claim the upper bound -1.208350985854702?.)

A word about the sage decimals shown. The string representation of an element of $\bar{\Bbb Q}=$QQbar is exact: sage always stores an exact version of the algebraic number, determined by its minimal polynomial together with a "box" localizing the root in $\Bbb C$.

Let us analyze the situation more closely. Let $a,b,c$ be the three roots shown for $P$ in the interval $(-2,-1)$. A closer look shows that we have in fact introduced spurious solutions by squaring; more exactly:

sage: a, b, c = roots_P
sage: a, b, c
(-1.208350985854702?, -1.180497738787836?, -1.115402635528049?)
sage: ( a + sqrt(a+1+ sqrt(a+2 + sqrt(a+3 + sqrt(a+4 + sqrt(a+5))))) ).n()
0.000000000000000
sage: ( b + sqrt(b+1+ sqrt(b+2 + sqrt(b+3 + sqrt(b+4 - sqrt(b+5))))) ).n()
1.08420217248550e-19
sage: ( b + sqrt(b+1+ sqrt(b+2 + sqrt(b+3 + sqrt(b+4 + sqrt(b+5))))) ).n()
0.0438259057173240
sage: ( c + sqrt(c+1+ sqrt(c+2 + sqrt(c+3 - sqrt(c+4 - sqrt(c+5))))) ).n()
1.08420217248550e-19
sage: ( c + sqrt(c+1+ sqrt(c+2 + sqrt(c+3 + sqrt(c+4 + sqrt(c+5))))) ).n()
0.145329668555162

so $b$, respectively $c$, is a root of a "cousin" of $P$, in which the last sign, respectively the last two signs, in front of the $\pm\sqrt{\ }$ expressions are taken to be minus.

The same can be done using the next polynomials, of degrees $2^6$, $2^7$, ..., but the higher the degree, the more complex the computation and the localization of the "good" roots become.


At this point, we have a proof that the zero of $h$ lies in the interval between the numbers -1.208350985854702? and -1.211810735228575?, using a rather cheap estimate. The root of $h$ can be described in this fashion as a complicated limit of algebraic numbers that do not have any structural connection. It is hard to say more (in a structural way). A numerical value was obtained in pari/gp.


Not really a solution, but here's a different perspective that was too long for a comment:

We have $$ f(x) = \sqrt{x+f(x+1)} $$ or equivalently $$ f(x)^2 =x+f(x+1) $$ which gives us a pseudo-recursion for $f'(x)$: $$ f'(x) = \frac{1+f'(x+1)}{2f(x)} $$ Just formally, we get:\begin{eqnarray} f'(x) = \frac{1}{2f(x)} + \frac1{4f(x)f(x+1)} +\frac1{8f(x)f(x+1)f(x+2)}+\cdots = \sum_{n=0}^\infty \frac1{2^{n+1}}\prod_{k=0}^n\frac1{f(x+k)} \end{eqnarray} Does this series converge? It seems certain that it ought to, and indeed since $f(x)\ge\sqrt{x}$, one gets the upper bound for each summand is $$ \frac1{2^{n+1}}\prod_{k=0}^n\frac1{\sqrt{x+k}} $$ which is clearly summable (and hence we also have uniform convergence on closed intervals when $x>0$, validating that this is indeed the derivative of $f$, not simply formally).
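The series for $f'$ can be checked numerically against a finite difference (a quick sketch, truncating both the radical and the series):

```python
import math

def f(x, depth=200):
    # truncation of f(x) = sqrt(x + sqrt(x+1 + sqrt(x+2 + ...)))
    a = 0.0
    for k in range(depth, -1, -1):
        a = math.sqrt(x + k + a)
    return a

x = 2.0
series, prod = 0.0, 1.0
for n in range(40):
    prod *= f(x + n)                 # running product f(x) f(x+1) ... f(x+n)
    series += 1.0 / (2.0 ** (n + 1) * prod)

central_diff = (f(x + 1e-6) - f(x - 1e-6)) / 2e-6
print(abs(series - central_diff))    # small: the series matches f'(2)
```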

Furthermore, for the $m$th derivative when $m>1$, we have $$ \sum_{j=0}^m\binom{m}{j} f^{(j)}(x)f^{(m-j)}(x) = f^{(m)}(x+1) $$ which can be solved for $f^{(m)}$ by taking $$ f^{(m)}(x) =\frac{f^{(m)}(x+1) - \sum_{j=1}^{m-1}\binom{m}{j} f^{(j)}(x) f^{(m-j)}(x)}{2f(x)} $$ and similarly gives us (at least formally!)$$ f^{(m)}(x) = -\sum_{n=0}^\infty \frac{\sum_{j=1}^{m-1}\binom{m}{j} f^{(j)}(x+n) f^{(m-j)}(x+n)}{2^{n+1}\prod_{k=0}^n f(x+k)} $$ One should similarly be able to show uniform convergence of these series.

Now, observe that the formula $$ f(x+1) = f(x)^2 - x $$ allows us to explicitly compute $f(x+n)$ in terms of $f(x)$ for all $n\in \mathbb{N}$. Say $f(x+n) = g_n(f(x),x)$. Then $f(x+n) = f(x+n-1)^2 - (x+n-1) = g_{n-1}(f(x),x)^2 - (x+n-1)$, so we have the following explicit recursion for $g_n$:$$ g_n(y,x) = \begin{cases}y & n=0\\ g_{n-1}(y,x)^2 - x - n + 1 & n>0 \end{cases} $$ Thus our derivative formula actually becomes a differential equation:$$ f'(x) = \sum_{n=0}^\infty \frac1{2^{n+1}}\prod_{k=0}^n\frac1{g_k(f(x),x)} $$ This is not extremely helpful, but it is mildly interesting! It will also imply that $f$ is the unique differentiable function satisfying $f(x)^2 = x+f(x+1)$ with its initial value $f(0)=\sqrt{1+\sqrt{2+\sqrt{3+\cdots}}}$.


Update: A sort-of solution. The root is $$ r = - \sqrt{1 + \sqrt{2 + \sqrt{3 + \sqrt{4 + \cdots} - \sqrt{ 1 + \cdots}} - \sqrt{1 + \sqrt{2 + \cdots} - \sqrt{1+\cdots}}} - \sqrt{1 + \sqrt{2 + \sqrt{3 + \cdots} - \sqrt{1 + \cdots}} - \sqrt{1 + \sqrt{2 + \cdots} - \sqrt{1+\cdots}}}} $$ In case it's not clear what the pattern is, any time you see $n+\cdots$ read it recursively as $$n+\cdots=n+\sqrt{n+1 + \cdots} - \sqrt{1+\cdots}$$

To see how this is a solution, one can see from this recursive reading: \begin{eqnarray} \sqrt{1+r + \sqrt{2+r + \cdots}} = \sqrt{1 - \sqrt{1 + \cdots} + \sqrt{2 +\cdots}} = -r \end{eqnarray} and hence $$ f(r) = \sqrt{r + \sqrt{1+r + \sqrt{2+r+\cdots}}} = \sqrt{r - r} = 0 $$ A recursive way to find this root is to observe that, using $f(x)^2 = x + f(x+1)$, one has $f(r+1) + r = 0$, or equivalently, $r+1 = 1-f(r+1)$. Thus $r+1$ is the unique fixed point of $x\to 1 - f(x)$. Hence one has, formally, $$ r+1 = 1-f(1-f(1-\cdots)) $$ In fact the fixed point is attracting; one can show inductively that $f_n'(1) < 1$ for all $n$, and hence $f'(1)<1$. Using the fact that $r<-1$ and that $f'$ is decreasing, one has $$ f'(r+1) = \frac{1+f'(r+2)}{2f(r+1)} = \frac{1+f'(r+2)}{-2r} \le \frac{1+f'(1)}{-2r} < \frac{1}{-r} < 1 $$ Thus the derivative of $1-f(x)$ at $x=r+1$ is less than $1$ in absolute value, so iterating it does converge to its fixed point. Defining a sequence $c_n$ recursively by $$ c_{n} = 1 - f_n(c_{n-1}) $$ with $c_0 = 1$, one obtains a sequence that converges to $r+1$. The first few terms: \begin{eqnarray} c_0 &=& 1\\ c_1 &=& 0 \\ c_2 &=& 1-\sqrt[4]{1+\sqrt2}\\ c_3 &=& 1-\sqrt{1-\sqrt[4]{1+\sqrt2}+\sqrt{2-\sqrt[4]{1+\sqrt2}+\sqrt{3-\sqrt[4]{1+\sqrt2}+\sqrt{4-\sqrt[4]{1+\sqrt2}}}}} \end{eqnarray} Clearly this gets unwieldy pretty fast. I programmed this recursion into Haskell and it seems to converge pretty quickly: after $c_{65}$, my computer shows the values oscillating between $-0.21103728351247164$ and $-0.21103728351247142$, so I guess the root is $$ r\approx -1.211037283512471\dots $$ which agrees with the numerical calculations from other users.
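The Haskell experiment is easy to reproduce in Python (a sketch; the starting value is not critical, since the fixed point is attracting):

```python
import math

def f(x, n):
    # f_n(x) = sqrt(x + sqrt(x+1 + ... + sqrt(x+n)))
    a = 0.0
    for k in range(n, -1, -1):
        a = math.sqrt(x + k + a)
    return a

c = 1.0
for n in range(1, 120):
    c = 1.0 - f(c, n)    # iterate x -> 1 - f_n(x)
print(c - 1.0)           # r = (r+1) - 1, approximately -1.211037283512471...
```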


$\color{brown}{\textbf{The task at hand.}}$

Let $-r$ be the root of $f(x).$

Since $f(x)$ is an increasing function and $$f(-r)=0,\quad f(-1) > f_2(-1) = 0,$$ we have $r>1.$

At the same time, \begin{cases} f(x+1) = f^2(x)-x\\ f(-r)=0\\ f(-r+1) = r\\ f(-r+2) = r^2+r-1\\ f(-r+3)= (r^2+r-1)^2 + r-2 = (r-1)(r^3+3r^2+2r+1)\dots.\tag1 \end{cases}

Besides, \begin{cases} f'(x+1) = 2f(x)f'(x)-1\\ f'(-r) = \infty\\ f'(-r+2) = 2rf'(-r+1)-1\\ f'(-r+3) = 2(r^2+r-1)f'(-r+2)-1\dots\tag2 \end{cases}

Conditions $(1)$ and $(2)$ allow one to determine the unknown parameters in various parametric representations of $f(x).$

$\color{brown}{\textbf{Inverse polynomial model.}}$

Let the inverse function be $g=f^{-1},$ so that $x = g(y);$ then \begin{cases} g(0) = -r\\ g'(0)=0\\ g(r) = -r+1\\ g(r^2+r-1) = -r+2\\ g((r-1)(r^3+3r^2+2r+1)) = -r+3,\tag3 \end{cases} and the coefficients of the approximating cubic polynomial of the form $$g(y)=-r+qy^2+py^3\tag4$$ can be obtained from the algebraic system \begin{cases} -r+1 = -r + qr^2 + pr^3\\ -r+2 = -r + q(r^2+r-1)^2 + p(r^2+r-1)^3\\ -r+3 = -r + q(r-1)^2(r^3+3r^2+2r+1)^2 + p(r-1)^3(r^3+3r^2+2r+1)^3, \end{cases} with the single positive solution $$p\approx 0.0622998,\quad q\approx 0.606587,\quad r\approx1.21088$$ (see also Wolfram Alpha results).

Therefore, the cubic-model estimate of the root of $f(x)$ is $\color{brown}{\mathbf{-1.21088}}.$
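The system can be solved without Wolfram Alpha: for a trial value of $r$ the first two conditions are linear in $(q,p)$, so one may solve them by Cramer's rule and bisect on $r$ until the third condition holds as well (a pure-stdlib sketch; the bracket $[1.15, 1.3]$ is an assumption based on the expected answer):

```python
def residual(r):
    # Solve q*r^2 + p*r^3 = 1 and q*A^2 + p*A^3 = 2 for (q, p), then
    # return how far the third condition q*B^2 + p*B^3 = 3 is from holding.
    A = r * r + r - 1
    B = (r - 1) * (r**3 + 3 * r * r + 2 * r + 1)
    det = r * r * A**3 - r**3 * A * A
    q = (A**3 - 2 * r**3) / det
    p = (2 * r * r - A * A) / det
    return q * B * B + p * B**3 - 3, q, p

lo, hi = 1.15, 1.3            # residual < 0 at 1.15, > 0 at 1.3
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if residual(mid)[0] < 0:
        lo = mid
    else:
        hi = mid
_, q, p = residual(mid)
print(mid, p, q)              # close to r = 1.21088, p = 0.0623, q = 0.6066
```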

$\color{brown}{\textbf{Power model.}}$

Taking conditions $(1)$ into account, one can use the explicit power model $$f(x) = r(x+r)^p,\tag5$$

where \begin{cases} r^2+r-1 = r\cdot 2^p\\ (r-1)(r^3+3r^2+2r+1) = r\cdot 3^p. \end{cases}

[Table: power model parameters]

The power-model estimate of the root of $f(x)$ is $\color{brown}{\mathbf{-1.21168}}.$
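The power-model system can likewise be reduced to one unknown: eliminate $p$ from the first equation and bisect on $r$ until the second holds (a stdlib sketch; the bracket $[1.1, 1.3]$ is an assumption):

```python
import math

def residual(r):
    # From r * 2^p = r^2 + r - 1, solve for p; then measure how far
    # the second condition r * 3^p = (r-1)(r^3 + 3r^2 + 2r + 1) is off.
    p = math.log((r * r + r - 1) / r, 2)
    B = (r - 1) * (r**3 + 3 * r * r + 2 * r + 1)
    return r * 3**p - B

lo, hi = 1.1, 1.3             # residual > 0 at 1.1, < 0 at 1.3
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if residual(mid) > 0:
        lo = mid
    else:
        hi = mid
print(mid)                    # about 1.21168, so the root estimate is -1.21168
```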

Plots of the inverse functions for the models considered are shown below.

[Plot: inverse functions for the considered models]

Using more detailed models can improve the accuracy of the approximation.