If the order of convergence is 1, does the asymptotic constant have to be less than 1?


I am self-studying basic numerical analysis using the textbook by Richard Burden, and I am reading about the order of convergence. The author states that if the order of convergence is linear, then the asymptotic constant has to be less than 1. But consider the sequences $1/n$ and $1/n^2$: they both converge to 0 linearly, yet the asymptotic constants are both 1, because the limit of $\frac{1/(n+1)}{1/n} = \frac{n}{n+1}$ is 1, and the same is true for the ratio $\frac{1/(n+1)^2}{1/n^2}$. So is it true that the asymptotic constant for linear convergence can be 1?

Best answer

So is it true that the asymptotic constant for linear convergence can be 1?

No. But Burden's text does not really explain why. The explanation follows below.

The order of convergence of any sequence can be defined by comparison with a special model sequence. Specifically, let $p \ge 1$, $c \ge 0$ and choose $\epsilon_1 > 0$. Define $$ \epsilon_{n+1} = c \epsilon_n^p$$ for $n \in \mathbb{N}$. This sequence is interesting precisely because $$ \frac{\epsilon_{n+1}}{\epsilon_n^p} = c,$$ so that $$ \frac{\epsilon_{n+1}}{\epsilon_n^p} \rightarrow c, \quad n \rightarrow \infty, \quad n \in \mathbb{N}$$ is trivially true. It is straightforward to verify that $$ \forall n \in \mathbb{N} \: : \: \epsilon_{n+1} = c^{\psi(n)} \epsilon_1^{p^n}$$ where $$\psi(n) = \sum_{j=0}^{n-1} p^j = \begin{cases} n & p = 1 \\ \frac{p^n - 1}{p-1} & p > 1 \end{cases}. $$ The proof is by induction on $n$.

Now in the case of $p=1$ we have $$\epsilon_{n+1} = c^n \epsilon_1.$$ It is clear that $$\epsilon_n \rightarrow 0, \quad n \rightarrow \infty, \quad n \in \mathbb{N}$$ if and only if $$c < 1.$$ While it may not be clear at this point, this is the reason why linear convergence requires that the asymptotic constant $c$ is strictly less than $1$.

Conversely, if $p>1$, then $$\epsilon_{n+1} = c^{\psi(n)} \epsilon_1^{p^n} = \left( c \epsilon_1^{p-1} \right)^{\frac{p^n-1}{p-1}} \epsilon_1.$$ It is clear that $$\epsilon_n \rightarrow 0, \quad n \rightarrow \infty, \quad n \in \mathbb{N}$$ if and only if $$c \epsilon_1^{p-1}< 1.$$ In particular, the size of $c > 0$ is irrelevant, provided that $\epsilon_1$ is sufficiently small. This is the reason why the size of the asymptotic constant is irrelevant when defining superlinear convergence.
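Here is a small numerical sketch of the model sequence (the parameter values are arbitrary choices, not taken from Burden): with $p=1$ the sequence dies out only when $c < 1$, while with $p = 2$ even a large constant such as $c = 10$ is harmless once $\epsilon_1$ is small enough that $c \epsilon_1^{p-1} < 1$.

```python
# Model sequence eps_{n+1} = c * eps_n**p (arbitrary demonstration values).

def model_sequence(eps1, c, p, n_terms):
    """Generate eps_1, eps_2, ..., eps_{n_terms} with eps_{n+1} = c * eps_n**p."""
    eps = eps1
    out = [eps]
    for _ in range(n_terms - 1):
        eps = c * eps**p
        out.append(eps)
    return out

# p = 1: converges to zero only because c < 1.
print(model_sequence(0.5, c=0.9, p=1, n_terms=8))
# p = 1 with c = 1: the sequence stalls at eps_1.
print(model_sequence(0.5, c=1.0, p=1, n_terms=8))
# p = 2 with the large constant c = 10: still converges, since c * eps1**(p-1) = 0.1 < 1.
print(model_sequence(0.01, c=10.0, p=2, n_terms=8))
```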

Now consider a general sequence $\{x_n\}_{n=1}^\infty \subset \mathbb{R}$. We say that it converges towards $x \in \mathbb{R}$ with order at least $p \ge 1$ if there exists a strictly positive sequence $\{\epsilon_n\}_{n=1}^\infty$ and $c \ge 0$, such that $$ |x - x_n| \leq \epsilon_n,$$ and $$\frac{\epsilon_{n+1}}{\epsilon_n^p} \rightarrow c, \quad n \rightarrow \infty, \quad n \in \mathbb{N}.$$ In the case of $p=1$ we require that $c \in [0,1)$. With this definition it is possible to show that the celebrated bisection algorithm converges to a root and that the order of convergence is at least $p=1$.
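As a concrete illustration of this definition, here is a minimal bisection sketch (the function and interval are arbitrary examples): the bracket half-width plays the role of the upper bound $\epsilon_n$, and $\epsilon_{n+1}/\epsilon_n = 1/2 < 1$, so the order of convergence is at least $p = 1$.

```python
# Minimal bisection sketch. After n halvings of the bracket [a, b], the midpoint
# is within (b - a) / 2**(n+1) of the root; this quantity is the bound eps_n,
# and eps_{n+1} / eps_n = 1/2 < 1, i.e. order of convergence at least p = 1.

def bisect(f, a, b, n_steps):
    for _ in range(n_steps):
        m = (a + b) / 2.0
        if f(a) * f(m) <= 0:
            b = m          # root lies in [a, m]
        else:
            a = m          # root lies in [m, b]
    return (a + b) / 2.0, (b - a) / 2.0   # approximation and error bound eps_n

f = lambda x: x**2 - 2                    # root at sqrt(2)
x_n, eps_n = bisect(f, 1.0, 2.0, 30)
print(x_n, eps_n)                         # eps_n = 1 / 2**31, a guaranteed upper bound
```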

Burden and many other authors use a related definition, which is elegant in theory but quite demanding in practice. Specifically, we say that $\{x_n\}_{n=1}^\infty \subset \mathbb{R}$ converges towards $x \in \mathbb{R}$ and the order of convergence is $p \ge 1$ if there exists a constant $c > 0$ such that $$ \frac{|x - x_{n+1}|}{|x-x_n|^p} \rightarrow c, \quad n \rightarrow \infty, \quad n \in \mathbb{N}.$$ Once again, we require that $c < 1$ when $p=1$. Obviously, we must ensure that $$x_n \not = x$$ at least for $n$ sufficiently large, but this is not the main obstacle. In many practical applications, such as the bisection algorithm, we can easily find an upper bound for the absolute value of the error $$e_{n+1} = x - x_{n+1},$$ but we are hard pressed to find a lower bound for the absolute value of the error $$e_n = x - x_n.$$ In particular, this is impossible for the bisection method. Hence it is practically impossible to say anything sensible about the limit of the relevant fraction, i.e. $$\frac{|e_{n+1}|}{|e_n|^p}.$$ In such cases, we use the more flexible definition given above and settle for the weaker statement that the order of convergence is at least $p$ for the appropriate value of $p$.
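To see the difficulty numerically, here is a quick experiment; it cheats by assuming the exact root is known, which is precisely what we lack in practice. For bisection applied to $x^2 - 2$ on $[1,2]$, the true error ratios $|e_{n+1}|/|e_n|$ jump around rather than settling towards a single constant $c$.

```python
# Experiment: true error ratios for bisection (assumes the exact root is known,
# which is exactly the information we do not have in a real computation).
import math

f = lambda x: x**2 - 2
root = math.sqrt(2.0)

a, b = 1.0, 2.0
errors = []
for _ in range(20):
    m = (a + b) / 2.0
    errors.append(abs(root - m))
    if f(a) * f(m) <= 0:
        b = m
    else:
        a = m

ratios = [errors[i + 1] / errors[i] for i in range(len(errors) - 1)]
print(ratios)   # the values fluctuate instead of approaching a single constant c
```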

But why do the sequences $1/n$ and $1/n^2$ have asymptotic constant 1 if they converge linearly?

They do not converge linearly; they converge sublinearly. If they have been identified as converging linearly, then the author has made a mistake.

It is important to appreciate the immense difference between such sequences as $$x_n = \frac{1}{n}$$ and $$y_n = 2^{-n}.$$ They both converge towards zero. They both converge slowly, but $y_n$ converges much faster than $x_n$. It is clear that $$x_{n+1} = \frac{n}{n+1} x_{n}$$ is only marginally smaller than $x_n$ when $n$ is large, but $$y_{n+1} = \frac{1}{2} y_n$$ is always significantly smaller than $y_n$ regardless of the value of $n$. Surely, these two sequences deserve to be classified differently?
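A crude way to quantify the difference is to count how many terms each sequence needs before it drops below a given tolerance; the tolerance $10^{-6}$ below is an arbitrary choice.

```python
# How many terms are needed before each sequence falls below a tolerance?
tol = 1e-6

n = 1
while 1.0 / n >= tol:
    n += 1
print("x_n = 1/n needs n =", n)        # on the order of a million terms

m = 1
while 2.0 ** (-m) >= tol:
    m += 1
print("y_n = 2**-n needs n =", m)      # only about 20 terms
```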

In general, a sequence $\{x_n\}_{n=1}^\infty$ is said to converge sublinearly to $x$ if

  1. It converges to $x$.
  2. We have $$\frac{|x-x_{n+1}|}{|x-x_n|} \rightarrow 1, \quad n \rightarrow \infty, \quad n \in \mathbb{N}$$

Thus, the sequence given by $x_n = \frac{1}{n}$ converges sublinearly to zero, while the sequence given by $y_n = 2^{-n}$ converges linearly to zero.
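The defining ratios can also be checked numerically: for $x_n = 1/n$ the ratio $x_{n+1}/x_n = n/(n+1)$ creeps up towards $1$ (sublinear convergence), while for $y_n = 2^{-n}$ the ratio is exactly $1/2$ for every $n$ (linear convergence with asymptotic constant $1/2$).

```python
# Successive ratios for x_n = 1/n (tends to 1) and y_n = 2**-n (exactly 1/2).
for n in (10, 100, 1000):
    x_ratio = (1.0 / (n + 1)) / (1.0 / n)     # = n / (n + 1) -> 1
    y_ratio = 2.0 ** -(n + 1) / 2.0 ** -n     # = 1/2 for every n
    print(n, x_ratio, y_ratio)
```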