I am self-studying basic numerical analysis using the textbook by Richard Burden, and I am reading about the order of convergence. The author states that if the order of convergence is linear, then the asymptotic constant has to be less than 1. But for the sequences $1/n$ and $1/n^2$, which both converge to 0 linearly, the asymptotic constants are both 1, because the limit of $\frac{n}{n+1}$ is 1 and the same is true for the ratio $\frac{1/(n+1)^2}{1/n^2}$. So is it true that the asymptotic constant for linear convergence can be 1?
2026-03-28
If the order of convergence is 1, does the asymptotic constant have to be less than 1?
669 views. Asked by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail)
There is 1 best solution below.
No. But Burden's text does not really explain why. The explanation follows below.
The order of convergence of any sequence can be understood using a special model sequence. Specifically, let $p \ge 1$, $c \ge 0$, and choose $\epsilon_1 > 0$. Define $$ \epsilon_{n+1} = c \epsilon_n^p$$ for $n \in \mathbb{N}$. This sequence is interesting precisely because $$ \frac{\epsilon_{n+1}}{\epsilon_n^p} = c,$$ so that $$ \frac{\epsilon_{n+1}}{\epsilon_n^p} \rightarrow c, \quad n \rightarrow \infty$$ is trivially true. It is straightforward to verify by induction on $n$ that $$ \forall n \in \mathbb{N} \: : \: \epsilon_n = c^{\psi(n-1)} \epsilon_1^{p^{n-1}},$$ where $\psi(0) = 0$ and $$\psi(m) = \sum_{j=0}^{m-1} p^j = \begin{cases} m & p = 1, \\ \frac{p^m - 1}{p-1} & p > 1. \end{cases} $$

Now in the case of $p=1$ we have $$\epsilon_n = c^{n-1} \epsilon_1.$$ It is clear that $$\epsilon_n \rightarrow 0, \quad n \rightarrow \infty$$ if and only if $$c < 1.$$ While it may not be clear at this point, this is the reason why linear convergence requires that the asymptotic constant $c$ be strictly less than $1$.

Conversely, if $p>1$, then $$\epsilon_n = c^{\psi(n-1)} \epsilon_1^{p^{n-1}} = \left( c \epsilon_1^{p-1} \right)^{\frac{p^{n-1}-1}{p-1}} \epsilon_1.$$ It is clear that $$\epsilon_n \rightarrow 0, \quad n \rightarrow \infty$$ if and only if $$c \epsilon_1^{p-1} < 1.$$ In particular, the size of $c > 0$ is irrelevant, provided that $\epsilon_1$ is sufficiently small. This is why the size of the asymptotic constant is irrelevant when defining superlinear convergence.
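The dichotomy above is easy to see numerically. The following sketch (my own illustration, not part of the original answer; the function name is made up) iterates the model recursion $\epsilon_{n+1} = c\,\epsilon_n^p$ for two parameter choices:

```python
# Sketch: behavior of the model sequence e_{n+1} = c * e_n**p.
# For p = 1 the constant c decides everything; for p > 1 only
# the condition c * e_1**(p-1) < 1 matters.

def model_sequence(c, p, e1, n_terms):
    """Return the first n_terms of the recursion e_{n+1} = c * e_n**p."""
    seq = [e1]
    for _ in range(n_terms - 1):
        seq.append(c * seq[-1] ** p)
    return seq

# p = 1 with c = 2 > 1: the error grows, even for a tiny starting error.
linear_bad = model_sequence(c=2.0, p=1, e1=1e-8, n_terms=40)

# p = 2 with c = 100 >> 1: still converges, because c * e1 < 1.
quadratic_ok = model_sequence(c=100.0, p=2, e1=1e-4, n_terms=8)

print(linear_bad[-1])    # grows without bound
print(quadratic_ok[-1])  # collapses toward zero
```

The second run shows concretely why a huge asymptotic constant is harmless for superlinear orders: once $c\,\epsilon_1^{p-1} < 1$, the exponent $p^{n-1}$ crushes the error anyway.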
Now consider a general sequence $\{x_n\}_{n=1}^\infty \subset \mathbb{R}$. We say that it converges towards $x \in \mathbb{R}$ with order at least $p \ge 1$ if there exist a strictly positive sequence $\{\epsilon_n\}_{n=1}^\infty$ and $c \ge 0$ such that $$ |x - x_n| \leq \epsilon_n$$ and $$\frac{\epsilon_{n+1}}{\epsilon_n^p} \rightarrow c, \quad n \rightarrow \infty.$$ In the case of $p=1$ we require that $c \in [0,1)$. With this definition it is possible to show that the celebrated bisection algorithm converges to a root and that the order of convergence is at least $p=1$.
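As an illustration of the "order at least $p$" definition, here is a minimal bisection sketch (an assumed implementation; the function `bisect` and the test equation $f(x) = x^2 - 2$ on $[1,2]$ are my choices, not from the answer). The error bounds $\epsilon_n = (b-a)/2^n$ are halved at every step, so $\epsilon_{n+1}/\epsilon_n = 1/2$ exactly:

```python
# Sketch: bisection on f(x) = x**2 - 2 over [1, 2]. The midpoint m_n
# satisfies |root - m_n| <= (b_n - a_n) / 2, and these bounds form a
# sequence with eps_{n+1} / eps_n = 1/2, i.e. order at least p = 1
# with c = 1/2 in the "order at least" sense.

def bisect(f, a, b, n_steps):
    """Return the midpoint iterates and the error bounds (b-a)/2 per step."""
    mids, bounds = [], []
    for _ in range(n_steps):
        m = (a + b) / 2.0
        mids.append(m)
        bounds.append((b - a) / 2.0)   # guaranteed bound on |root - m|
        if f(a) * f(m) <= 0:           # sign change in [a, m]
            b = m
        else:                          # sign change in [m, b]
            a = m
    return mids, bounds

mids, bounds = bisect(lambda x: x * x - 2.0, 1.0, 2.0, 30)
print(bounds[1] / bounds[0])  # exactly 0.5, at every step
```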
Burden and many other authors use a related definition that is elegant in theory but quite demanding in practice. Specifically, we say that $\{x_n\}_{n=1}^\infty \subset \mathbb{R}$ converges towards $x \in \mathbb{R}$ with order of convergence $p \ge 1$ if there exists a constant $c > 0$ such that $$ \frac{|x - x_{n+1}|}{|x-x_n|^p} \rightarrow c, \quad n \rightarrow \infty.$$ Once again, we require that $c < 1$ when $p=1$. Obviously, we must ensure that $$x_n \not = x,$$ at least for $n$ sufficiently large, but this is not the main obstacle. In many practical applications, such as the bisection algorithm, we can easily find an upper bound for the absolute value of the error $$e_{n+1} = x - x_{n+1},$$ but we are hard-pressed to find a lower bound for the absolute value of the error $$e_n = x - x_n.$$ In particular, this is impossible for the bisection method. Hence it is practically impossible to say anything sensible about the limit of the relevant fraction, i.e. $$\frac{|e_{n+1}|}{|e_n|^p}.$$ In such cases, we use the more flexible definition given above and settle for the weaker statement that the order of convergence is at least $p$ for the appropriate value of $p$.
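To see why the stricter definition fails for bisection, one can track the actual errors $|e_n| = |\sqrt{2} - m_n|$ for a concrete run (again a made-up example, not from the answer): their successive ratios fluctuate wildly, with some far below $1/2$ and some above $1$, so the limit demanded by the ratio definition simply does not exist.

```python
# Sketch: actual errors of bisection for f(x) = x**2 - 2 on [1, 2].
# The *bounds* are halved each step, but the *true* error ratios
# |e_{n+1}| / |e_n| have no limit.
import math

a, b = 1.0, 2.0
errors = []
for _ in range(20):
    m = (a + b) / 2.0
    errors.append(abs(math.sqrt(2.0) - m))
    if (a * a - 2.0) * (m * m - 2.0) <= 0:
        b = m
    else:
        a = m

ratios = [errors[i + 1] / errors[i] for i in range(len(errors) - 1)]
print(min(ratios), max(ratios))  # some ratios are tiny, some exceed 1
```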
They do not converge linearly; they converge sublinearly. If they have been identified as converging linearly, then the author has made a mistake.
It is important to appreciate the immense difference between such sequences as $$x_n = \frac{1}{n}$$ and $$y_n = 2^{-n}.$$ They both converge towards zero. They both converge slowly, but $y_n$ converges much faster than $x_n$. It is clear that $$x_{n+1} = \frac{n}{n+1} x_{n}$$ is only marginally smaller than $x_n$ when $n$ is large, but $$y_{n+1} = \frac{1}{2} y_n$$ is always significantly smaller than $y_n$, regardless of the value of $n$. Surely, these two sequences deserve to be classified differently?
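A quick numerical check of the two ratios (illustrative only; the helper names are my own):

```python
# Sketch: successive-term ratios for x_n = 1/n and y_n = 2**(-n).
# The ratio for x_n creeps up to 1 (sublinear convergence), while
# the ratio for y_n is 1/2 at every step (linear, with c = 1/2).

x_ratio = lambda n: (1.0 / (n + 1)) / (1.0 / n)   # equals n / (n + 1) -> 1
y_ratio = lambda n: 2.0 ** -(n + 1) / 2.0 ** -n   # always exactly 1/2

print(x_ratio(10), x_ratio(1000))  # 0.909..., 0.999...
print(y_ratio(10), y_ratio(1000))  # 0.5, 0.5
```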
In general, a sequence $\{x_n\}_{n=1}^\infty$ is said to converge sublinearly to $x$ if $$\frac{|x - x_{n+1}|}{|x - x_n|} \rightarrow 1, \quad n \rightarrow \infty.$$
Thus, the sequence given by $x_n = \frac{1}{n}$ converges sublinearly to zero, while the sequence given by $y_n = 2^{-n}$ converges linearly to zero.