Truncation error with growing step size

1.5k Views · Asked by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail) · 2026-03-25

When I read about finite difference methods (or really any approximation method), truncation error is often central to the discussion, and rightfully so. But it is most often discussed in the context of consistency/convergence, where the step size decreases. My question is: what happens when you go the other way? Instead of making the step size smaller, let's make it larger! Does the idea of a "leading truncation error" go out the window, with the higher-order terms dominating the error?

1 Answer
Yes: in an error expansion $c_1h^p+c_2h^{p+1}+\dots$, the second term dominates the first for $h>\frac{c_1}{c_2}$ (assuming both coefficients are positive), so for large step sizes the "leading" term is no longer leading.
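As a quick numerical sanity check of that crossover (with made-up coefficients $c_1=10^{-3}$, $c_2=1$, $p=2$, not taken from any particular method), one can evaluate the two terms on either side of $h=c_1/c_2$:

```python
# Hypothetical error-expansion coefficients for c1*h^p + c2*h^(p+1).
c1, c2, p = 1e-3, 1.0, 2        # crossover at h = c1/c2 = 1e-3

def terms(h):
    """Return the leading and next-order truncation terms at step size h."""
    return c1 * h**p, c2 * h**(p + 1)

t1, t2 = terms(5e-4)   # below the crossover: leading term dominates
u1, u2 = terms(2e-3)   # above the crossover: higher-order term dominates
print(t1 > t2, u2 > u1)   # prints: True True
```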
In numerical applications, the many steps required by small step sizes eventually accumulate enough floating-point noise to dominate the truncation error, so that a log-log plot of error vs. step size has a V shape: a fuzzy left leg where round-off dominates, a middle piece on the right leg that rises linearly at the method's order, and then a curved section for large $h$.
For the nonlinear test problem $F[y]=F[p]$ with $F[y]=y''+1.16\sin(y)$ and exact solution $y(t)=p(t)=\cos(t)$ over the interval $[0,10]$, integrated with the classical 4th-order Runge-Kutta method, the error plot shows exactly this V shape.
The main error trends are, first, the accumulated floating-point error, proportional to $\mu\frac{T}{h}$, where $\mu$ is the machine epsilon and $T$ the length of the integration interval, so that $T/h$ is the number of steps; and second, the global error of the method, proportional to $h^4$. A good fit was found with $h\mapsto\frac{10^{-15}}{h}+0.03\cdot h^4$.
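One practical consequence of such a two-term model $e(h)=\frac{a}{h}+b\,h^4$ is that the bottom of the V, the step size minimizing the total error, has the closed form $h^*=(a/(4b))^{1/5}$ (a standard calculus step, not part of the original answer). With the fitted values above:

```python
# Two-term error model e(h) = a/h + b*h^4, coefficients from the fit in the text.
a, b = 1e-15, 0.03

# Setting e'(h) = -a/h**2 + 4*b*h**3 = 0 gives the optimal step size:
h_star = (a / (4 * b)) ** 0.2
e_star = a / h_star + b * h_star**4
print(f"optimal step  h* ~ {h_star:.2e}")    # around 1.5e-3
print(f"minimal error e(h*) ~ {e_star:.2e}")
```

So for this fitted model, refining the grid beyond roughly $h\approx 1.5\cdot10^{-3}$ only makes the result worse.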
Adding further higher-order terms allows one to reproduce the nonlinear shape for larger $h$. Playing with the coefficients, a good fit was found manually with $\frac{10^{-15}}{h}+0.03\cdot h^4-0.08\cdot h^5+0.0225\cdot h^6$.
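The experiment behind these fits can be sketched as follows (my reconstruction: the coefficient $1.16$, the interval $[0,10]$, and the exact solution $\cos(t)$ are from the answer; the plain RK4 implementation and the choice of measuring the maximum error over all steps are my own). On the $h^4$ leg of the V, halving $h$ should shrink the error by roughly $2^4=16$:

```python
import math

T = 10.0  # integration interval [0, T]

def forcing(t):
    # F[p] for p(t) = cos(t):  p'' + 1.16*sin(p) = -cos(t) + 1.16*sin(cos(t))
    return -math.cos(t) + 1.16 * math.sin(math.cos(t))

def rhs(t, y):
    # First-order system for u'' + 1.16*sin(u) = forcing(t), with y = (u, u')
    u, v = y
    return (v, forcing(t) - 1.16 * math.sin(u))

def max_error(n):
    """Integrate with classical RK4 using n steps; return max |u - cos(t)|."""
    h = T / n
    t, y = 0.0, (1.0, 0.0)          # exact initial data: cos(0), -sin(0)
    err = 0.0
    for _ in range(n):
        k1 = rhs(t, y)
        k2 = rhs(t + h/2, tuple(yi + h/2 * ki for yi, ki in zip(y, k1)))
        k3 = rhs(t + h/2, tuple(yi + h/2 * ki for yi, ki in zip(y, k2)))
        k4 = rhs(t + h,   tuple(yi + h * ki for yi, ki in zip(y, k3)))
        y = tuple(yi + h/6 * (a + 2*b + 2*c + d)
                  for yi, a, b, c, d in zip(y, k1, k2, k3, k4))
        t += h
        err = max(err, abs(y[0] - math.cos(t)))
    return err

errs = [max_error(n) for n in (80, 160, 320)]     # h = 0.125, 0.0625, 0.03125
rates = [math.log2(e0 / e1) for e0, e1 in zip(errs, errs[1:])]
print("errors:", errs)
print("observed orders:", rates)   # should be close to 4 on this leg
```

Sweeping $h$ from these values down toward $10^{-4}$ and plotting on log-log axes reproduces the V shape described above; for much larger $h$ the observed order drifts away from 4, which is what the extra $h^5$ and $h^6$ terms in the fit capture.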