Why is $$h = \epsilon \cdot(1 + |x|)$$ a good step size in approximating numerically the derivative of a function $f:\mathbb{R} \to \mathbb{R}$ with $$\frac{f(x+h)-f(x)}{h}?$$
2026-04-03 06:04:28.1775196268
Step size $h$ in the incremental ratio approximation of the derivative
1.8k views, asked by user411609 (https://math.techqa.club/user/user411609/detail)

There is 1 best solution below.
The premise of the question is incorrect: the choice of $h = \epsilon(1+|x|)$ is much too small.
There are two sources of error in approximating a derivative. The first comes from replacing the derivative with a difference quotient; by Taylor's theorem the truncation error satisfies \begin{align*} \left| \frac{f(x+h) - f(x)}{h} - f'(x) \right| \le \frac{h}{2}|f''(x)|. \end{align*} However, we cannot compute $f(x)$ and $f(x+h)$ exactly; we compute representable numbers close to them (call them $\hat{f}(x+h)$ and $\hat{f}(x)$). For floating-point arithmetic with a guard digit we know that $\hat{f}(x+h) = f(x+h)(1+\epsilon_{1})$ and $\hat{f}(x) = f(x)(1+\epsilon_{2})$, where $|\epsilon_1|, |\epsilon_2| \le \epsilon$ and $\epsilon$ is the unit roundoff.[1] So the total error is bounded by \begin{align*} \left| \frac{\hat{f}(x+h) - \hat{f}(x)}{h} - f'(x) \right| &= \left| \frac{f(x+h)(1+\epsilon_1) - f(x)(1+\epsilon_2)}{h} - f'(x) \right| \\ &\le \frac{h}{2}|f''(x)| + \frac{1}{h}\bigl(|f(x+h)|+|f(x)|\bigr)\epsilon =: g(h).\end{align*}

Taking $h$ as small as the smallest representable number is therefore likely to lead to catastrophe, because the roundoff contribution $\epsilon/h$ becomes unbounded as $h \to 0$. Instead we want to minimize the error bound $g(h)$. Differentiating $g$ and setting the result to zero shows that the minimizer is \begin{align*} h^* = \sqrt{\frac{2(|f(x+h)| + |f(x)|)\epsilon}{|f''(x)|}}, \end{align*} at which $g(h^*) = \sqrt{2\epsilon\,|f''(x)|\,(|f(x+h)| + |f(x)|)}$. Since we do not know $f''(x)$ (and cannot compute it for less than the cost of using a higher-order method), we can instead choose \begin{align*} h = 2\sqrt{\epsilon}. \end{align*} For this choice, $g(h) = \sqrt{\epsilon}\,\bigl(|f''(x)| + (|f(x+h)| + |f(x)|)/2\bigr)$, so the $O(\sqrt{\epsilon})$ error estimate is preserved irrespective of the ratio $(|f(x+h)| + |f(x)|)/|f''(x)|$.
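This trade-off is easy to observe numerically. The sketch below (a minimal illustration, not part of the original answer; the helper name `forward_diff` is mine) approximates $f'(1)$ for $f = \exp$, whose exact derivative at $1$ is $e$, using a step that is too large, one near $2\sqrt{\epsilon}$, and one that is far too small:

```python
import math

def forward_diff(f, x, h):
    """One-sided difference quotient (f(x+h) - f(x)) / h."""
    return (f(x + h) - f(x)) / h

x = 1.0
exact = math.e        # d/dx exp(x) at x = 1
eps = 2.0 ** -52      # taking epsilon ~ machine epsilon for IEEE doubles

for h in (1e-3, 2 * math.sqrt(eps), 1e-13):
    err = abs(forward_diff(math.exp, x, h) - exact)
    print(f"h = {h:.3e}   error = {err:.3e}")
```

Truncation error dominates for the large step, roundoff dominates for the tiny one, and the middle choice $h = 2\sqrt{\epsilon} \approx 3 \times 10^{-8}$ lands near the minimum of $g$.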
There is a pathological case: if $x$ is so large that $x + 2\sqrt{\epsilon}$ rounds back to $x$ in floating-point arithmetic, then our choice of $h$ computes the derivative to be exactly zero. The correct choice of $h$ is then
h = float_next(x) - x, which is roughly $|x|\epsilon$; note that in this regime $|x|\epsilon \gg 2\sqrt{\epsilon}$. The error is then dominated by the fact that the spacing between adjacent floats grows with $|x|$.

All the previous analysis assumed that $x+h$ is representable. If $x+h$ is not representable, then you can show that the maximum error is \begin{align*} \frac{h}{2}|f''(x)| + \frac{1}{h}\bigl(|f(x+h)|+|f(x)| + |xf'(x)|\bigr)\epsilon, \end{align*} but this does not change the conclusion that we should choose $h \sim O(\sqrt{\epsilon})$.
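A hedged sketch of the pathological case, using Python's `math.nextafter` (available since Python 3.9) in place of the `float_next` above, with the hypothetical helper `forward_diff` for the difference quotient:

```python
import math

def forward_diff(f, x, h):
    """One-sided difference quotient (f(x+h) - f(x)) / h."""
    return (f(x + h) - f(x)) / h

f = lambda t: t * t    # f'(x) = 2x
x = 1e12               # spacing between adjacent doubles near 1e12 is ~1.2e-4
exact = 2 * x

eps = 2.0 ** -52
h_usual = 2 * math.sqrt(eps)               # ~3e-8: x + h_usual rounds back to x
h_next = math.nextafter(x, math.inf) - x   # smallest step that actually moves x

print(forward_diff(f, x, h_usual))   # 0.0: the numerator is exactly zero
print(forward_diff(f, x, h_next))    # crude but nonzero estimate of 2e12
```

The usual step is swallowed by rounding and yields exactly zero, while the one-ulp step recovers an estimate of the right magnitude; its accuracy is limited by the float spacing, exactly as the analysis predicts.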
As for choosing $h = \epsilon(1+|x|)$: substituting it into $g$ gives \begin{align*} \frac{\epsilon(1+|x|)}{2}|f''(x)| + \frac{|f(x+h)| + |f(x)|}{1+|x|} \approx \frac{2|f(x)|}{1+|x|}, \end{align*} since the $\epsilon$ in the roundoff term cancels exactly. The error is therefore $O(1)$ in $\epsilon$, which is enormous compared with the $O(\sqrt{\epsilon})$ otherwise achievable.
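A quick numerical check of this claim (again a sketch, not part of the original answer; `forward_diff` is an assumed helper), using $f = \sin$ at $x = 1$, where $f'(1) = \cos 1 \approx 0.5403$:

```python
import math

def forward_diff(f, x, h):
    """One-sided difference quotient (f(x+h) - f(x)) / h."""
    return (f(x + h) - f(x)) / h

x = 1.0
exact = math.cos(x)
eps = 2.0 ** -52

h_question = eps * (1 + abs(x))   # the step size the question proposes
h_answer = 2 * math.sqrt(eps)     # the O(sqrt(eps)) step advocated above

err_question = abs(forward_diff(math.sin, x, h_question) - exact)
err_answer = abs(forward_diff(math.sin, x, h_answer) - exact)
print(err_question, err_answer)   # the first error is larger by orders of magnitude
```

With $h = \epsilon(1+|x|)$ the numerator is only a few ulps of $\sin 1$, so the quotient is quantized to coarse multiples and the error is $O(1)$, while the $2\sqrt{\epsilon}$ step achieves roughly square-root-of-epsilon accuracy.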
[1] Note that this assumes the algorithm computing our function produces the best representable estimate at the given precision. For C standard library functions there are sometimes guarantees that this is the case, but most special-function libraries do not compute to this accuracy, since extracting the last few digits of precision represents the bulk of the computation time. Hence we should generally assume that $\hat{f}(x) \approx f(x)(1+k\epsilon)$ for some $k \in \mathbb{N}$.