I was recently watching a tutorial on Euler's method for approximating differential equations, and the whole time I was thinking "why can't you just take the limit of the step size $h$ as it goes to $0$, and get an exact or near-exact approximation of the differential equation?" So that is my question: is it possible to do that?
With Euler's method for differential equations, is it possible to take the limit as $h \to 0$ and get an exact approximation?

Asked 2026-04-13 by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail)

1 answer:
Not really. Recall that the Euler method for $y^\prime=f(x,y)$ takes the form
$$\begin{align*}x_{k+1}&=x_k+h\\y_{k+1}&=y_k+hf(x_k,y_k)\end{align*}$$
for some stepsize $h$. Taking $h=0$ here is equivalent to not moving at all!
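The iteration above can be sketched in a few lines of Python (a minimal sketch; the function name and signature are my own, not from the original answer):

```python
def euler(f, x0, y0, h, n):
    """Take n Euler steps of size h for y' = f(x, y), returning the final y."""
    x, y = x0, y0
    for _ in range(n):
        y = y + h * f(x, y)   # y_{k+1} = y_k + h f(x_k, y_k)
        x = x + h             # x_{k+1} = x_k + h
    return y

# With h = 0 the iteration never advances: x and y stay at their initial
# values, illustrating why one cannot literally "take h = 0".
print(euler(lambda x, y: x - y, 0.0, 1.0, 0.5, 2))  # → 0.5
```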
However, your idea can be made slightly more practical. Consider the application of the Euler method with stepsize $h/2$:
$$\begin{align*}x_{k+1/2}&=x_k+h/2\\y_{k+1/2}&=y_k+hf(x_k,y_k)/2\\x_{k+1}&=x_{k+1/2}+h/2\\y_{k+1}&=y_{k+1/2}+hf(x_{k+1/2},y_{k+1/2})/2\end{align*}$$
The value of $y_{k+1}$ from these two steps can usually be expected to be a bit more accurate than the value of $y_{k+1}$ from the $h$-step Euler method. We can keep playing this game, taking $4$ steps with stepsize $h/4$, $8$ steps with stepsize $h/8$, and so on, yielding a sequence of estimates for $y_{k+1}$ corresponding to decreasing $h$.
One way to estimate what happens as $h\to 0$ is to take all those estimates of $y_{k+1}$ along with their associated stepsizes and fit an interpolating polynomial to them. For example, taking $y_{k+1}^{(0)}$ to be the result for stepsize $h$, $y_{k+1}^{(1)}$ the result for stepsize $h/2$, and $y_{k+1}^{(2)}$ the result for stepsize $h/4$, one can fit a quadratic interpolating polynomial to the three points $\{(h,y_{k+1}^{(0)}),(h/2,y_{k+1}^{(1)}),(h/4,y_{k+1}^{(2)})\}$, and then estimate the limit as $h\to 0$ by evaluating that polynomial at $0$.
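The "evaluate the interpolating polynomial at $0$" step can be done without explicitly constructing the polynomial, for instance via Neville's algorithm. Here is a hedged Python sketch (the function name is my own):

```python
def extrapolate_to_zero(hs, ys):
    """Evaluate at h = 0 the polynomial interpolating the points (hs[i], ys[i]),
    using Neville's algorithm in place."""
    p = list(ys)
    for level in range(1, len(p)):
        for i in range(len(p) - level):
            # Combine neighboring lower-order estimates; the evaluation
            # point is h = 0, so the Neville weights reduce to -hs values.
            p[i] = (hs[i] * p[i + 1] - hs[i + level] * p[i]) / (hs[i] - hs[i + level])
    return p[0]

# Sanity check: sampling p(h) = 1 + h + h^2 at h = 1/2, 1/4, 1/8 and
# extrapolating to h = 0 recovers p(0) = 1 exactly.
print(extrapolate_to_zero([0.5, 0.25, 0.125], [1.75, 1.3125, 1.140625]))  # → 1.0
```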
This scheme of using the interpolating polynomial to estimate the limit at $0$ is called Richardson extrapolation, after Lewis Fry Richardson. In practice, one certainly takes more than three points for the interpolation, and the order of the interpolating polynomial needed is estimated from the behavior at past points. The idea of applying Richardson extrapolation to the numerical solution of differential equations in this way is due to Roland Bulirsch and Josef Stoer. Their Bulirsch-Stoer method uses a slightly different integrator (the modified midpoint method) as the basis for the extrapolations, as well as a slightly modified extrapolation scheme, but it is essentially the same idea as presented here.
Here's a tiny Mathematica demonstration of the Bulirsch-Stoer idea. We tried the Euler method on the differential equation $y^\prime=x-y$ with initial condition $y(0)=1$ for the stepsizes $h=1/2,1/4,1/8,1/16$, and compared the result at $x=1$ with the true value $y(1)=2/e$. The errors were

{-0.235759, -0.102946, -0.0485411, -0.0236106}

As you can see, the accuracy isn't too good for any of these.
Here's what happens after Richardson extrapolation using those same results from Euler:

0.0000823142

We end up with a result good to three or so digits, much better than even the result of Euler corresponding to $h=1/256$. Pretty good, I would say, considering that only $2+4+8+16=30$ evaluations of $f(x,y)=x-y$ were needed for a result with three-digit accuracy, while Euler with $256$ steps (and thus $256$ evaluations of $f(x,y)$) can only manage two good digits.
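The original Mathematica code did not survive in this copy, but the experiment is easy to reproduce. Here is a self-contained Python sketch (function names are my own) that regenerates both the Euler errors and the extrapolated error quoted above:

```python
import math

def euler(f, x0, y0, h, n):
    """Take n Euler steps of size h for y' = f(x, y); return the final y."""
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)
        x += h
    return y

def extrapolate_to_zero(hs, ys):
    """Neville's algorithm: value at h = 0 of the polynomial through (hs[i], ys[i])."""
    p = list(ys)
    for level in range(1, len(p)):
        for i in range(len(p) - level):
            p[i] = (hs[i] * p[i + 1] - hs[i + level] * p[i]) / (hs[i] - hs[i + level])
    return p[0]

f = lambda x, y: x - y              # right-hand side of y' = x - y
exact = 2 / math.e                  # true value y(1) = 2/e for y(0) = 1
ns = (2, 4, 8, 16)
hs = [1 / n for n in ns]
ys = [euler(f, 0.0, 1.0, 1 / n, n) for n in ns]

print([y - exact for y in ys])               # ≈ -0.235759, -0.102946, -0.0485411, -0.0236106
print(extrapolate_to_zero(hs, ys) - exact)   # ≈ 0.0000823
```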
As an aside, Mathematica implements Bulirsch-Stoer internally in NDSolve[], via the option Method -> "Extrapolation".