Results better without adaptive $h$ than with adaptive $h$ when using RKF45?


I'm trying to solve a fairly simple ODE,

$$-y'=t^{-2} +4(t-6)e^{-2(t-6)^2} ,~~~ y(1)=1~\text{ for }~t\in[0,10].$$

via the Runge-Kutta-Fehlberg method. I don't understand why, but the approximations I get when I graph them with and without adaptivetoggle are off (the method that uses an adaptive step size should in theory give more precise results). On top of that, the error estimates in each step should be nonzero; I don't believe they can just be 0 all the time. This is the graph I'm getting — the errors just read 0 throughout.

I'm running the following code, but I can't spot where I went wrong:

import matplotlib.pyplot as plt
import numpy as np
def rk45(f, t0, tf, y0, ε, h, adaptivetoggle):
    t = t0; y = y0  # we initiate the loop variable t
    valuesy = []
    valuest = []
    valuesh = []
    valuesErr = []
    while t < tf:
        valuesh.append(h)
        # Defining the Runge-Kutta functions
        f1 = f(t,             y)
        f2 = f(t + (1/4)*h,   y + h*(1/4)*f1)
        f3 = f(t + (3/8)*h,   y + h*(3/32)*f1      + h*(9/32)*f2)
        f4 = f(t + (12/13)*h, y + h*(1932/2197)*f1 - h*(7200/2197)*f2 + h*(7296/2197)*f3)
        f5 = f(t + h,         y + h*(439/216)*f1   - h*8*f2           + h*(3680/513)*f3  - h*(845/4104)*f4)
        f6 = f(t + h/2,       y - h*(8/27)*f1      + h*2*f2           - h*(3544/2565)*f3 + h*(1859/4104)*f4 - h*(11/40)*f5)
        # Now we calculate the solutions for the order 4 and 5 approximation
        y4 = y + h*((25/216)*f1 + (1408/2565)*f3  + (2197/4104)*f4   - (1/5)*f5)
        # y5 = y + h*((16/135)*f1 + (6656/12825)*f3 + (28561/56430)*f4 - (9/50)*f5 + (2/55)*f6)
        # With this toggle we choose whether to use an adaptive h or not
        if adaptivetoggle == True:
            Error = abs((1/360)*f1 - (128/4275)*f3 - (2197/75240)*f4 + (1/50)*f5 + (2/55)*f6)
            valuesErr.append(Error)
            hnew = 0.9*h*(ε/Error)**(0.25)
            # We decide if we should re-calculate the integration step (if its precision is too bad)
            if hnew < h:
                h = hnew
            if hnew >= h:
                y = y4
                valuesy.append(y)
                valuest.append(t)
                t += h  # increment the loop
                h = hnew
        else:  # if we don't use adaptive h
            y = y4
            valuesy.append(y)
            t += h  # increment the loop
            valuest.append(t)
    return valuest, valuesy, valuesh, valuesErr

# We define the function to feed to our RK routine, and also the analytical solution, to plot against our approximation
def f(t,y):
    return -1/t**2-4*(t-6)*np.exp(-2*(t-6)**2)

def anasol(x):
    return -1/np.exp(50)+np.exp(-2*(x-6)**2)+1/x
dt = np.linspace(0,10,1000)[1:]


plt.plot(dt,anasol(dt),label='Analytical solution')
plt.scatter(rk45(f,1,10,1,10**-6, 0.2 , True)[0],rk45(f,1,10,1,10**-6, 0.2 , True)[1], label = 'RKF45+adaptiveh', color="tab:red", s=1)
plt.scatter(rk45(f,1,10,1,10**-6, 0.2 , False)[0],rk45(f,1,10,1,10**-6, 0.2 , False)[1], label = 'RKF45+constanth', color="tab:pink", s=6)
plt.scatter(rk45(f,1,10,1,10**-6, 0.2 , True)[0],rk45(f,1,10,1,10**-6, 0.2 , True)[2], label = 'h',color="tab:green", s=1)
plt.xlabel('t')
plt.ylabel('y')
plt.ylim(0,1.5)
plt.xlim(0.5,10.5)
plt.title('RKF')
plt.legend()
plt.grid()
plt.show()

#We now plot error vs t
plt.scatter(rk45(f,1,10,1,10**-6, 0.2 , True)[0],rk45(f,1,10,1,10**-6, 0.2 , True)[3], label = 'Error' )
#plt.xlabel('t')
#plt.ylabel('Error')
#plt.title('Error')
#plt.legend()
#plt.grid()
#plt.show()
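As a sanity check (this snippet is an addition, using only the f and anasol defined above), one can confirm that anasol really does satisfy the initial condition $y(1)=1$ and, via a central difference, the ODE itself — so the reference curve in the plots is trustworthy:

```python
import numpy as np

def f(t, y):
    return -1/t**2 - 4*(t-6)*np.exp(-2*(t-6)**2)

def anasol(x):
    return -1/np.exp(50) + np.exp(-2*(x-6)**2) + 1/x

# Initial condition: anasol(1) = -e^{-50} + e^{-50} + 1 = 1
assert abs(anasol(1.0) - 1.0) < 1e-12

# ODE check: a central difference of anasol should match f(t, anasol(t))
h = 1e-5
for t in [1.5, 3.0, 6.0, 9.5]:
    dnum = (anasol(t + h) - anasol(t - h)) / (2*h)
    assert abs(dnum - f(t, anasol(t))) < 1e-6
```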

There is 1 solution below.


My proposed changes to the time loop are

    while t < tf:
        v4, v5, err = rk45update(f, t, y, h)
        # With this toggle we choose whether to use an adaptive h or not
        if adaptivetoggle == True:
            Error = abs(err)
            ratio = (ε/Error)**0.25
            ratio = min(2.4, max(0.05, ratio))
            # We decide if we should accept the step (or reject it if its precision is too bad)
            if ratio >= 0.8:
                y = y + h*v4
                t += h  # increment the loop
                valuesy.append(y)
                valuest.append(t)
                valuesErr.append(Error/ε)
                valuesh.append(h)
            h = 0.9*h*ratio

I factored out the method computation into rk45update so that the essentials of the time loop are better visible. v4 and v5 are the combined slopes of the 4th- and 5th-order methods; err is their difference, computed as a separate slope combination, as done in the original post.
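A minimal sketch of that helper, assuming the name rk45update and the return convention (v4, v5, err) used in the snippet above, with the standard Fehlberg coefficients already present in the question:

```python
def rk45update(f, t, y, h):
    """One RKF45 evaluation at (t, y) with trial step h.

    Returns the combined 4th- and 5th-order slopes v4, v5 and the error
    estimate err = v5 - v4, all as slope combinations: the accepted step
    is y + h*v4, and err matches the Error formula of the original post.
    """
    f1 = f(t, y)
    f2 = f(t + h/4,     y + h*f1/4)
    f3 = f(t + 3*h/8,   y + h*(3*f1/32 + 9*f2/32))
    f4 = f(t + 12*h/13, y + h*(1932*f1 - 7200*f2 + 7296*f3)/2197)
    f5 = f(t + h,       y + h*(439*f1/216 - 8*f2 + 3680*f3/513 - 845*f4/4104))
    f6 = f(t + h/2,     y + h*(-8*f1/27 + 2*f2 - 3544*f3/2565 + 1859*f4/4104 - 11*f5/40))
    v4 = 25*f1/216 + 1408*f3/2565 + 2197*f4/4104 - f5/5
    v5 = 16*f1/135 + 6656*f3/12825 + 28561*f4/56430 - 9*f5/50 + 2*f6/55
    return v4, v5, v5 - v4
```

For a right-hand side linear in $t$, e.g. f(t, y) = t, both embedded formulas integrate exactly, so v4 equals $t + h/2$ and err vanishes to rounding — a cheap check on the coefficients.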

Instead of hnew I take a ratio factor that still expresses whether the error estimate is above or below the tolerance. This makes it possible to cut off overly radical changes in step size in a uniform manner.

The acceptance threshold sometimes plays a role: here a higher threshold causes, at a handful of steps, a step size that is too small, with a drastically reduced step error in the next step.

Accepting a step is now simply advancing the state and registering the new point. No extra action is required for rejecting a step; in either case a new optimal step size is computed.

The remaining code is unchanged, except for some small tweaks to the plot presentation. The result shows no visible deviation from the exact solution in either variant. The estimated unit-step error relative to the error tolerance stays comfortably in the band $0.2..2$, with a falling tendency towards the end of the interval due to the upper bound on step-size growth.
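Putting the pieces together, a complete runnable sketch of the proposed loop (the helper written out in full; the guard max(abs(err), 1e-16) against a zero error estimate and the clipping of h to tf - t are my additions, not from the post):

```python
import numpy as np

def rk45update(f, t, y, h):
    # Fehlberg stage slopes
    f1 = f(t, y)
    f2 = f(t + h/4,     y + h*f1/4)
    f3 = f(t + 3*h/8,   y + h*(3*f1/32 + 9*f2/32))
    f4 = f(t + 12*h/13, y + h*(1932*f1 - 7200*f2 + 7296*f3)/2197)
    f5 = f(t + h,       y + h*(439*f1/216 - 8*f2 + 3680*f3/513 - 845*f4/4104))
    f6 = f(t + h/2,     y + h*(-8*f1/27 + 2*f2 - 3544*f3/2565 + 1859*f4/4104 - 11*f5/40))
    v4 = 25*f1/216 + 1408*f3/2565 + 2197*f4/4104 - f5/5
    v5 = 16*f1/135 + 6656*f3/12825 + 28561*f4/56430 - 9*f5/50 + 2*f6/55
    return v4, v5, v5 - v4

def rkf45(f, t0, tf, y0, eps, h):
    t, y = t0, y0
    ts, ys = [t], [y]
    while t < tf:
        h = min(h, tf - t)                  # do not step past tf
        v4, v5, err = rk45update(f, t, y, h)
        ratio = (eps / max(abs(err), 1e-16))**0.25
        ratio = min(2.4, max(0.05, ratio))  # clamp radical step-size changes
        if ratio >= 0.8:                    # accept: advance and register point
            y = y + h*v4
            t = t + h
            ts.append(t); ys.append(y)
        h = 0.9*h*ratio                     # new step size in either case
    return np.array(ts), np.array(ys)

def f(t, y):
    return -1/t**2 - 4*(t-6)*np.exp(-2*(t-6)**2)

def anasol(x):
    return -1/np.exp(50) + np.exp(-2*(x-6)**2) + 1/x

ts, ys = rkf45(f, 1.0, 10.0, 1.0, 1e-6, 0.2)
print(len(ts), np.max(np.abs(ys - anasol(ts))))  # steps taken, max deviation
```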

[Figure: solution plots, step sizes, and errors]