Are there any known numerical ways (and if possible libraries in C) to solve this kind of differential equation?


I have been trying to reproduce a paper's solution of some (probably not so hard) equations.

They say they solve them with 4th order Runge Kutta.

To my knowledge this is not directly possible with a standard Runge-Kutta routine. Here is an example:

$$y'(t) = y(t) + \int_0^t \dfrac{1}{1+y(t)y(x)}dx$$

or generally a function $f(y(t),y(x))$ inside the integral.

I don't really understand how I could solve this numerically. (Also, I can't see how to bring it into the form of an ODE.)

BEST ANSWER

The following code solves the equation

$$y'(t) = -\int^t_0 y(x) dx +1$$ with $y(0)=0$

which is equivalent to $$y''(t) = -y(t)$$ with $y'(0) =1$ and $y(0)=0$,
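Differentiating both sides once makes the equivalence explicit and yields the closed-form solution used for comparison:

$$y''(t) = \frac{d}{dt}\left(1 - \int_0^t y(x)\,dx\right) = -y(t),$$

so with $y(0)=0$ and $y'(0)=1$ the exact solution is $y(t)=\sin t$, which is why the code prints $\sin(t)$ next to the numerical value.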

It starts out correctly, but as time goes on the integration error accumulates (probably due to the low-order rectangle rule used for the integral).

#include <stdio.h>
#include <math.h>
#include <vector>
#include <gsl/gsl_errno.h>
#include <gsl/gsl_odeiv2.h>

struct solve_params {
  std::vector<double> y0;  // past solution values y(t_i)
  std::vector<double> dt;  // corresponding accepted step sizes
};


// Right-hand side f(t) = 1 - \int_0^t y(x) dx, with the integral
// approximated by a rectangle rule over the previously accepted steps.
double integ_in(double t, const double y[], void *pp)
{
  struct solve_params *params = (struct solve_params *)pp;
  const std::vector<double> &y0 = params->y0;  // references avoid copying
  const std::vector<double> &dt = params->dt;  // the history on every call

  double result = 1.0;
  for (unsigned int i = 0; i < dt.size(); i++)
  {
    // rectangle rule: sum of y_i * (t_i - t_{i-1})
    result -= dt[i] * y0[i];
  }

  return result;
}




int
func (double t, const double y[], double f[],
      void *pp)
{
  f[0] = integ_in(t, y,pp);
  return GSL_SUCCESS;
}


template <typename TT, typename f_point>
void RK (TT param_tt, f_point myf, double init[], double t, double t1)
{
  // param_tt is taken by value; the system points at this local copy,
  // which accumulates the step history as the integration proceeds.
  gsl_odeiv2_system sys = {myf, NULL, 1, &param_tt};

  const gsl_odeiv2_step_type *T = gsl_odeiv2_step_rkf45;

  gsl_odeiv2_step *s    = gsl_odeiv2_step_alloc (T, 1);
  gsl_odeiv2_control *c = gsl_odeiv2_control_y_new (1e-6, 0.0);
  gsl_odeiv2_evolve *e  = gsl_odeiv2_evolve_alloc (1);

  double h = 1e-6;
  double y[1] = {init[0]};
  double told = t;

  while (t < t1)
    {
      int status = gsl_odeiv2_evolve_apply (e, c, s, &sys, &t, t1, &h, y);

      if (status != GSL_SUCCESS)
        break;

      // the exact solution is sin(t), printed for comparison
      printf ("%.5e %.5e %.5e %.5e\n", t, h, y[0], sin(t));

      // record the accepted step for the quadrature in integ_in
      param_tt.dt.push_back(t - told);
      param_tt.y0.push_back(y[0]);
      told = t;
    }

  gsl_odeiv2_evolve_free (e);
  gsl_odeiv2_control_free (c);
  gsl_odeiv2_step_free (s);
}

int main(int argc, char const *argv[])
{
  double t = 0.0;
  double t1 = 100;
  double y[1] = { 0.0 };
  double told = t;
  std::vector<double> y0;
  std::vector<double> tt;
  tt.push_back(told);
  y0.push_back(y[0]);
  solve_params temp;
  temp.y0=y0; 
  temp.dt=tt;
  RK(temp,func,y,t,t1);
  return 0;
}

Runge-Kutta is a little complicated so I'm going to illustrate the idea that the article you mention is probably using by some simple methods.

Let our integro-differential equation be $$y'(x)=F(x, y(x))+G(x, y(x))\int_0^xdt\,H(x, y(x), t, y(t))$$ and let $h>0$ be some step size. Given $y(0)$ we can then approximate our solution by $y(x)\approx y_{\lfloor x / h\rfloor}$ for some sequence $\{y_n\}$. To that end, we approximate the derivative by a forward difference

$$y'(x)\approx \frac{y_{\lfloor x / h\rfloor+1}-y_{\lfloor x / h\rfloor}}{h}$$

and the integral by a numerical quadrature (in this case a simple rectangle rule)

$$\int_0^xdt\,f(y(t))\approx h\sum_{k=0}^{\lfloor x / h\rfloor}f(y_k)$$

so our differential equation becomes a one-step recurrence relation for $y_n$ when we look at $x=nh$:

$$y_{n+1}\approx y_n+hF(nh, y_n)+h^2G(nh,y_n)\sum_{k=0}^nH(nh,y_n,kh,y_k),\quad y_0 = y(0).$$

These ideas generalize quite well to different derivative approximations (e.g. Runge-Kutta) and quadrature rules (e.g. trapezoids).
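For instance, keeping the same forward difference but replacing the rectangle rule with the trapezoidal rule only changes the quadrature weights:

$$\int_0^{x}dt\,f(y(t))\approx h\left(\tfrac{1}{2}f(y_0)+\sum_{k=1}^{\lfloor x/h\rfloor-1}f(y_k)+\tfrac{1}{2}f(y_{\lfloor x/h\rfloor})\right).$$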

EDIT: To give an illustration, here's an example C implementation of this simple method:

void solve(
  double (*F)(double, double), double (*G)(double, double),
  double (*H)(double, double, double, double),
  double y0, double x, double h, double *y)
{
  int n = (int)(x / h);
  y[0] = y0;
  for(int i = 0; i < n - 1; i++) {
    y[i + 1] = y[i] + h * F(i * h, y[i]);
    /* rectangle-rule sum over the history computed so far:
       only indices j <= i hold valid values at this point */
    for(int j = 0; j <= i; j++) {
      y[i + 1] += h * h * G(i * h, y[i]) * H(i * h, y[i], j * h, y[j]);
    }
  }
}

This evolves the differential equation up to x; pass in a pointer y with room for at least (int)(x / h) doubles. The solution to your differential equation can then be read off from y. If instead of the whole function you just want its value at x, you can modify this implementation to return a double corresponding to $y(x)$ and allocate/free y internally instead of taking it as an argument.