integrate_adaptive and integrate_times give different answers for negative step size

StackOverflow https://stackoverflow.com/questions/19125047

  •  30-06-2022

Question

I'm using the odeint library in Boost. When using the integrate_adaptive function, the results are as expected. However, when using integrate_times, the ODE is evaluated at very different times, some of which lie outside the range of integration. This is a problem for me because my ODE is not defined at some of the values where it is being evaluated.

The code below demonstrates the issue. The x values for which the ODE is evaluated are printed to the screen.

#include <iostream>
#include <complex>
#include <vector>
#include <cmath>
#include <boost/numeric/odeint.hpp>

struct observe
{
    std::vector<std::vector<std::complex<double> > > & y;
    std::vector<double>& x_ode;

    observe(std::vector<std::vector<std::complex<double> > > &p_y, std::vector<double> &p_x_ode) : y(p_y), x_ode(p_x_ode) { };

    void operator()(const std::vector<std::complex<double> > &y_temp, double x_temp)
    {
        y.push_back(y_temp);
        x_ode.push_back(x_temp);
    }
};

class Direct
{
    std::complex<double> alpha;
    std::complex<double> beta;
    std::complex<double> R;
    std::vector<std::vector<std::complex<double> > > H0_create(const double y);

public:
    Direct(std::complex<double> p_alpha, std::complex<double> p_beta, double p_R) : alpha(p_alpha), beta(p_beta), R(p_R) { }

    void operator() (const std::vector<std::complex<double> > &y, std::vector<std::complex<double> > &dydx, const double  x)
    {
        std::vector<std::vector<std::complex<double> > > H0 = H0_create(x);

        for(int ii = 0; ii < 6; ii++)
        {
            dydx[ii] = 0.0;
            for(int jj = 0; jj < 6; jj++)
            {
                dydx[ii] += H0[ii][jj]*y[jj];
            }
        }
    }
};


std::vector<std::vector<std::complex<double> > > Direct::H0_create(const double x)
{
    std::complex<double> i = std::complex<double>(0.0,1.0);
    std::cout << x << std::endl; // print every x at which the ODE right-hand side is evaluated
    double U = sin(x*3.14159/2.0);
    double Ux = cos(x*3.14159/2.0);
    std::complex<double> S = alpha*alpha + beta*beta + i*R*alpha*U;

    std::vector<std::vector<std::complex<double> > > H0(6);
    for(int ii = 0; ii < 6; ii++)
    {
        H0[ii] = std::vector<std::complex<double> >(6);
    }

    H0[0][1] = 1.0;
    H0[1][0] = S;
    H0[1][2] = R*Ux;
    H0[1][3] = i*alpha*R;
    H0[2][0] = -i*alpha;
    H0[2][4] = -i*beta;
    H0[3][1] = -i*alpha/R;
    H0[3][2] = -S/R;
    H0[3][5] = -i*beta/R;
    H0[4][5] = 1.0;
    H0[5][3] = i*beta*R;
    H0[5][4] = S;

    return H0;
}

int main()
{
    int N = 10;

    double x0 = 1.0;
    double xf = 0.0;

    std::vector<double> x_ode(N);
    double delta_x0 = (xf-x0)/(N-1.0); // negative step: integrating from x0 = 1 down to xf = 0
    for(int ii = 0; ii < N; ii++)
    {
        x_ode[ii] = x0 + ii*delta_x0;
    }
    x_ode[N-1] = xf;

    std::vector<std::vector<std::complex<double> > > y_temp;
    std::vector<double> x_temp;

    std::complex<double> i = std::complex<double>(0.0,1.0);
    std::complex<double> alpha = 0.001*i;
    double beta = 0.45;
    double R = 500.0;
    std::complex<double> lambda = -sqrt(alpha*alpha + beta*beta + i*R*alpha);

    // Define Initial Conditions
    std::vector<std::complex<double> > ICs = {1, lambda, -i*alpha/lambda,0,0,0};

    // Initialize ODE class
    Direct direct(alpha,beta,R);

    {
        using namespace boost::numeric::odeint;

        double abs_tol = 1.0e-10;
        double rel_tol = 1.0e-6;

        std::cout << "integrate_adaptive x values:\n";
        size_t steps1 = integrate_adaptive(make_controlled<runge_kutta_cash_karp54<std::vector<std::complex<double> > > >(abs_tol, rel_tol), direct, ICs, x0, xf, delta_x0, observe(y_temp,x_temp));

        std::cout << "\n\nintegrate_times x values:\n";
        size_t steps2 = integrate_times(make_controlled<runge_kutta_cash_karp54<std::vector<std::complex<double> > > >(abs_tol, rel_tol), direct, ICs, x_ode.begin(), x_ode.end(), delta_x0, observe(y_temp,x_temp));
    }

    return 0;
}

I am compiling and running with these commands:

g++ main.cpp -std=c++11; ./a.out

The code produces this output:

integrate_adaptive x values:
1
0.977778
0.966667
0.933333
0.888889
0.902778
0.888889
0.849758
0.830193
0.771496
0.693235
0.717692
0.693235
0.654104
0.634539
0.575842
0.497581
0.522037
0.497581
0.45845
0.438885
0.380188
0.301927
0.326383
0.301927
0.262796
0.24323
0.184534
0.106273
0.130729
0.106273
0.0850181
0.0743908
0.042509
0
0.0132841


integrate_times x values:
1
0.977778
0.966667
0.933333
0.888889
0.902778
0.888889
0.84944
0.829716
0.770543
0.691645
0.716301
0.777778
0.738329
0.718605
0.659432
0.580534
0.60519
0.666667
0.627218
0.607494
0.54832
0.469423
0.494078
0.555556
0.512422
0.490855
0.426154
0.339886
0.366845
0.444444
0.397898
0.374625
0.304806
0.211714
0.240805
0.333333
0.281908
0.256196
0.179058
0.0762077
0.108348
0.222222
0.170797
0.145085
0.0679468
-0.0349035
-0.00276275
0.111111
0.059686
0.0339734
-0.0431643
-0.146015
-0.113874
0.111111
0.0671073
0.0451054
-0.0209003
-0.108908
-0.0814056

The range of integration is from x = 1 to x = 0, but with integrate_times the ODE is evaluated at x values less than 0.
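
A workaround that keeps every evaluation inside the range is to drive integrate_adaptive segment by segment between the requested output times. The loop below is only a sketch of the idea (it would replace the integrate_times call above, and it records the state at each requested time by hand instead of using the observer):

auto stepper = make_controlled<runge_kutta_cash_karp54<std::vector<std::complex<double> > > >(abs_tol, rel_tol);
for(size_t k = 0; k + 1 < x_ode.size(); k++)
{
    // Record the state at the k-th requested time, then integrate to the next one.
    // Each call starts and ends exactly at the requested times, so the
    // evaluations stay within each segment.
    y_temp.push_back(ICs);
    x_temp.push_back(x_ode[k]);
    integrate_adaptive(stepper, direct, ICs, x_ode[k], x_ode[k+1], delta_x0);
}
y_temp.push_back(ICs);           // state at the final time xf
x_temp.push_back(x_ode.back());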

Solution

This is a bug in odeint, triggered by the negative time steps in your problem. I have created an issue on GitHub (https://github.com/headmyshoulder/odeint-v2/issues/99) and have implemented a fix. Please check out the latest odeint version from GitHub and see if the problem remains. If so, feel free to open a new issue there.
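
For context, this is the classic backward-integration failure mode: loop conditions and step-size clamps written with a plain less-than comparison assume dt > 0. A simplified sketch of the pattern (illustrative only, not odeint's actual source):

// Written for forward integration only:
while(t < t_end) { /* step */ }   // never entered when t0 = 1, t_end = 0, dt < 0

// Sign-aware comparison that supports both directions:
bool less_with_sign(double a, double b, double dt)
{
    return (dt > 0.0) ? (a < b) : (b < a);
}
while(less_with_sign(t, t_end, dt)) { /* step */ }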

Thanks for pointing out that problem - and sorry for the bug.

Another note: I would suggest using a dense-output stepper with the integrate_times routine, as it is much more efficient (a factor of 2 in the best case). It essentially does what you implemented as a fix above: it takes adaptive time steps and interpolates at the intermediate points as required.
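
As a sketch of that suggestion, reusing the tolerances and variables from the question (note that runge_kutta_dopri5 is used here because the Cash-Karp stepper provides no dense output):

using namespace boost::numeric::odeint;
typedef std::vector<std::complex<double> > state_type;

// Dense-output stepper: steps adaptively on the inside and interpolates
// the solution at every requested time in x_ode.
integrate_times(make_dense_output(abs_tol, rel_tol, runge_kutta_dopri5<state_type>()),
                direct, ICs, x_ode.begin(), x_ode.end(), delta_x0,
                observe(y_temp, x_temp));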

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow