Problem

I'm implementing a very simple Susceptible-Infected-Recovered model with a steady population for an idle side project - normally a pretty trivial task. But I'm running into solver errors using either PysCeS or SciPy, both of which use lsoda as their underlying solver. This only happens for particular values of a parameter, and I'm stumped as to why. The code I'm using is as follows:

import numpy as np
from pylab import *
import scipy.integrate as spi

#Parameter Values
S0 = 99.
I0 = 1.
R0 = 0.
PopIn= (S0, I0, R0)
beta= 0.50     
gamma=1/10.  
mu = 1/25550.
t_end = 15000.
t_start = 1.
t_step = 1.
t_interval = np.arange(t_start, t_end, t_step)

#Solving the differential equation. Solves over t for initial conditions PopIn
def eq_system(PopIn,t):
    '''Defining SIR System of Equations'''
    #Creating an array of equations
    Eqs= np.zeros((3))
    Eqs[0]= -beta * (PopIn[0]*PopIn[1]/(PopIn[0]+PopIn[1]+PopIn[2])) - mu*PopIn[0] + mu*(PopIn[0]+PopIn[1]+PopIn[2])
    Eqs[1]= (beta * (PopIn[0]*PopIn[1]/(PopIn[0]+PopIn[1]+PopIn[2])) - gamma*PopIn[1] - mu*PopIn[1])
    Eqs[2]= gamma*PopIn[1] - mu*PopIn[2]
    return Eqs

SIR = spi.odeint(eq_system, PopIn, t_interval)

This produces the following error:

 lsoda--  at current t (=r1), mxstep (=i1) steps   
       taken on this call before reaching tout     
      In above message,  I1 =       500
      In above message,  R1 =  0.7818108252072E+04
Excess work done on this call (perhaps wrong Dfun type).
Run with full_output = 1 to get quantitative information.

Normally when I encounter a problem like this, there's something terminally wrong with the equation system I've set up, but I can't see anything wrong with it here. Weirdly, it also works fine if you change mu to something like 1/15550. In case there was something wrong with the system itself, I also implemented the model in R as follows:

require(deSolve)

sir.model <- function (t, x, params) {
  S <- x[1]
  I <- x[2]
  R <- x[3]
  with (
    as.list(params),
{
    dS <- -beta*S*I/(S+I+R) - mu*S + mu*(S+I+R)
    dI <- beta*S*I/(S+I+R) - gamma*I - mu*I
    dR <- gamma*I - mu*R
  res <- c(dS,dI,dR)
  list(res)
}
  )
}

times <- seq(0,15000,by=1)
params <- c(
 beta <- 0.50,
 gamma <- 1/10,
 mu <- 1/25550
)

xstart <- c(S = 99, I = 1, R= 0)

out <- as.data.frame(lsoda(xstart,times,sir.model,params))

This also uses lsoda, but seems to be going off without a hitch. Can anyone see what's going wrong in the Python code?


Solution

I think that for the parameters you've chosen you're running into problems with stiffness: numerical instability is forcing the solver's step size down to very small values in regions where the slope of the solution curve is actually quite shallow. The Fortran solver lsoda, which is wrapped by scipy.integrate.odeint, tries to switch adaptively between methods suited to 'stiff' and 'non-stiff' systems, but in this case it seems to be failing to switch to the stiff methods.

Very crudely you can just massively increase the maximum allowed steps and the solver will get there in the end:

SIR = spi.odeint(eq_system, PopIn, t_interval, mxstep=5000000)
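
If you want to confirm that stiffness is what lsoda is tripping over, odeint's full_output flag returns an info dictionary whose 'mused' entry records which method was used on each successful step (1 = Adams, non-stiff; 2 = BDF, stiff) and whose 'hu' entry holds the step sizes actually taken. A rough sketch, reusing eq_system, PopIn and t_interval from the question:

SIR, info = spi.odeint(eq_system, PopIn, t_interval,
                       mxstep=5000000, full_output=True)
# 1 = Adams (non-stiff), 2 = BDF (stiff), one entry per successful output step
print(np.unique(info['mused']))
# smallest step lsoda actually took - shows where it is struggling
print(info['hu'].min())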

A better option is to use the object-oriented ODE solver scipy.integrate.ode, which allows you to explicitly choose whether to use stiff or non-stiff methods:

import numpy as np
from pylab import *
import scipy.integrate as spi

def run():
    #Parameter Values
    S0 = 99.
    I0 = 1.
    R0 = 0.
    PopIn= (S0, I0, R0)
    beta= 0.50     
    gamma=1/10.  
    mu = 1/25550.
    t_end = 15000.
    t_start = 1.
    t_step = 1.
    t_interval = np.arange(t_start, t_end, t_step)

    #Solving the differential equation. Solves over t for initial conditions PopIn
    def eq_system(t,PopIn):
        '''Defining SIR System of Equations'''
        #Creating an array of equations
        Eqs= np.zeros((3))
        Eqs[0]= -beta * (PopIn[0]*PopIn[1]/(PopIn[0]+PopIn[1]+PopIn[2])) - mu*PopIn[0] + mu*(PopIn[0]+PopIn[1]+PopIn[2])
        Eqs[1]= (beta * (PopIn[0]*PopIn[1]/(PopIn[0]+PopIn[1]+PopIn[2])) - gamma*PopIn[1] - mu*PopIn[1])
        Eqs[2]= gamma*PopIn[1] - mu*PopIn[2]
        return Eqs

    ode = spi.ode(eq_system)

    # BDF method suited to stiff systems of ODEs
    ode.set_integrator('vode', nsteps=500, method='bdf')
    ode.set_initial_value(PopIn,t_start)

    ts = []
    ys = []

    while ode.successful() and ode.t < t_end:
        ode.integrate(ode.t + t_step)
        ts.append(ode.t)
        ys.append(ode.y)

    t = np.vstack(ts)
    s,i,r = np.vstack(ys).T

    fig, ax = subplots(1, 1)
    ax.plot(t,s,label='Susceptible')
    ax.plot(t,i,label='Infected')
    ax.plot(t,r,label='Recovered')
    ax.set_xlim(t_start,t_end)
    ax.set_ylim(0,100)
    ax.set_xlabel('Time')
    ax.set_ylabel('Percent')
    ax.legend(loc=0,fancybox=True)

    return t,s,i,r,fig,ax

Output:

[Plot of the Susceptible, Infected and Recovered curves over time (Percent vs. Time)]
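
On newer SciPy releases (1.0 and later) the same idea is available through scipy.integrate.solve_ivp, which lets you request a stiff method in a single call. A minimal sketch along those lines, assuming the same parameters as above (not part of the original answer):

import numpy as np
import scipy.integrate as spi

beta, gamma, mu = 0.50, 1/10., 1/25550.

def eq_system(t, PopIn):
    '''SIR system with equal birth and death rates; solve_ivp expects f(t, y)'''
    S, I, R = PopIn
    N = S + I + R
    return [-beta*S*I/N - mu*S + mu*N,
            beta*S*I/N - gamma*I - mu*I,
            gamma*I - mu*R]

sol = spi.solve_ivp(eq_system, (1., 15000.), [99., 1., 0.],
                    method='BDF', t_eval=np.arange(1., 15000., 1.))
s, i, r = sol.y  # sol.t holds the matching time points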

Other tips

The infected population PopIn[1] decays to zero. Apparently, (normal) numerical imprecision leads to PopIn[1] becoming negative (approx. -3.549e-12) near t=322.9. Then eventually the solution blows up near t=7818.093, with PopIn[0] going toward +infinity and PopIn[1] going toward -infinity.
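
One way to see this for yourself (a rough sketch, not part of the original answer, reusing eq_system, PopIn and t_interval from the question) is to raise mxstep far enough for the integration to finish and then inspect the infected compartment directly:

SIR = spi.odeint(eq_system, PopIn, t_interval, mxstep=5000000)
infected = SIR[:, 1]
neg = np.where(infected < 0)[0]
if neg.size:
    # first output point at which the infected compartment dips below zero
    print("I goes negative at t =", t_interval[neg[0]], "value =", infected[neg[0]])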

Edit: I removed my earlier suggestion for a "quick fix". It was a questionable hack.
