Question

I've implemented a solution to the following system of equations

dx/dt = -t*y(t) - x(t)
dy/dt = 2*x(t) - y(t)^3

y(0) = x(0) = 1.
0 <= t <= 20

first in Mathematica and then in Python.

My code in Mathematica:

s = NDSolve[
{x'[t] == -t*y[t] - x[t], y'[t] == 2 x[t] - y[t]^3, x[0] == y[0] == 1},
{x, y}, {t, 20}]

ParametricPlot[Evaluate[{x[t], y[t]} /. s], {t, 0, 20}]

From that I get the following plot: Plot1

Later I coded the same model in Python:

import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

g = lambda t: t

def f(z,t):
    xi = z[0]
    yi = z[1]
    gi = z[2]

    f1 = -gi*yi-xi
    f2 = 2*xi-yi**3
    return [f1,f2]

# Initial Conditions
x0 = 1.
y0 = 1.
g0 = g(0)
z0 = [x0,y0,g0]
t= np.linspace(0,20.,1000)

# Solve the ODEs
soln = odeint(f,z0,t)
x = soln[:,0]
y = soln[:,1]

plt.plot(x,y)
plt.show()

And this is the plot I get: Plot2

If the Mathematica solution is re-plotted over a smaller range:

ParametricPlot[Evaluate[{x[t], y[t]} /. s], {t, 0, 6}]

the result is similar to the Python solution, only with the axes placed differently.

Why is there such a big difference in the plots? What am I doing wrong?

I suspect that my Python implementation of the model is wrong, especially where f1 is calculated. Or maybe plot() isn't suitable for plotting parametric equations like these.

Thanks.

ps: sorry for making your life hard by not slapping the images inside the text; I don't have enough reputation yet.


Solution

You're treating t as a third component of the state vector instead of using it as the independent variable. The t argument of f(z, t) is never used; instead you read z[2], which will not track the t range you define later (t = np.linspace(0, 20., 1000)). In fact, f returns only two derivatives for the three-component z0, so no derivative is supplied for z[2] at all. The lambda function for g doesn't help either: it is used once to set the initial value g(0) and never again.

Simplify your code, and remove that third parameter from your input vector (as well as the lambda function). For example:

import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

def f(z,t):
    xi = z[0]
    yi = z[1]

    f1 = -t*yi-xi
    f2 = 2*xi-yi**3
    return [f1,f2]

# Initial Conditions
x0 = 1.
y0 = 1.
#t= np.linspace(0,20.,1000)
t = np.linspace(0, 10., 100)

# Solve the ODEs
soln = odeint(f,[x0,y0],t)
x = soln[:,0]
y = soln[:,1]

ax = plt.axes()
#plt.plot(x,y)
plt.plot(t,x)
# Put those axes at their 0 value position
ax.spines['left'].set_position('zero')
ax.spines['bottom'].set_position('zero')
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
#plt.axis([-0.085, 0.085, -0.05, 0.07])
plt.show()

I have commented out the parametric plot you actually want and instead plot x versus t, since that makes it easier to verify that the solution is correct now. The figure I get:

(plot of x versus t)
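Once the extra state component is removed, the parametric plot the question originally asked for also works over the full 0 <= t <= 20 range. A minimal sketch, using the same odeint-based setup as above (variable names chosen for illustration):

```python
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

def f(z, t):
    # State is just (x, y); t enters directly as the independent variable
    x, y = z
    return [-t*y - x, 2*x - y**3]

t = np.linspace(0, 20., 1000)
soln = odeint(f, [1., 1.], t)   # initial conditions x(0) = y(0) = 1
x, y = soln[:, 0], soln[:, 1]

# plot() handles parametric curves fine: just pass x(t) and y(t)
plt.plot(x, y)
plt.xlabel('x(t)')
plt.ylabel('y(t)')
plt.show()
```

This should reproduce the shape of the Mathematica ParametricPlot, confirming that the original plt.plot(x, y) call was never the problem, only the state vector was.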

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow