@Ophion's comment explains very well what's going on. The call to array() within f(x) introduces some overhead, which kills the benefit of using matrix multiplication in the expression h * f(x[step]).
And as he says, you may be interested in having a look at scipy.integrate for a nice set of numerical integrators.
To solve the problem at hand of vectorising your code, you want to avoid recreating the array every time you call f. You would like to initialize the array once, and return it, modified, at every call. This is similar to what a static variable is in C/C++.
You can achieve this with a mutable default argument, which is evaluated once, at the time of the definition of the function f(x), and which has local scope. Since it has to be mutable, you encapsulate it in a single-element list:
def f(x, static_tmp=[empty(2)]):
    static_tmp[0][0] = x[1]
    static_tmp[0][1] = -x[0] - a * x[1] * (x[0] ** 2 + x[1] ** 2 - 1)
    return static_tmp[0]
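As a quick sanity check, the self-contained sketch below (the parameter value a = 1.0 is a hypothetical stand-in, not from the original question) shows that every call returns the very same buffer, filled in place. One consequence to keep in mind: the result must be consumed before the next call to f overwrites it.

```python
from numpy import empty

a = 1.0  # hypothetical parameter value, not from the original question

def f(x, static_tmp=[empty(2)]):
    # The default list is evaluated once at definition time,
    # so static_tmp[0] is the same array on every call.
    static_tmp[0][0] = x[1]
    static_tmp[0][1] = -x[0] - a * x[1] * (x[0] ** 2 + x[1] ** 2 - 1)
    return static_tmp[0]

r1 = f([0.0, 1.0])
r2 = f([1.0, 0.0])
print(r1 is r2)  # True: both calls returned the very same array object
```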
With this modification to your code, the overhead of array creation disappears, and on my machine I gain a small improvement:
    %timeit forward_euler()         # 258 ms
    %timeit forward_euler_vector()  # 248 ms
This means that the gain from optimizing the matrix multiplication with numpy is quite small, at least for the problem at hand.
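For context, the kind of loop being timed might look like the sketch below. The parameter value, step size, number of steps, and initial condition are all assumptions for illustration, not taken from the original question.

```python
import numpy as np

a = 1.0    # hypothetical parameter value
h = 0.001  # assumed step size
n = 1000   # assumed number of steps

def f(x, static_tmp=[np.empty(2)]):
    # Reuse the same buffer on every call instead of allocating a new array.
    static_tmp[0][0] = x[1]
    static_tmp[0][1] = -x[0] - a * x[1] * (x[0] ** 2 + x[1] ** 2 - 1)
    return static_tmp[0]

def forward_euler_vector():
    x = np.empty((n + 1, 2))
    x[0] = [1.0, 0.0]  # assumed initial condition
    for step in range(n):
        # The returned buffer is consumed immediately, before the next call.
        x[step + 1] = x[step] + h * f(x[step])
    return x
```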
You may also want to get rid of the function f straight away, performing its operations inside the for loop to remove the call overhead. This default-argument trick can, however, also be applied with scipy's more general time integrators, where you must provide a function f.
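For example, the same trick carries over to scipy.integrate.odeint, which expects the signature f(x, t). The parameter value, time grid, and initial condition below are assumptions for illustration:

```python
import numpy as np
from scipy.integrate import odeint

a = 1.0  # hypothetical parameter value

def f(x, t, static_tmp=[np.empty(2)]):
    # Same reusable buffer; odeint copies the values it reads,
    # so returning the same array each call is safe here.
    static_tmp[0][0] = x[1]
    static_tmp[0][1] = -x[0] - a * x[1] * (x[0] ** 2 + x[1] ** 2 - 1)
    return static_tmp[0]

t = np.linspace(0.0, 10.0, 101)          # assumed time grid
sol = odeint(f, [1.0, 0.0], t)           # assumed initial condition
print(sol.shape)  # (101, 2)
```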
EDIT: as pointed out by Jaime, another way to go is to treat static_tmp as an attribute of the function f, creating it after the function has been declared but before it is called:
def f(x):
    f.static_tmp[0] = x[1]
    f.static_tmp[1] = -x[0] - a * x[1] * (x[0] ** 2 + x[1] ** 2 - 1)
    return f.static_tmp
f.static_tmp = empty(2)
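A minimal usage sketch of this variant (again with a hypothetical a = 1.0): the attribute is allocated once, and every call fills and returns that same buffer.

```python
from numpy import empty

a = 1.0  # hypothetical parameter value

def f(x):
    # The buffer lives on the function object itself.
    f.static_tmp[0] = x[1]
    f.static_tmp[1] = -x[0] - a * x[1] * (x[0] ** 2 + x[1] ** 2 - 1)
    return f.static_tmp

f.static_tmp = empty(2)  # allocate once, after defining f

out = f([0.0, 1.0])  # returns the shared buffer, filled in place
```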