Question

I'm trying to convert some code to Python, but I've noticed that SciPy's sparse matrix operations seem to have trouble handling systems that are diagonal.

For example, the operation below can be written as a per-pixel convolution, which in my C++ implementation is really fast: with overlapped access it runs at roughly memory-access speed. I would expect Python to be able to exploit this, since the system is diagonal.

When I try to run it in Python, it hogs system resources and nearly kills my machine; by comparison, the MATLAB version is much faster.

import numpy as np
from scipy import sparse
from matplotlib.image import imread  # any image loader will do

print(np.version.version)

color = imread('lena.png')   # h x w x 3 RGB image
gray = color.mean(axis=2)    # average the channels to grayscale
h, w = gray.shape
img = gray.flatten()

ne = h * w
L = np.ones(ne)
M = .5 * np.ones(ne)
R = np.ones(ne)
Diags = [L, M, R]
mtx = sparse.spdiags(Diags, [-1, 0, 1], ne, ne)
# blured = np.dot(mtx, img)  # dies

I understand that I could rewrite it as a 'for' loop over the pixels, but is there a way I can keep a sparse data structure without sacrificing performance?


Solution

Use mtx.dot instead of np.dot:

blured = mtx.dot(img)

or just

blured = mtx * img  # works when mtx is sparse and img is dense or sparse
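
On Python 3.5+ the @ matrix-multiplication operator also works with SciPy sparse matrices and is arguably clearer, since * means elementwise multiplication for plain NumPy arrays but matrix multiplication for sparse ones:

blured = mtx @ img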

np.dot treats both of its arguments as dense, even when one of them is sparse, so the commented-out call in the question effectively tries to materialize the full ne x ne matrix and raises a MemoryError.
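
For completeness, here is a minimal end-to-end sketch of the fixed pipeline; the matplotlib imread loader and the file name 'lena.png' are just stand-ins, as in the question:

import numpy as np
from scipy import sparse
from matplotlib.image import imread

gray = imread('lena.png').mean(axis=2)   # grayscale image, h x w
h, w = gray.shape
img = gray.flatten()

ne = h * w
# tridiagonal blur operator: sub-, main, and super-diagonal
diags = [np.ones(ne), .5 * np.ones(ne), np.ones(ne)]
mtx = sparse.spdiags(diags, [-1, 0, 1], ne, ne)

blured = mtx.dot(img)             # sparse matvec: O(ne) time and memory
blured_img = blured.reshape(h, w) # back to image shape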

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow