I'm trying to convert some code to Python, but I've noticed that SciPy's sparse operations have trouble handling a system that is diagonal.

For example, the following code can be written as a per-pixel convolution, which in my C++ implementation is really fast: with overlap it runs at roughly memory-access speed (see the sketch after the code below). I would expect Python to take advantage of the fact that the system is diagonal.

When I try to run it in Python, it hogs system resources and nearly kills my machine. By comparison, the MATLAB version runs much faster.

import numpy as np
from scipy import sparse
from imageio import imread  # any image reader works, e.g. matplotlib.pyplot.imread

print(np.version.version)

color = imread('lena.png')
gray = color.mean(axis=2)        # average the RGB channels
h, w = gray.shape
img = gray.flatten()

ne = h * w
L = np.ones(ne)
M = 0.5 * np.ones(ne)
R = np.ones(ne)
diags = [L, M, R]
mtx = sparse.spdiags(diags, [-1, 0, 1], ne, ne)

# blured = np.dot(mtx, img)  # dies: np.dot densifies the ne x ne matrix
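For reference, the per-pixel convolution I mean is essentially this (just a sketch, assuming the [1, 0.5, 1] stencil above; blur_1d is only an illustrative name):

import numpy as np

def blur_1d(img_flat):
    # Tridiagonal stencil: out[i] = img[i-1] + 0.5*img[i] + img[i+1],
    # i.e. the same product the spdiags matrix above computes.
    # mode='same' zero-pads the two boundary pixels, matching the matrix rows.
    kernel = np.array([1.0, 0.5, 1.0])
    return np.convolve(img_flat, kernel, mode='same')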

I understand I could rewrite it as a 'for' loop going through the pixels, but is there a way to keep a sparse data structure while keeping performance?


Solution

Use mtx.dot instead of np.dot:

blured = mtx.dot(img)

or just

blured = mtx * img  # where mtx is sparse and img is dense or sparse

np.dot treats both of its arguments as dense arrays, even when one of them is sparse, so it tries to densify the ne x ne matrix and raises a MemoryError.
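For instance, a minimal sketch on random data (no image needed; the size is illustrative):

import numpy as np
from scipy import sparse

ne = 512 * 512
ones = np.ones(ne)
mtx = sparse.spdiags([ones, 0.5 * ones, ones], [-1, 0, 1], ne, ne)
img = np.random.rand(ne)

blured = mtx.dot(img)  # sparse-aware product: touches only the 3 stored diagonals
same = mtx * img       # equivalent spelling for a sparse matrix
assert np.allclose(blured, same)

On Python 3.5+ you can also write blured = mtx @ img, which performs the same sparse-aware multiplication.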
