Question

I have an array of doubles, roughly 200,000 rows by 100 columns, and I'm looking for a fast algorithm to find the rows containing sequences most similar to a given pattern (the pattern can be anywhere from 10 to 100 elements long). I'm using Python, so the brute-force method (the norm_corr code below: looping over each row and starting column index and computing the Euclidean distance at each position) takes around three minutes.

The numpy.correlate function promises to solve this problem much faster (running over the same dataset in less than 20 seconds). However, it simply computes a sliding dot product of the pattern over the full row, meaning that to compare similarity I'd have to normalize the results first. Normalizing the cross-correlation requires computing the standard deviation of each slice of the data, which instantly negates the speed improvement of using numpy.correlate in the first place.
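To make the problem concrete, here is a minimal sketch on a single row (the array names are made up for illustration): numpy.correlate returns raw sliding dot products, but turning those into a similarity score needs per-window statistics, which reintroduces exactly the slicing loop the brute-force version already pays for.

import numpy as np

row = np.random.rand(100)       # one data row (illustrative values)
pattern = np.random.rand(20)    # the query pattern

# What numpy.correlate gives: a raw dot product at every window position.
dots = np.correlate(row, pattern, mode='valid')

# What normalization would need: the std of every window, which brings back
# the per-slice loop that makes the brute-force version slow.
windows = np.array([row[i:i+len(pattern)]
                    for i in range(len(row) - len(pattern) + 1)])
window_stds = windows.std(axis=1)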

Is it possible to compute normalized cross-correlation quickly in Python? Or will I have to resort to coding the brute-force method in C?

import numpy as np

def norm_corr(x, y, mode='valid'):
    # Euclidean distance between the pattern y and every window of x.
    ya = np.array(y)
    slices = [x[pos:pos+len(y)] for pos in range(len(x)-len(y)+1)]
    return [np.linalg.norm(np.array(z) - ya) for z in slices]

# one list of distances per row of the 200,000 x 100 table
similarities = [norm_corr(arr, pointarray) for arr in arraytable]

Solution

If your data is in a 2D NumPy array, you can take a 2D slice from it (200,000 rows by len(pattern) columns) and compute the norm for all the rows at once, then slide the window to the right in a for loop.

import numpy as np

ROWS = 200000
COLS = 100
PATLEN = 20

# random data for example's sake
a = np.random.rand(ROWS, COLS)
pattern = np.random.rand(PATLEN)

# one column of squared distances per window position (COLS-PATLEN+1 positions)
tmp = np.empty([ROWS, COLS - PATLEN + 1])
for i in range(COLS - PATLEN + 1):
    window = a[:, i:i+PATLEN]                          # all rows, one window position
    tmp[:, i] = np.sum((window - pattern)**2, axis=1)

result = np.sqrt(tmp)                                  # Euclidean distances, rows x positions
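As a usage sketch (not part of the original answer; the variable names below are assumptions), result[r, i] is the distance between the pattern and a[r, i:i+PATLEN], so the most similar rows can be read off directly:

# distance and position of the best-matching window in each row
best_per_row = result.min(axis=1)
best_offset = result.argmin(axis=1)

# indices of, say, the 10 rows most similar to the pattern
top_rows = np.argsort(best_per_row)[:10]

Since the square root is monotonic, you could also rank on tmp directly and skip np.sqrt when only the ordering matters.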