Question

I am using string edit distance (Levenshtein distance) to compare scan paths from an eye-tracking experiment. (Right now I am using the stringdist package in R.)

Basically, the letters of the strings refer to (gaze) positions in a 6x4 matrix. The matrix is configured as follows:

     [,1] [,2] [,3] [,4]
[1,]  'a'  'g'  'm'  's' 
[2,]  'b'  'h'  'n'  't'
[3,]  'c'  'i'  'o'  'u'
[4,]  'd'  'j'  'p'  'v'
[5,]  'e'  'k'  'q'  'w'
[6,]  'f'  'l'  'r'  'x'

If I use the basic Levenshtein distance to compare strings, the comparison of a and g in a string gives the same estimate as the comparison of a and x.

E.g.:

'abc' compared to 'agc' -> 1
'abc' compared to 'axc' -> 1

This means that the two strings are rated as equally (dis)similar.

I would like to be able to put weights on the string comparison in a way that incorporates adjacency in the matrix. E.g. the distance between a and x should be weighted as larger than that between a and g.

One way could be to calculate the "walk" (horizontal and vertical steps) from one letter to the other in the matrix and divide by the maximum "walk" distance (i.e. from a to x). E.g. the "walk" distance from a to g would be 1 and from a to x it would be 8, resulting in weights of 1/8 and 1, respectively.
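
For concreteness, the walk-based weight for one pair of letters could be computed along these lines (a minimal Python sketch; POS and walk_weight are purely illustrative names):

import string

# positions of 'a'..'x' in the 6x4 matrix above (column-major: a-f, g-l, m-r, s-x)
POS = {ch: (i % 6, i // 6) for i, ch in enumerate(string.ascii_lowercase[:24])}

def walk_weight(a, b, max_walk=8.0):
    """Manhattan ("walk") distance between two letters, scaled to [0, 1]."""
    (r1, c1), (r2, c2) = POS[a], POS[b]
    return (abs(r1 - r2) + abs(c1 - c2)) / max_walk

# walk_weight('a', 'g') -> 0.125; walk_weight('a', 'x') -> 1.0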

Is there a way to implement this (in either R or python)?

Solution

You need a version of the Wagner-Fischer algorithm that uses non-unit costs in its inner loop, i.e. where the usual algorithm adds +1, add del_cost(a[i]), etc., and define del_cost, ins_cost and sub_cost as functions taking one or two symbols (probably just table lookups).
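
A minimal sketch of that modification in Python (the cost functions are assumed to be supplied by the caller, e.g. as table lookups over the matrix above; weighted_edit_distance is just an illustrative name):

def weighted_edit_distance(a, b, del_cost, ins_cost, sub_cost):
    """Wagner-Fischer with per-symbol costs instead of the usual +1."""
    D = [[0.0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):              # cost of deleting a prefix of a
        D[i][0] = D[i - 1][0] + del_cost(a[i - 1])
    for j in range(1, len(b) + 1):              # cost of inserting a prefix of b
        D[0][j] = D[0][j - 1] + ins_cost(b[j - 1])
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            D[i][j] = min(
                D[i - 1][j] + del_cost(a[i - 1]),    # deletion
                D[i][j - 1] + ins_cost(b[j - 1]),    # insertion
                D[i - 1][j - 1] + (0 if a[i - 1] == b[j - 1]
                                   else sub_cost(a[i - 1], b[j - 1])))  # match / substitution
    return D[len(a)][len(b)]

For the scan-path case, sub_cost(x, y) would be the walk-based weight between the two letters; ins_cost and del_cost could stay at 1 or be weighted similarly.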

OTHER TIPS

If anyone has got the same "problem", here is my solution. I made an add-on to the Python implementation of the Wagner-Fischer algorithm written by Kyle Gorman.

The add-on is the weight function and its use inside the _dist function.

#!/usr/bin/env python
# wagnerfischer.py: Dynamic programming Levenshtein distance function
# Kyle Gorman <gormanky@ohsu.edu>
# 
# Based on:
# 
# Robert A. Wagner and Michael J. Fischer (1974). The string-to-string 
# correction problem. Journal of the ACM 21(1):168-173.
#
# The thresholding function was inspired by BSD-licensed code from 
# Babushka, a Ruby tool by Ben Hoskings and others.
# 
# Unlike many other Levenshtein distance functions out there, this works 
# on arbitrary comparable Python objects, not just strings.


try: # use numpy arrays if possible...
    from numpy import zeros
    def _zeros(*shape):
        """ like this syntax better...a la MATLAB """
        return zeros(shape)

except ImportError: # otherwise do this cute solution
    def _zeros(*shape):
        if len(shape) == 0:
            return 0
        car = shape[0]
        cdr = shape[1:]
        return [_zeros(*cdr) for i in range(car)]

def weight(A, B, weights):
    if weights == True:
        from numpy import matrix
        from numpy import where
        # cost_weight defines the matrix structure of the AOI-placement
        cost_weight = matrix([["a","b","c","d","e","f"],["g","h","i","j","k","l"],
                              ["m","n","o","p","q","r"],["s","t","u","v","w","x"]])

        max_walk = 8.00 # defined as the maximum possible distance between letters in
                        # the cost_weight matrix

        indexA = where(cost_weight == A)
        indexB = where(cost_weight == B)

        # Manhattan ("walk") distance between the two letters in the matrix
        walk = abs(indexA[0][0] - indexB[0][0]) + abs(indexA[1][0] - indexB[1][0])

        w = walk / max_walk

        return w
    else:
        return 1

def _dist(A, B, insertion, deletion, substitution, weights=True):
    D = _zeros(len(A) + 1, len(B) + 1)
    for i in range(len(A)):
        D[i + 1][0] = D[i][0] + deletion * weight(A[i],B[0], weights)
    for j in range(len(B)):
        D[0][j + 1] = D[0][j] + insertion * weight(A[0],B[j], weights)
    for i in range(len(A)): # fill out middle of matrix
        for j in range(len(B)):
            if A[i] == B[j]:
                D[i + 1][j + 1] = D[i][j] # aka, it's free. 
            else:
                D[i + 1][j + 1] = min(D[i + 1][j] + insertion * weight(A[i],B[j], weights),
                                      D[i][j + 1] + deletion * weight(A[i],B[j], weights),
                                      D[i][j]     + substitution * weight(A[i],B[j], weights))
    return D

def _dist_thresh(A, B, thresh, insertion, deletion, substitution):
    D = _zeros(len(A) + 1, len(B) + 1)
    for i in range(len(A)):
        D[i + 1][0] = D[i][0] + deletion
    for j in range(len(B)):
        D[0][j + 1] = D[0][j] + insertion
    for i in range(len(A)): # fill out middle of matrix
        for j in range(len(B)):
            if A[i] == B[j]:
                D[i + 1][j + 1] = D[i][j] # aka, it's free. 
            else:
                D[i + 1][j + 1] = min(D[i + 1][j] + insertion,
                                      D[i][j + 1] + deletion,
                                      D[i][j]     + substitution)
        if min(D[i + 1]) >= thresh:
            return
    return D

def _levenshtein(A, B, insertion, deletion, substitution):
    return _dist(A, B, insertion, deletion, substitution)[len(A)][len(B)]

def _levenshtein_ids(A, B, insertion, deletion, substitution):
    """
    Perform a backtrace to determine the optimal path. This was hard.
    """
    D = _dist(A, B, insertion, deletion, substitution)
    i = len(A) 
    j = len(B)
    ins_c = 0
    del_c = 0
    sub_c = 0
    while True:
        if i > 0:
            if j > 0:
                if D[i - 1][j] <= D[i][j - 1]: # if ins < del
                    if D[i - 1][j] < D[i - 1][j - 1]: # if ins < m/s
                        ins_c += 1
                    else:
                        if D[i][j] != D[i - 1][j - 1]: # if not m
                            sub_c += 1
                        j -= 1
                    i -= 1
                else:
                    if D[i][j - 1] <= D[i - 1][j - 1]: # if del < m/s
                        del_c += 1
                    else:
                        if D[i][j] != D[i - 1][j - 1]: # if not m
                            sub_c += 1
                        i -= 1
                    j -= 1
            else: # only insert
                ins_c += 1
                i -= 1
        elif j > 0: # only delete
            del_c += 1
            j -= 1
        else: 
            return (ins_c, del_c, sub_c)


def _levenshtein_thresh(A, B, thresh, insertion, deletion, substitution):
    D = _dist_thresh(A, B, thresh, insertion, deletion, substitution)
    if D is not None:
        return D[len(A)][len(B)]

def levenshtein(A, B, thresh=None, insertion=1, deletion=1, substitution=1):
    """
    Compute levenshtein distance between iterables A and B
    """
    # basic checks
    if len(A) == len(B) and A == B:
        return 0       
    if len(B) > len(A):
        (A, B) = (B, A)
    if len(A) == 0:
        return len(B)
    if thresh:
        if len(A) - len(B) > thresh:
            return
        return _levenshtein_thresh(A, B, thresh, insertion, deletion,
                                                            substitution)
    else: 
        return _levenshtein(A, B, insertion, deletion, substitution)

def levenshtein_ids(A, B, insertion=1, deletion=1, substitution=1):
    """
    Compute number of insertions, deletions, and substitutions for an
    optimal alignment.
    There may be more than one, in which case we disfavor substitution.
    """
    # basic checks
    if len(A) == len(B) and A == B:
        return (0, 0, 0)
    if len(B) > len(A):
        (A, B) = (B, A)
    if len(A) == 0:
        return len(B)
    else: 
        return _levenshtein_ids(A, B, insertion, deletion, substitution)
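
With this weighting in place, the two comparisons from the question are no longer scored the same. Assuming the module above is saved as wagnerfischer.py, a quick check could look like this (the expected values follow from the walk weights: b to g is 2 steps, b to x is 7):

from wagnerfischer import levenshtein

print(levenshtein('abc', 'agc'))   # 2/8 = 0.25
print(levenshtein('abc', 'axc'))   # 7/8 = 0.875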

Check out this library: https://github.com/infoscout/weighted-levenshtein (disclaimer: I am the author). It supports weighted Levenshtein distance, weighted Optimal String Alignment, and weighted Damerau-Levenshtein distance. It is written in Cython for optimal performance, and can be easily installed via pip install weighted-levenshtein. Feedback and pull requests are welcome.

Sample Usage:

import numpy as np
from weighted_levenshtein import lev


insert_costs = np.ones(128, dtype=np.float64)  # make an array of all 1's of size 128, the number of ASCII characters
insert_costs[ord('D')] = 1.5  # make inserting the character 'D' have cost 1.5 (instead of 1)

# you can just specify the insertion costs
# delete_costs and substitute_costs default to 1 for all characters if unspecified
print(lev('BANANAS', 'BANDANAS', insert_costs=insert_costs))  # prints '1.5'
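
Substitution costs can be weighted the same way. Reusing the imports above, the sketch below fills the 128x128 substitute_costs array (indexed by ASCII code, as in the library's README) from the question's walk distances; rebuilding the matrix layout in pos is my own illustration:

import string

substitute_costs = np.ones((128, 128), dtype=np.float64)    # default substitution cost 1
letters = string.ascii_lowercase[:24]                        # 'a'..'x' from the 6x4 matrix
pos = {ch: (i % 6, i // 6) for i, ch in enumerate(letters)}  # (row, column), column-major

for a in letters:
    for b in letters:
        walk = abs(pos[a][0] - pos[b][0]) + abs(pos[a][1] - pos[b][1])
        substitute_costs[ord(a), ord(b)] = walk / 8.0        # scale by the maximum walk

print(lev('abc', 'agc', substitute_costs=substitute_costs))  # 0.25  (b -> g is 2 steps)
print(lev('abc', 'axc', substitute_costs=substitute_costs))  # 0.875 (b -> x is 7 steps)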

Another option for handling weights (Python 3.5), with which I am not affiliated, is https://github.com/luozhouyang/python-string-similarity. It can be installed with:

pip install strsim