Question

I have a database of ~150'000 words and a pattern (any single word), and I want to get all words from the database whose Damerau-Levenshtein distance to the pattern is less than a given number. It needs to be extremely fast. What algorithm would you suggest? If there's no good algorithm for Damerau-Levenshtein distance, one for plain Levenshtein distance would be welcome as well.

Thank you for your help.

P.S. I'm not going to use SOUNDEX.


Solution

I would start with a SQL function to calculate the Levenshtein distance (in T-SQL or .NET; yes, I'm an MS person...) that takes a maximum-distance parameter and exits early once that bound is exceeded.

This function could then be used to compare your input with each string, checking the distance and moving on to the next row as soon as it breaks the threshold.
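As a rough illustration of the early-exit idea, here is a minimal sketch in C++ rather than T-SQL (the name levenshteinCapped and its signature are mine, not from this answer): once every cell in a row of the dynamic-programming table exceeds the cap, the final distance can never come back under it, so the function can bail out.

```cpp
#include <algorithm>
#include <cstdlib>
#include <string>
#include <vector>

// Levenshtein distance with an early exit: returns maxDist + 1 as soon as
// the distance is guaranteed to exceed maxDist. A sketch only; the name and
// signature are illustrative, not from the answer above.
int levenshteinCapped(const std::string& a, const std::string& b, int maxDist) {
    int n = (int)a.size(), m = (int)b.size();
    if (std::abs(n - m) > maxDist) return maxDist + 1;  // length gap alone breaks the cap

    std::vector<int> prev(m + 1), cur(m + 1);
    for (int j = 0; j <= m; ++j) prev[j] = j;

    for (int i = 1; i <= n; ++i) {
        cur[0] = i;
        int rowMin = cur[0];
        for (int j = 1; j <= m; ++j) {
            int cost = (a[i - 1] == b[j - 1]) ? 0 : 1;
            cur[j] = std::min({prev[j] + 1,          // deletion
                               cur[j - 1] + 1,       // insertion
                               prev[j - 1] + cost}); // substitution
            rowMin = std::min(rowMin, cur[j]);
        }
        if (rowMin > maxDist) return maxDist + 1;  // whole row over the cap: bail out
        std::swap(prev, cur);
    }
    return prev[m];
}
```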

I was also thinking that you could, for example, set the maximum distance to 2 and then filter out all words whose length differs from the pattern's by more than 1 when the first letter is also different. With an index this may be slightly quicker.

You could also shortcut by first bringing back all exact matches (indexing will speed this up), since running the full calculation just to get a Levenshtein distance of 0 is wasted work.
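Putting those ideas together, the per-row scan might look like this hypothetical driver around the levenshteinCapped sketch above (words, pattern, and maxDist are assumed inputs):

```cpp
#include <cstdlib>
#include <string>
#include <vector>

// Hypothetical scan: exact matches short-circuit, the cheap length check
// (length difference is a lower bound on edit distance) prunes most rows,
// and only the survivors pay for the full capped computation.
std::vector<std::string> findMatches(const std::vector<std::string>& words,
                                     const std::string& pattern, int maxDist) {
    std::vector<std::string> out;
    for (const std::string& w : words) {
        if (w == pattern) { out.push_back(w); continue; }  // distance 0, no DP needed
        if (std::abs((int)w.size() - (int)pattern.size()) > maxDist) continue;
        if (levenshteinCapped(w, pattern, maxDist) <= maxDist) out.push_back(w);
    }
    return out;
}
```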

Just some thoughts....

OTHER TIPS

I do not think you can calculate this kind of function without actually enumerating all rows.
So the solutions are:

  1. Make the enumeration very fast (but this doesn't really scale).
  2. Filter the initial candidates somehow (index by a letter, require at least x common letters).
  3. Use an alternative (indexable) algorithm such as n-grams, as sketched below (though I have no details on how n-gram result quality compares with D-L distance).
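For item 3, a minimal sketch of an n-gram index, assuming trigrams with a shared-trigram count as the coarse filter (buildIndex, candidates, and minShared are illustrative names, not an established API). The returned ids are only candidates and still need verification with a real distance function:

```cpp
#include <string>
#include <unordered_map>
#include <vector>

// Inverted index from trigram to the ids of the words containing it.
using TrigramIndex = std::unordered_map<std::string, std::vector<int>>;

TrigramIndex buildIndex(const std::vector<std::string>& words) {
    TrigramIndex idx;
    for (int id = 0; id < (int)words.size(); ++id) {
        const std::string& w = words[id];
        for (size_t i = 0; i + 3 <= w.size(); ++i)
            idx[w.substr(i, 3)].push_back(id);
    }
    return idx;
}

// Collect ids of words sharing at least minShared trigrams with the pattern.
std::vector<int> candidates(const TrigramIndex& idx, const std::string& p, int minShared) {
    std::unordered_map<int, int> shared;
    for (size_t i = 0; i + 3 <= p.size(); ++i) {
        auto it = idx.find(p.substr(i, 3));
        if (it == idx.end()) continue;
        for (int id : it->second) ++shared[id];
    }
    std::vector<int> out;
    for (const auto& [id, cnt] : shared)
        if (cnt >= minShared) out.push_back(id);
    return out;
}
```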

A solution off the top of my head might be to store the database in a sorted set (e.g., std::set in C++), as it seems to me that lexicographically sorted strings would compare well. To approximate the given string's position in the set, use the set's upper_bound member, then iterate outward from that position in both directions, computing the distance as you go, and stop once the candidates drift too far from the pattern. I have a feeling this would probably only match strings with the same start character, but if you're using the algorithm for spell-checking, that restriction is common, or at least unsurprising.
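A sketch of that set-based walk, reusing the levenshteinCapped function from the earlier answer's sketch; the stopping rule here (bail out once the first character no longer matches) is just one reading of the idea, which the answer leaves open:

```cpp
#include <set>
#include <string>
#include <vector>

// Walk outward from the pattern's position in a sorted set, keeping words
// within maxDist and stopping once the first character diverges.
std::vector<std::string> nearbyMatches(const std::set<std::string>& dict,
                                       const std::string& pattern, int maxDist) {
    std::vector<std::string> out;
    if (pattern.empty()) return out;
    auto pos = dict.upper_bound(pattern);  // first element greater than pattern

    for (auto it = pos; it != dict.end(); ++it) {  // forward walk
        if (it->empty() || (*it)[0] != pattern[0]) break;
        if (levenshteinCapped(*it, pattern, maxDist) <= maxDist) out.push_back(*it);
    }
    for (auto it = pos; it != dict.begin(); ) {    // backward walk (includes exact match)
        --it;
        if (it->empty() || (*it)[0] != pattern[0]) break;
        if (levenshteinCapped(*it, pattern, maxDist) <= maxDist) out.push_back(*it);
    }
    return out;
}
```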

Edit: If you're looking for an optimisation of the algorithm itself, however, this answer is irrelevant.

I have used KNIME for string fuzzy matching and got very fast results. It is also very easy to build visual workflows in it. Just install the free edition of KNIME from https://www.knime.org/, then use the "String Distance" and "Similarity Search" nodes to get your results. I have attached a small fuzzy-matching sample workflow here (the input data comes in from the top and the patterns to search for come in from the bottom in this case).

I would recommend looking into Ankiro.

I'm not certain that it meets your requirements for precision, but it is fast.

Licensed under: CC-BY-SA with attribution