Problem

We have a requirement to read/write more than 10 million strings into a file. We also do not want duplicates in the file. Since each string is flushed to the file as soon as it is read, we are not keeping the strings in memory.

We cannot use hashCode() because hash collisions might cause us to treat a distinct string as a duplicate and miss it. Two other approaches I found while googling:

1. Use a message digest algorithm like MD5 - but it might be too costly to calculate and store.

2. Use a checksum algorithm. [I am not sure whether this produces a unique key for a string - can someone please confirm?]

Is there any other approach available? Thanks.


Solution

If you're okay with a microscopic risk of collisions, you could use a hash function such as MD5, as you suggest, and rely on the hashes.
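A minimal sketch of that idea in Java, keeping only the 16-byte MD5 digests in memory; the class and method names are illustrative, not from the original answer:

    import java.nio.ByteBuffer;
    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;
    import java.util.HashSet;
    import java.util.Set;

    public class Md5Deduplicator {
        // ByteBuffer's equals/hashCode are content-based, so the set compares raw digests.
        private final Set<ByteBuffer> seenDigests = new HashSet<>();
        private final MessageDigest md5;

        public Md5Deduplicator() throws NoSuchAlgorithmException {
            this.md5 = MessageDigest.getInstance("MD5");
        }

        /** Returns true if the string has not been seen before (barring an MD5 collision). */
        public boolean isNew(String s) {
            byte[] digest = md5.digest(s.getBytes(StandardCharsets.UTF_8));
            return seenDigests.add(ByteBuffer.wrap(digest));
        }
    }

Ten million digests at 16 bytes each is roughly 160 MB of raw digest data, plus the per-object overhead of the set.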

Another alternative, possibly with a larger memory footprint, is to store the already encountered strings in a trie (a special type of tree).
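A bare-bones trie used purely as a string set might look like the sketch below (illustrative; memory use depends heavily on how much the strings share prefixes):

    import java.util.HashMap;
    import java.util.Map;

    // Minimal trie acting as a set of strings: add() returns false if the string was already present.
    public class StringTrie {
        private static final class Node {
            final Map<Character, Node> children = new HashMap<>();
            boolean terminal; // true if a stored string ends at this node
        }

        private final Node root = new Node();

        /** Inserts s; returns true if it was not present before. */
        public boolean add(String s) {
            Node node = root;
            for (int i = 0; i < s.length(); i++) {
                node = node.children.computeIfAbsent(s.charAt(i), c -> new Node());
            }
            if (node.terminal) {
                return false;
            }
            node.terminal = true;
            return true;
        }
    }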


Update: Yet another alternative would be to use a Bloom filter. This still relies on hashing, but it can be tuned to have an arbitrarily small probability of collisions.
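A self-contained Bloom filter sketch; the bit count, hash count and the two mixing functions are illustrative, and in practice you would size them from the expected number of strings and the false-positive rate you can tolerate:

    import java.util.BitSet;

    public class SimpleBloomFilter {
        private final BitSet bits;
        private final int size;      // number of bits (m)
        private final int hashCount; // number of hash functions (k)

        public SimpleBloomFilter(int size, int hashCount) {
            this.bits = new BitSet(size);
            this.size = size;
            this.hashCount = hashCount;
        }

        /** Sets the string's bits; returns true if any bit was newly set (definitely not seen before). */
        public boolean add(String s) {
            int h1 = s.hashCode();
            int h2 = secondaryHash(s);
            boolean changed = false;
            for (int i = 0; i < hashCount; i++) {
                int index = Math.floorMod(h1 + i * h2, size);
                if (!bits.get(index)) {
                    bits.set(index);
                    changed = true;
                }
            }
            return changed;
        }

        /** False means definitely not seen; true means possibly seen (may be a false positive). */
        public boolean mightContain(String s) {
            int h1 = s.hashCode();
            int h2 = secondaryHash(s);
            for (int i = 0; i < hashCount; i++) {
                if (!bits.get(Math.floorMod(h1 + i * h2, size))) {
                    return false;
                }
            }
            return true;
        }

        // Double hashing: a second base hash with a different multiplier than String.hashCode().
        private static int secondaryHash(String s) {
            int h = 0;
            for (int i = 0; i < s.length(); i++) {
                h = 61 * h + s.charAt(i);
            }
            return h;
        }
    }

Note that a false positive here means a genuinely new string is treated as a duplicate, so you either accept that tiny loss rate or add a slower exact check for the "possibly seen" case.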

Other tips

Storing 10 million strings in memory is indeed a lot, so I understand the reason to write them to the file immediately instead of storing them in e.g. a TreeSet<String> first. But where would you store the 10 million unique numerical keys that you want to compare against? If you want the keys to be unique and numerical (which has a much smaller base/radix than letters), you can't make a key shorter than the string itself already is, so you won't save any memory. Perhaps with data compression such as GZIP, but that would only add a lot of overhead. MD5 is also inappropriate since two different strings can yield the same hash.

I really see no better solution for this than using a decent RDBMS (SQL database) in which you declare the column as UNIQUE and handle the constraint violation accordingly. An RDBMS is highly optimized for this kind of task.
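A hedged JDBC sketch of that approach; the table name strings, the column value, and the assumption that the driver reports duplicates as SQLIntegrityConstraintViolationException are illustrative and depend on your database:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.sql.SQLIntegrityConstraintViolationException;

    public class UniqueStringWriter implements AutoCloseable {
        private final Connection connection;
        private final PreparedStatement insert;

        public UniqueStringWriter(String jdbcUrl) throws SQLException {
            this.connection = DriverManager.getConnection(jdbcUrl);
            // Assumes a table created beforehand, e.g.:
            //   CREATE TABLE strings (value VARCHAR(255) NOT NULL UNIQUE)
            this.insert = connection.prepareStatement("INSERT INTO strings (value) VALUES (?)");
        }

        /** Returns true if the string was inserted, false if the UNIQUE constraint rejected it. */
        public boolean write(String s) throws SQLException {
            insert.setString(1, s);
            try {
                insert.executeUpdate();
                return true;
            } catch (SQLIntegrityConstraintViolationException duplicate) {
                // Not every driver throws this subclass; some only report SQLState "23xxx".
                return false;
            }
        }

        @Override
        public void close() throws SQLException {
            insert.close();
            connection.close();
        }
    }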

If you really can't consider a database, then you need to re-read the file for any existing entry before each write/flush. Maybe not very fast, but certainly memory efficient.
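A minimal sketch of that scan-before-write idea (names illustrative). It is correct and needs almost no memory, but it costs a full pass over the file for every string, which will be slow for 10 million writes:

    import java.io.BufferedReader;
    import java.io.File;
    import java.io.FileReader;
    import java.io.FileWriter;
    import java.io.IOException;
    import java.io.PrintWriter;

    public class FileScanDeduplicator {
        private final File file;

        public FileScanDeduplicator(File file) {
            this.file = file;
        }

        /** Appends s as a new line only if an identical line is not already in the file. */
        public boolean appendIfAbsent(String s) throws IOException {
            if (file.exists()) {
                try (BufferedReader reader = new BufferedReader(new FileReader(file))) {
                    String line;
                    while ((line = reader.readLine()) != null) {
                        if (line.equals(s)) {
                            return false;
                        }
                    }
                }
            }
            try (PrintWriter writer = new PrintWriter(new FileWriter(file, true))) {
                writer.println(s);
            }
            return true;
        }
    }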

There is no way to make a function that produces a unique key for a string and is shorter than that string.
There are data structures that can solve your task. A B-tree might fit if your data is large enough. Depending on the nature of your input, there might be more effective approaches.

Reliably removing duplicates is pretty much as difficult as sorting the file. As another answer indicates, there is no guaranteed way of precisely detecting duplicates without keeping a full copy of each string in memory, which seems to be exactly what you're trying to avoid.

You could keep an in-memory or on-disk index of hashcodes, and use these to retrieve actual strings from file storage for comparison, but this would essentially duplicate what a database would be able to do for you.
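One way such an index could look (an illustrative sketch, not a full solution: it assumes ASCII-ish strings because RandomAccessFile.readLine()/writeBytes() work byte-per-char, and it does not rebuild the index for a file that already has content):

    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // In-memory index from hash code to the file offsets of lines with that hash.
    // On a hash hit, only those few lines are read back and compared, not the whole file.
    public class OffsetIndexedDeduplicator implements AutoCloseable {
        private final RandomAccessFile file;
        private final Map<Integer, List<Long>> hashToOffsets = new HashMap<>();

        public OffsetIndexedDeduplicator(String path) throws IOException {
            this.file = new RandomAccessFile(path, "rw");
            this.file.seek(this.file.length()); // append after any existing content
        }

        /** Appends s unless an equal line is already recorded; returns true if it was written. */
        public boolean append(String s) throws IOException {
            List<Long> offsets = hashToOffsets.computeIfAbsent(s.hashCode(), h -> new ArrayList<>());
            long end = file.getFilePointer();
            for (long offset : offsets) {
                file.seek(offset);
                String existing = file.readLine();
                if (s.equals(existing)) {
                    file.seek(end);
                    return false;
                }
            }
            file.seek(end);
            offsets.add(end);
            file.writeBytes(s);
            file.writeBytes("\n");
            return true;
        }

        @Override
        public void close() throws IOException {
            file.close();
        }
    }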

An alternative is to post-process the file once it's complete. The UNIX sort command is pretty good with large files (How could the UNIX sort command sort a very large file?), so I'd expect the standard UNIX command-line approach to work reasonably well:

    sort my-file-of-strings.txt | uniq > my-filtered-file-of-strings.txt

(Note that the file has to be sorted before being passed to uniq, since uniq only removes adjacent duplicates.)

If you haven't got these tools (or equivalents) available, then you can always try implementing some variant of an external merge sort yourself.
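If you do end up rolling your own, a compact external-sort-and-dedup sketch might look like this (chunk size, temp-file handling and names are all illustrative):

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.PrintWriter;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;
    import java.util.PriorityQueue;
    import java.util.TreeSet;

    public class ExternalSortDedup {

        /** One entry in the k-way merge: the reader it came from and its current line. */
        private static final class Head {
            final BufferedReader reader;
            String line;
            Head(BufferedReader reader, String line) { this.reader = reader; this.line = line; }
        }

        public static void dedup(Path input, Path output, int linesPerChunk) throws IOException {
            // Phase 1: split the input into sorted, locally de-duplicated chunk files.
            List<Path> chunks = new ArrayList<>();
            try (BufferedReader in = Files.newBufferedReader(input)) {
                TreeSet<String> buffer = new TreeSet<>();
                String line;
                while ((line = in.readLine()) != null) {
                    buffer.add(line);
                    if (buffer.size() >= linesPerChunk) {
                        chunks.add(writeChunk(buffer));
                        buffer.clear();
                    }
                }
                if (!buffer.isEmpty()) {
                    chunks.add(writeChunk(buffer));
                }
            }

            // Phase 2: k-way merge of the chunks, dropping duplicates across chunks.
            PriorityQueue<Head> heads = new PriorityQueue<>(Comparator.comparing((Head h) -> h.line));
            for (Path chunk : chunks) {
                BufferedReader reader = Files.newBufferedReader(chunk);
                String first = reader.readLine();
                if (first != null) {
                    heads.add(new Head(reader, first));
                } else {
                    reader.close();
                }
            }
            try (PrintWriter out = new PrintWriter(Files.newBufferedWriter(output))) {
                String previous = null;
                while (!heads.isEmpty()) {
                    Head head = heads.poll();
                    if (!head.line.equals(previous)) {
                        out.println(head.line);
                        previous = head.line;
                    }
                    String next = head.reader.readLine();
                    if (next != null) {
                        head.line = next;
                        heads.add(head);
                    } else {
                        head.reader.close();
                    }
                }
            }
            for (Path chunk : chunks) {
                Files.deleteIfExists(chunk);
            }
        }

        private static Path writeChunk(TreeSet<String> sortedLines) throws IOException {
            Path chunk = Files.createTempFile("dedup-chunk-", ".txt");
            try (PrintWriter out = new PrintWriter(Files.newBufferedWriter(chunk))) {
                for (String s : sortedLines) {
                    out.println(s);
                }
            }
            return chunk;
        }
    }

With, say, one million lines per chunk, 10 million strings produce about ten temporary files, and only one chunk is ever held in memory at a time.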

If the strings come from a fixed pool of N possible strings, then you can use minimal perfect hashing to create an array indexed 0...N-1. A zero in the slot determined by the perfect hash function means the string has not been seen so far.
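The bookkeeping around such a function is tiny; the hard part, building the minimal perfect hash for the known pool, is assumed here to come from elsewhere (illustrative sketch):

    import java.util.BitSet;
    import java.util.function.ToIntFunction;

    public class PerfectHashSeenSet {
        private final BitSet seen;
        private final ToIntFunction<String> perfectHash; // maps each pool string to a unique slot 0..N-1

        public PerfectHashSeenSet(int poolSize, ToIntFunction<String> perfectHash) {
            this.seen = new BitSet(poolSize);
            this.perfectHash = perfectHash;
        }

        /** Marks s as seen; returns true if its slot was still zero, i.e. not seen before. */
        public boolean markSeen(String s) {
            int slot = perfectHash.applyAsInt(s);
            if (seen.get(slot)) {
                return false;
            }
            seen.set(slot);
            return true;
        }
    }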

Otherwise, short of using a lot of memory or one of the solutions suggested so far, the only strictly correct approach is to re-read the file before deciding to write the string to it.

You could do this as efficiently as possible by memory mapping portions of the file.
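One way to read that suggestion: scan the existing file through sliding memory-mapped windows instead of heap buffers. The window size, the overlap handling and the assumption that every stored line ends with a newline are all illustrative:

    import java.io.IOException;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    public class MappedFileScanner {
        private static final long WINDOW = 64L * 1024 * 1024; // 64 MB per mapping

        /** Returns true if the file already contains the given line (lines must end with '\n'). */
        public static boolean containsLine(Path file, String line) throws IOException {
            byte[] firstLinePattern = (line + "\n").getBytes(StandardCharsets.UTF_8);
            byte[] pattern = ("\n" + line + "\n").getBytes(StandardCharsets.UTF_8);
            try (FileChannel channel = FileChannel.open(file, StandardOpenOption.READ)) {
                long fileSize = channel.size();
                if (fileSize == 0) {
                    return false;
                }
                // The very first line is not preceded by '\n', so check it separately.
                if (fileSize >= firstLinePattern.length) {
                    MappedByteBuffer head =
                            channel.map(FileChannel.MapMode.READ_ONLY, 0, firstLinePattern.length);
                    if (matchesAt(head, 0, firstLinePattern)) {
                        return true;
                    }
                }
                // Overlap consecutive windows by the pattern length so a match that spans
                // a window boundary is not missed.
                for (long start = 0; start < fileSize; start += WINDOW - pattern.length) {
                    long length = Math.min(WINDOW, fileSize - start);
                    MappedByteBuffer window = channel.map(FileChannel.MapMode.READ_ONLY, start, length);
                    for (int i = 0; i <= window.limit() - pattern.length; i++) {
                        if (matchesAt(window, i, pattern)) {
                            return true;
                        }
                    }
                }
            }
            return false;
        }

        private static boolean matchesAt(MappedByteBuffer buffer, int offset, byte[] pattern) {
            for (int j = 0; j < pattern.length; j++) {
                if (buffer.get(offset + j) != pattern[j]) {
                    return false;
                }
            }
            return true;
        }
    }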

I really think the best solution is - as someone else already suggested - to use a database.

If for some reason you cannot use a database, you can still use a hashcode. Sure, there will be collisions. Just add some code so that when you detect a duplicate hashcode, your program checks the file to determine whether it is a genuine duplicate or a collision.
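A rough sketch of that idea (names illustrative): keep only the 32-bit hash codes in memory and fall back to a file scan when a hash code has been seen before. Boxed Integers in a HashSet carry noticeable overhead for 10 million entries; a primitive-int set would trim that.

    import java.io.BufferedReader;
    import java.io.File;
    import java.io.FileReader;
    import java.io.FileWriter;
    import java.io.IOException;
    import java.io.PrintWriter;
    import java.util.HashSet;
    import java.util.Set;

    public class HashcodeDeduplicator {
        private final File file;
        private final Set<Integer> seenHashes = new HashSet<>();

        public HashcodeDeduplicator(File file) {
            this.file = file;
        }

        /** Appends s to the file unless it is already present; returns true if it was written. */
        public boolean append(String s) throws IOException {
            boolean hashSeenBefore = !seenHashes.add(s.hashCode());
            if (hashSeenBefore && existsInFile(s)) {
                return false; // a genuine duplicate, not just a hash collision
            }
            try (PrintWriter out = new PrintWriter(new FileWriter(file, true))) {
                out.println(s);
            }
            return true;
        }

        private boolean existsInFile(String s) throws IOException {
            if (!file.exists()) {
                return false;
            }
            try (BufferedReader reader = new BufferedReader(new FileReader(file))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    if (line.equals(s)) {
                        return true;
                    }
                }
            }
            return false;
        }
    }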

License: CC-BY-SA with attribution
Not affiliated with Stack Overflow