Question

I'm working on a problem that involves multiple database instances, each with different table structures. The problem is that between these tables there are lots and lots of duplicates, and I need a way to efficiently find them, report them, and possibly eliminate them.

E.g. I have two tables. The first, CustomerData, has the fields:

_countId, customerFID, customerName, customerAddress, _someRandomFlags

and another table, CustomerData2 (built later), with the fields:

_countId, customerFID, customerFirstName, customerLocation, _someOtherRandomFlags.

Between the two tables above, I know for a fact that customerName and customerFirstName were used to store the same data, and similarly customerLocation and customerAddress were also used to store the same data.

Let's say some of the sales team have been using CustomerData, and others have been using CustomerData2. I'd like a scalable way of detecting the redundancies between the tables and reporting them. It can be assumed with reasonable certainty that customerFID in both tables is consistent and refers to the same customer.

One solution I could think of was to create a CustomerData class in Python, map the records in the two tables to this class, compute a hash/signature over the fields of interest (customerName, customerLocation/Address), and store it in a signature table with the columns:

sourceTableName, entityType (customerData), identifyingKey (customerFID), signature 

and then, for each entityType, look for duplicate signatures per customerFID.
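A minimal sketch of this signature approach, assuming the field names from the example above and hypothetical in-memory rows; the normalization (trim and lowercase before hashing) is one possible choice, not a prescribed one:

```python
import hashlib

def make_signature(*fields):
    """Normalize the chosen fields and hash them into a single signature."""
    normalized = "|".join((f or "").strip().lower() for f in fields)
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Hypothetical rows as they might come out of each table.
customer_data = [
    {"customerFID": 1, "customerName": "Ada Lovelace", "customerAddress": "London"},
]
customer_data2 = [
    {"customerFID": 1, "customerFirstName": "Ada Lovelace", "customerLocation": "london "},
]

# Build the signature table described above: one row per source record.
signature_table = []
for row in customer_data:
    signature_table.append({
        "sourceTableName": "CustomerData",
        "entityType": "customerData",
        "identifyingKey": row["customerFID"],
        "signature": make_signature(row["customerName"], row["customerAddress"]),
    })
for row in customer_data2:
    signature_table.append({
        "sourceTableName": "CustomerData2",
        "entityType": "customerData",
        "identifyingKey": row["customerFID"],
        "signature": make_signature(row["customerFirstName"], row["customerLocation"]),
    })
```

With this normalization, the two sample rows above produce the same signature for customerFID 1, which is exactly the redundancy to report.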

In reality, I'm working with huge sets of biomedical data with lots and lots of columns. The tables were created by different people (and sadly with no standard nomenclature or structure) and have duplicate data stored in them.

EDIT: For simplicity's sake, I can move all the database instances to a single server instance.


Solution

If performance weren't a concern, I'd use a high-level, practical approach: use Django's ORM (or SQLAlchemy, or similar) to model your tables and fetch the data to compare. Then use an algorithm for efficiently identifying duplicates (from lists or dicts, depending on how you hold your data). To boost performance, you could parallelize the work with the multiprocessing module or consider a map-reduce solution.
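Once the data is fetched, the duplicate-identification step can be sketched with plain dicts. This assumes rows shaped like the question's signature table (the key names and sample values are illustrative, not from a real schema):

```python
from collections import defaultdict

def find_duplicates(signature_rows):
    """Group rows by (identifyingKey, signature); any group that
    appears more than once is a duplicate worth reporting."""
    groups = defaultdict(list)
    for row in signature_rows:
        key = (row["identifyingKey"], row["signature"])
        groups[key].append(row["sourceTableName"])
    return {key: sources for key, sources in groups.items() if len(sources) > 1}

# Hypothetical signature rows: FID 1 is stored in both tables.
rows = [
    {"identifyingKey": 1, "signature": "abc", "sourceTableName": "CustomerData"},
    {"identifyingKey": 1, "signature": "abc", "sourceTableName": "CustomerData2"},
    {"identifyingKey": 2, "signature": "def", "sourceTableName": "CustomerData"},
]
dupes = find_duplicates(rows)
# → {(1, 'abc'): ['CustomerData', 'CustomerData2']}
```

Because each group is built in a single pass, the partitioning of rows across processes (or map-reduce workers) can happen on the grouping key, which is what makes the parallel variants straightforward.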

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow