Question

Evening,

I'm going through the long process of importing data from a battered, 15-year-old, read-only data format into MySQL to build some smaller statistical tables from it.

The largest table I have built before was (I think) 32 million rows, but I didn't expect it to get that big and was really straining MySQL.

The table will look like this:

surname    name    year    rel    bco    bplace    rco    rplace
Jones      David   1812    head   Lond   Soho      Shop   Shewsbury

So, small ints and varchars.

Could anyone offer advice on how to make this work as quickly as possible? Would indexes on any of the columns help, or would they just slow queries down?

Much of the data in each column will be duplicated many times. Some fields don't have much more than about 100 different possible values.

The main columns I will be querying the table on are: surname, name, rco, rplace.
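For reference, a minimal sketch of the table's DDL, assuming the sample row above (the table name `census` and the column lengths are guesses; adjust them to the actual data):

```sql
-- Hypothetical schema for the table described above.
-- VARCHAR lengths are assumptions; size them to the longest real values.
CREATE TABLE census (
    surname VARCHAR(40),
    name    VARCHAR(40),
    year    SMALLINT,     -- a small int, e.g. 1812
    rel     VARCHAR(20),  -- low cardinality (~100 distinct values)
    bco     VARCHAR(20),
    bplace  VARCHAR(40),
    rco     VARCHAR(20),
    rplace  VARCHAR(40)
) ENGINE=InnoDB;
```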

Was it helpful?

Solution

An INDEX on a column speeds up searches on it. Index the columns you use most often in query conditions. Since you mention that you will be querying on surname, name, rco and rplace, I'd suggest indexing those.

Since the table has 32 million rows, building the indexes will take some time, but it is worth the wait.
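A sketch of adding the suggested indexes (the table name `census` is an assumption; substitute your own):

```sql
-- Index the columns most often used in WHERE clauses.
-- On a 32-million-row table each index build will take a while,
-- and doing them in one ALTER TABLE avoids rescanning the table per index.
ALTER TABLE census
    ADD INDEX idx_surname (surname),
    ADD INDEX idx_name    (name),
    ADD INDEX idx_rco     (rco),
    ADD INDEX idx_rplace  (rplace);
```

If your queries usually filter on several of these columns together (e.g. `WHERE surname = ? AND name = ?`), a composite index such as `(surname, name)` can serve those queries from a single index instead of two separate ones.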

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow