Question

We are using MySQL 5.5 with the InnoDB engine for our database. One of the tables, which receives roughly equal SELECT and INSERT traffic, will see 100-150 million insert operations per day. I have already read about MySQL partitioning and was planning to implement it, but before I do I'd like to hear some thoughts. What is the best way to deal with this kind of load without compromising users' response time?


Solution

First of all, make sure the primary key is auto-incrementing, since it is the clustered index for InnoDB tables. With an auto-increment key, each insertion is an append-only operation; without one, inserts become random writes, which is a major performance killer. Keep the PK small and drop any indexes you don't need. If possible, batch your inserts (see the sketch below), since updating the indexes is a large part of the cost of each insert.
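
A minimal sketch of what that can look like; the table and column names are placeholders, not taken from the question:

    -- Hypothetical table: a small AUTO_INCREMENT primary key keeps inserts append-only.
    CREATE TABLE events (
        id         BIGINT UNSIGNED  NOT NULL AUTO_INCREMENT,
        user_id    INT UNSIGNED     NOT NULL,
        event_type TINYINT UNSIGNED NOT NULL,
        created_at DATETIME         NOT NULL,
        PRIMARY KEY (id),
        -- keep secondary indexes to the minimum you actually query on
        KEY idx_user_created (user_id, created_at)
    ) ENGINE=InnoDB;

    -- Batched insert: one statement (and one pass over the indexes) for many rows
    -- instead of many separate single-row INSERTs.
    INSERT INTO events (user_id, event_type, created_at) VALUES
        (101, 1, NOW()),
        (102, 3, NOW()),
        (103, 1, NOW());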

Make sure the other I/O settings make sense as well, such as how often data is actually flushed to disk; you can also put the binary log on an SSD so it is written as quickly as possible.
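
For illustration, these are the kinds of knobs meant here; the values below only show what trading some durability for insert throughput looks like, not a recommendation for this specific workload:

    -- Check the current flush behaviour.
    SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';
    SHOW VARIABLES LIKE 'sync_binlog';

    -- Example: flush the InnoDB log to disk roughly once per second instead of
    -- at every commit, and let the OS decide when to sync the binary log.
    -- This relaxes durability guarantees, so weigh it against your requirements.
    SET GLOBAL innodb_flush_log_at_trx_commit = 2;
    SET GLOBAL sync_binlog = 0;

    -- The binary log location itself is a startup option (log_bin in my.cnf),
    -- which is where you would point it at an SSD-backed path.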

After all of this, it's common to separate reads from writes with a master-slave setup, so that spikes in insert traffic do not affect reads of the data (assuming it's acceptable to read potentially stale data).
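
If you go that route, how stale the reads can get is visible on each slave with the standard replication status command:

    -- On a slave, Seconds_Behind_Master shows roughly how far reads may lag the master.
    SHOW SLAVE STATUS\G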

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow