Question

How can I optimize a Linux MySQL server for faster queries on tables with many rows (up to 1 billion)?

The server is only used in development, so security, availability, and recovery are not concerns.

Here is an example:

ALTER TABLE links ADD UNIQUE KEY url_index (url(255));
  • duration: > 6 hours
  • row count: ~20 million
  • low CPU usage (~19%)
  • high IO (iotop showed mysql mostly at 600 KB/s read and 500 KB/s write, at 99% IO, with spikes to 2-3 MB/s)
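The 99% IO with low CPU suggests the on-disk sort used to build the index is the bottleneck. One thing sometimes worth trying before rerunning the ALTER is giving that sort more memory; a hedged sketch (the 256 MB value is an arbitrary guess for a 6 GB box, not a recommendation):

```sql
-- Illustrative only: enlarge the buffer MyISAM uses when sorting
-- index entries during ALTER TABLE / REPAIR, then rebuild the key.
SET SESSION myisam_sort_buffer_size = 268435456;  -- 256 MB
ALTER TABLE links ADD UNIQUE KEY url_index (url(255));
```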

links table:

CREATE TABLE `links` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `url` varchar(2000) NOT NULL,
  `domain` varchar(255) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=MyISAM AUTO_INCREMENT=27765646 DEFAULT CHARSET=utf8;

I have a dedicated Linux server (Ubuntu 12.10) with:

  • 6 GB of RAM
  • 1 TB HDD (non-SSD)

my.cnf (http://pastebin.com/vx6BNqrE)

What can I optimize in terms of software (query/table design, my.cnf, or other server configuration), and what should be the first hardware upgrade other than RAM? (Unfortunately RAM is not upgradeable, as only 3 RAM slots are usable.)



Solution

Rather than playing around doing micro-optimizations that might give you a few more transactions an hour, you need to start scaling.

Firstly, get an SSD. If you need more space, RAID several together. If you really care about performance, SSDs make a huge difference, especially given your IO numbers.

Secondly, one server can only go so far. You need to start setting up slaves that will take some of the load off; they also serve as a nifty failover.
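Getting a replica going takes only a few settings on each box. A minimal illustrative my.cnf fragment (the server IDs and log names are arbitrary, and you would still need to point the replica at the master with CHANGE MASTER TO and load a snapshot of the data):

```ini
# --- master my.cnf (illustrative) ---
[mysqld]
server-id = 1
log_bin   = mysql-bin      # binary log that replicas read from

# --- replica my.cnf (illustrative) ---
[mysqld]
server-id = 2              # must differ from the master's
relay_log = mysql-relay-bin
read_only = 1              # keep clients from writing to the replica
```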

MySQL wasn't really made for on-demand schema changes, so the ALTER TABLE is hard to answer; I would actually take a server out of service to prevent locking, run the ALTER TABLE, then sync it back up with the writes it missed, and follow suit with the other servers. You should really just pick one schema and stick with it, though.
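The take-a-server-out routine above can be sketched as the following, run on one replica at a time (assuming the `links` table from the question; in practice you would also redirect client traffic away from the replica first):

```sql
STOP SLAVE;                                             -- pause replication on this replica
ALTER TABLE links ADD UNIQUE KEY url_index (url(255));  -- long index build with no client load
START SLAVE;                                            -- replay the writes it missed
```

Once the replica has caught up, repeat on the next server, leaving the master until last (or promoting an already-altered replica).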

Best of luck!

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow