Question

I have 2 servers running an application, and the server that runs the MySQL DBMS has a load average around 3 times higher than the webserver's. I think something is wrong, because I have a lot of resources (RAM) left on the server. The load average is about 3–4, and the server's response time is increasing. Here is the output of mysqltuner.pl:

[--] Up for: 9h 10m 55s (42M q [1K qps], 4M conn, TX: 19B, RX: 6B)
[--] Reads / Writes: 94% / 6%
[--] Total buffers: 10.2G global + 6.6M per thread (800 max threads)
[OK] Maximum possible memory usage: 15.4G (49% of installed RAM)
[OK] Slow queries: 0% (5K/42M)
[OK] Highest usage of available connections: 36% (294/800)
[OK] Key buffer size / total MyISAM indexes: 100.0M/34.8M
[!!] Key buffer hit rate: 73.3% (45K cached / 12K reads)
[!!] Query cache is disabled
[OK] Sorts requiring temporary tables: 0% (2K temp sorts / 3M sorts)
[OK] Temporary tables created on disk: 0% (54 on disk / 52K total)
[OK] Thread cache hit rate: 99% (478 created / 4M connections)
[!!] Table cache hit rate: 0% (400 open / 78K opened)
[OK] Open file limit used: 0% (3/4K)
[OK] Table locks acquired immediately: 100% (37M immediate / 37M locks)
[OK] InnoDB buffer pool / data size: 10.0G/1.0G
[OK] InnoDB log waits: 0

Server Configuration (note: this server runs ONLY the mysql server):

  • Intel Xeon E5-2620v3 (6 cores - 12 Threads)
  • 2x 16GB DDR4-2133 ECC (Total: 32GB)
  • 2x 256 GB SSD 2.5" (RAID1)

Based on this server configuration, I'm wondering whether my my.cnf has the best options, and I'd like to know if there are any suggestions for updating it based on the server configuration and the mysqltuner.pl output. Here's my my.cnf:

skip-external-locking
skip-name-resolve
lower_case_table_names          = 1

wait_timeout                    = 4
interactive_timeout             = 15

key_buffer_size                 = 100M
join_buffer_size                = 4M

max_allowed_packet              = 1M
thread_stack                    = 256K
thread_cache_size               = 250

myisam-recover                  = BACKUP
max_connections                 = 800

#Qcache not enabled due to high tax of inserts/updates
query_cache_limit               = 1M
query_cache_type                = 0
query_cache_size                = 0

log_error = /var/log/mysql/error.log

log_slow_queries                = /var/log/mysql/mysql-slow.log
long_query_time                 = 1

expire_logs_days                = 10
max_binlog_size                 = 100M

tmp_table_size                  = 30M
max_heap_table_size             = 30M

#InnoDB config's
innodb_commit_concurrency       = 0
innodb_io_capacity              = 90000
innodb                          = ON
innodb_flush_method             = O_DIRECT
innodb_file_per_table           = 1
innodb_flush_log_at_trx_commit  = 2
innodb_doublewrite              = 1
innodb_additional_mem_pool_size = 64M
innodb_thread_concurrency       = 12
innodb_log_file_size            = 512M
innodb_log_buffer_size          = 24M
innodb_read_io_threads          = 12
innodb_buffer_pool_size         = 10G
innodb_write_io_threads         = 12
innodb_log_files_in_group       = 2

Note: I have hidden some irrelevant parameters of my my.cnf.

UPDATE

I'm using MySQL Server 5.5, and I've now increased my table_open_cache (it had its default value of 64 at the time of the post). Now, mysqltuner shows the following line about the table cache:

[OK] Table cache hit rate: 99% (1K open / 1K opened)

About the ENGINE point: I have 87 tables, and only 11 of them are MyISAM (12% of the entire database; those tables are select/insert-only). The rest are all InnoDB (with a high rate of select/insert/update). I don't have ANY DELETE statements in my system; instead, I set a flag on a row saying that it was deleted (so I can delete it at a non-busy time). I have a lot of queries (especially queries like SELECT COUNT(*)) taking more than 1.5–2 seconds to execute.

About innodb_io_capacity: the server disks are all SSDs rated at 100,000 IOPS.


Solution

Are you using MyISAM? Or InnoDB? Or both?

[!!] Query cache is disabled

This is "good", not "bad".

[!!] Table cache hit rate: 0% (400 open / 78K opened)

How many tables do you have? Thousands? Please elaborate on why so many. (This is usually a design flaw.) Meanwhile, increase table_open_cache to, say, 2000. What MySQL version are you running? Set table_open_cache_instances = 8 if it is a new enough version.
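On 5.5 the table cache can be raised at runtime for a quick test before persisting the value in my.cnf — a minimal sketch, using the suggested value above (2000 is a starting point, not a measured optimum):

```sql
-- Raise the table cache without a restart; persist the same value
-- in my.cnf afterwards so it survives a server restart.
SET GLOBAL table_open_cache = 2000;

-- table_open_cache_instances (5.6.6+) is NOT dynamic; it must be set
-- in my.cnf and takes effect only after a restart.

-- Verify the effect: Opened_tables should stop climbing rapidly.
SHOW GLOBAL STATUS LIKE 'Opened_tables';
```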

[OK] Highest usage of available connections: 36% (294/800)

Yeah, that is "OK" in one sense. But in another sense, 294 simultaneous connections is a lot. Please elaborate. Perhaps clients are failing to disconnect?

[--] Up for: 9h
[OK] Slow queries: 0% (5K/42M)
load average is about 3~4
long_query_time = 1

Those sort of contradict each other. "Load average", in my opinion, is not very useful. Do you have a measure of CPU utilization? Still, if CPU is high, why aren't there more queries taking more than 1 second? Anyway, please run pt-query-digest and let's see the first couple of queries it finds. We may be able to speed them up; that may speed up the system.

Update

Thanks for the update.

SELECT COUNT(*) on InnoDB tables is proportional to the size of the table, so it will get slower and slower. The slowlog will point out which of them are run often enough to be a nuisance.
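A common workaround, if some of those counts turn out to be hot in the slowlog, is to keep a running count in a tiny summary table instead of re-scanning the big table every time. A hedged sketch — the table and column names (`orders`, `order_counts`) are illustrative only, not from the poster's schema:

```sql
-- One-row summary table holding the current count.
CREATE TABLE order_counts (
    total BIGINT UNSIGNED NOT NULL
) ENGINE=InnoDB;

-- Seed it once from the real table.
INSERT INTO order_counts SELECT COUNT(*) FROM orders;

-- Maintain it on every insert (in application code or via a trigger):
UPDATE order_counts SET total = total + 1;

-- A count is now a single-row read instead of a full scan:
SELECT total FROM order_counts;
```

Since the poster flags rows as deleted rather than deleting them, the flag update would decrement the counter the same way.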

Between the SSDs and sufficient caching (key_buffer, buffer_pool, table_open_cache) plus innodb_flush_log_at_trx_commit = 2, you have probably minimized the I/O. Now to focus on slowlog.

OTHER TIPS

Rick James covered it quite well; I just want to add one thing:

[!!] Query cache is disabled
[--] Reads / Writes: 94% / 6% - and [1K qps]

To me it seems there might be some potential in enabling the query cache — but it depends a lot on your queries. If many of them contain dynamic conditions (a datetime/timestamp for fetching the most recent something) or similar, then probably don't bother. But if many of those read queries are identical across connections, I would give it a try. You can enable the QC without a server restart for testing purposes: give it, say, 100MB, enable it for some time, then compare load, responsiveness, etc. with and without, and you will see if it makes a difference. Because of the scalability issues the QC is known for, it can actually make things worse, so monitor it during the test — but we use it on a production server with a small but measurable positive effect.
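The runtime test described above can be done with a few SET GLOBAL statements — a sketch, assuming MySQL 5.5 as the poster stated (on 5.6+, a server started with query_cache_type=0 refuses to enable the QC at runtime and must be restarted):

```sql
-- Enable the query cache temporarily, with the ~100MB trial size above.
SET GLOBAL query_cache_type  = 1;                  -- ON
SET GLOBAL query_cache_size  = 100 * 1024 * 1024;  -- ~100MB
SET GLOBAL query_cache_limit = 1 * 1024 * 1024;    -- skip results > 1MB

-- Monitor during the test: a low hit ratio (Qcache_hits vs Com_select)
-- or many 'Waiting for query cache lock' threads in SHOW PROCESSLIST
-- means the QC is hurting rather than helping.
SHOW GLOBAL STATUS LIKE 'Qcache%';

-- To roll back, set query_cache_type = 0 and query_cache_size = 0.
```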

Licensed under: CC-BY-SA with attribution
Not affiliated with dba.stackexchange