Question

It's my first time using RDS on AWS. I'm using a t2.medium instance running Aurora MySQL with the default configuration. CPU usage and DB connections are quite normal until "something" happens that drives the DB connections all the way up to the maximum (for t2.medium that is 80 connections).


I only have one web app, running on an EC2 instance. When the DB connections reach the maximum, the EC2 instance's CPU usage is absolutely normal (25-30%), but every attempt to connect to the DB instance results in "Too many connections".


I also checked the DB instance's CPU utilization at the time, and it showed no sign of high load. During the spike, CPU utilization actually dropped to 20% and stayed there consistently.


The thing I don't understand: DB connections are at the maximum, so why did DB CPU utilization drop? Shouldn't it be at its maximum as well, because of the computation needed for the queries on all those connections?

Please help me understand, thank you very much.

(I had to resize the RDS instance to r4.large when the second spike happened; I'm still running it now until I find out the cause...)


The solution

If you run show processlist; it will show you all the connections that are currently running against your DB. Running show status like 'Conn%'; shows the connection counters (the Threads_connected value is the number of currently open connections).
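For example, from the MySQL client (assuming a user with the PROCESS privilege, so you can see other sessions' threads):

-- List every connection and what it is currently doing
SHOW FULL PROCESSLIST;

-- Connection counters; Threads_connected is the number of currently open connections
SHOW STATUS LIKE 'Conn%';
SHOW STATUS LIKE 'Threads_connected';
SHOW VARIABLES LIKE 'max_connections';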

You may think that a high number of connections would increase the CPU, but ultimately it depends on what those connections are trying to accomplish. If they aren't doing much work themselves, but are preventing your main application from getting connections, then the normal processing level will go down, simply because your normal queries aren't running.

The annoying thing is that to run the show processlist command you need to actually be connected to the DB, which you won't be able to do once you're at max connections.

A slight workaround is to increase your max connections (if you're able to) and set up some monitoring that runs every so often (every 5 minutes would have caught this), and when you're over 50 connections, run a command that dumps all the active connections somewhere so you can review them later (see the sketch below).
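One way to sketch that inside MySQL itself (assuming the event scheduler is enabled — on RDS that is the event_scheduler parameter in the DB parameter group — and using the hypothetical names connection_log and dump_connections):

-- Empty table with the same columns as the processlist, plus a timestamp
CREATE TABLE IF NOT EXISTS connection_log AS
  SELECT NOW() AS logged_at, p.*
  FROM information_schema.PROCESSLIST p
  WHERE 1 = 0;

-- Every 5 minutes, if more than 50 connections are open, dump them all for later review
CREATE EVENT IF NOT EXISTS dump_connections
ON SCHEDULE EVERY 5 MINUTE
DO
  INSERT INTO connection_log
  SELECT NOW(), p.*
  FROM information_schema.PROCESSLIST p
  WHERE (SELECT COUNT(*) FROM information_schema.PROCESSLIST) > 50;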

Other tips

In your RDS instance, first check all the IOPS metrics.

RDS provides a certain amount of IOPS. If your RDS instance has a 10 GB SSD, your IOPS quota is probably around 100 IOPS (the default minimum); this value increases with the size of the instance's SSD.

If you don't have enough IOPS, your RDS instance gets blocked on I/O operations, which leaves queries stuck in the 'Sending data' state in MySQL and causes long timeouts, but with no change in CPU or memory usage, because this is a disk issue.
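A quick way to check for that (assuming MySQL 5.x, where the processlist is exposed through information_schema):

-- Show long-running queries stuck waiting on I/O in the 'Sending data' state
SELECT ID, USER, DB, TIME, STATE, LEFT(INFO, 80) AS query_snippet
FROM information_schema.PROCESSLIST
WHERE STATE = 'Sending data'
ORDER BY TIME DESC;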

To fix it, change the MySQL configuration; in RDS you do this from the dashboard (through a parameter group).

I run MySQL on an EC2 instance instead, which is lower cost for me.

See my my.cnf:

[mysqld]


port                           = 3306
user                           = mysql
default-storage-engine         = InnoDB
socket                         = /var/lib/mysql/mysql.sock
pid-file                       = /var/lib/mysql/mysql.pid


log-error                      = /var/lib/mysql/mysql-error.log
log-queries-not-using-indexes  = 1
slow-query-log                 = 0
slow-query-log-file            = /var/lib/mysql/mysql-slow.log
log_error_verbosity            = 2


max-allowed-packet              = 6M
skip-name-resolve
sql-mode                        = STRICT_TRANS_TABLES,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_AUTO_VALUE_ON_ZERO,NO_ENGINE_SUBSTITUTION,NO_ZERO_DATE,NO_ZERO_IN_DATE,ONLY_FULL_GROUP_BY
sysdate-is-now                  = 1



datadir                         = /var/lib/mysql


key-buffer-size                 = 128M
query_cache_size                = 128M

tmp_table_size                  = 256M
max_heap_table_size             = 256M

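# Buffer pool sized to roughly half of this 4GB instance's RAM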
innodb-buffer-pool-size         = 2G 
innodb_log_buffer_size          = 8M
innodb_log_file_size            = 1G


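# Idle non-interactive connections are dropped after 10s so they don't pile up; interactive clients get 300s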
wait_timeout                    = 10
interactive_timeout             = 300


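# Connection limits: at most 200 simultaneous connections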
max-connect-errors              = 100000
max-connections                 = 200


sort_buffer_size                = 4M
read_buffer_size                = 2M
read_rnd_buffer_size            = 2M
join_buffer_size                = 2M
thread_stack                    = 4M
thread-cache-size               = 80


performance_schema              = on


query_cache_type                = 1
query_cache_limit               = 128M


table_open_cache                = 2680
open-files-limit                = 1024000
table-definition-cache          = 3024



# IOPS OPTIMIZATION #

innodb-flush-method            = O_DIRECT
innodb-log-files-in-group      = 2
innodb-flush-log-at-trx-commit = 2

innodb_buffer_pool_instances   = 1
innodb_stats_on_metadata       = 0

innodb_io_capacity             = 100
innodb_use_native_aio          = 1
innodb-file-per-table          = 0

explicit_defaults_for_timestamp = 1

This configuration is for 2 vCPUs, 4 GB RAM, and a 10 GB SSD with 100 IOPS. It works fine for me behind an AWS NLB with Auto Scaling.

Another optimization would be to increase innodb-buffer-pool-size to about 50% of the instance's memory, and to set innodb_log_file_size to about 50% of the innodb-buffer-pool-size value.
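For the 4 GB instance above that rule gives roughly a 2G buffer pool and a 1G log file, which matches the configuration shown. A quick way to see what the running server is actually using (assuming MySQL 5.6 or newer):

-- Current InnoDB sizing, in GB
SELECT @@innodb_buffer_pool_size / 1024 / 1024 / 1024 AS buffer_pool_gb,
       @@innodb_log_file_size   / 1024 / 1024 / 1024 AS log_file_gb;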

Read this: https://dev.mysql.com/doc/refman/5.5/en/optimizing-innodb-diskio.html

I hope this helps you or other users with the same problem. =)

Licensed under: CC-BY-SA with attribution
Not affiliated with dba.stackexchange