Identify slow queries without the slow query log in MySQL Server
26-09-2020
Question
I am wondering whether there is any other way to check for slow queries without enabling the slow query log. Suppose I have a highly busy server that can't afford much logging, to save memory and I/O. Is there any other way to find out whether I have a slow query? I know we can profile a query, but I am still not sure how to identify which query is taking the most time and memory in the first place.
I have just started MySQL administration and am not sure how to handle this. Any guidance will be highly appreciated.
Solution
If you do not want to enable the slow query log at all, I have a suggestion: you can run pt-query-digest over an interval of time.
I have suggested this a few times on the DBA StackExchange:

- Nov 24, 2011: MySQL general query log performance effects
- Apr 24, 2012: Investigate peak in MySQL throughput
- Jul 26, 2012: What is running right now?

In my Nov 24, 2011 link, I provided a shell script you can crontab to launch pt-query-digest.
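As a sketch of what such a cron wrapper can look like: pt-query-digest and its `--processlist` / `--run-time` options are real, but the DSN, the monitoring user, the report directory, and the 5-minute window below are all illustrative assumptions, not the exact script from that answer.

```shell
#!/bin/sh
# Sketch of a crontab-able wrapper for pt-query-digest.
# Assumptions: a "monitor" user exists, and /tmp/ptqd is an acceptable
# place for reports. Adjust the DSN and paths for your server.
REPORT_DIR=${REPORT_DIR:-/tmp/ptqd}
mkdir -p "$REPORT_DIR"
REPORT="$REPORT_DIR/report_$(date +%Y%m%d_%H%M%S).txt"

if command -v pt-query-digest >/dev/null 2>&1; then
    # Poll SHOW PROCESSLIST for 5 minutes, then write a digest report
    # ranking the captured queries by time consumed.
    pt-query-digest --processlist h=localhost,u=monitor \
        --run-time 5m > "$REPORT"
else
    echo "pt-query-digest not installed; skipping" >&2
fi
```

Scheduled from cron, this gives you a rolling digest of the busiest statements without ever enabling the slow query log.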
GIVE IT A TRY !!!
OTHER TIPS
You can run the following statement in a loop from a script that issues it every 10 seconds, for example:
mysql -e 'SELECT * FROM INFORMATION_SCHEMA.PROCESSLIST WHERE TIME > 10 AND COMMAND <> "Sleep"'
You can customize it to give you more or less info depending on the query you issue.
mysql> desc INFORMATION_SCHEMA.PROCESSLIST;
+---------------+---------------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+---------------+---------------------+------+-----+---------+-------+
| ID | bigint(21) unsigned | NO | | 0 | |
| USER | varchar(16) | NO | | | |
| HOST | varchar(64) | NO | | | |
| DB | varchar(64) | YES | | NULL | |
| COMMAND | varchar(16) | NO | | | |
| TIME | int(7) | NO | | 0 | |
| STATE | varchar(64) | YES | | NULL | |
| INFO | longtext | YES | | NULL | |
| TIME_MS | bigint(21) | NO | | 0 | |
| ROWS_SENT | bigint(21) unsigned | NO | | 0 | |
| ROWS_EXAMINED | bigint(21) unsigned | NO | | 0 | |
+---------------+---------------------+------+-----+---------+-------+
11 rows in set (0.00 sec)
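A minimal sketch of such a snapshot script, using columns from the table above (TIME_MS and ROWS_EXAMINED as listed in the `desc` output): it assumes credentials come from `~/.my.cnf`, and the log path and 10-second threshold are illustrative choices.

```shell
#!/bin/sh
# Sketch: append long-running statements from the processlist to a flat
# file. Assumes ~/.my.cnf supplies connection credentials; the log path
# and thresholds here are assumptions, not prescriptions.
LOG=${LOG:-/tmp/long_queries.log}

take_snapshot() {
    # Truncate INFO so one row stays on one line in the log.
    mysql -N -e "SELECT NOW(), ID, USER, DB, TIME_MS, ROWS_EXAMINED,
                        LEFT(INFO, 120)
                 FROM INFORMATION_SCHEMA.PROCESSLIST
                 WHERE TIME > 10
                   AND COMMAND <> 'Sleep'" >> "$LOG" 2>/dev/null || true
}

# In production you would loop forever:
#   while :; do take_snapshot; sleep 10; done
take_snapshot
```

Over a busy hour this produces a compact record of which statements were still running after 10 seconds, at a fraction of the I/O cost of full query logging.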
In order not to save the same query many times, you may use a hash of the query text as a unique key.
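A crude shell sketch of that dedup key (sha256sum is standard coreutils): collapsing whitespace before hashing is a deliberate simplification here; a real fingerprinter such as pt-query-digest also strips literals so that queries differing only in their values collapse to one key.

```shell
#!/bin/sh
# Sketch: derive a stable dedup key for a query by collapsing runs of
# whitespace and hashing the result. The normalization is intentionally
# crude; it only makes spacing/line-break variants hash identically.
normalize_key() {
    printf '%s' "$1" | tr -s '[:space:]' ' ' | sha256sum | cut -d' ' -f1
}

k1=$(normalize_key "SELECT *   FROM t WHERE id = 5")
k2=$(normalize_key "SELECT * FROM t
WHERE id = 5")

# Both spellings of the same statement collapse to one key:
[ "$k1" = "$k2" ] && echo "same key: $k1"
```

Storing snapshots keyed on this hash (e.g. `INSERT IGNORE` into a table with the hash as a unique index) keeps one row per distinct statement.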