MySQL slow log configuration
22-10-2019
Question
Is there a way to make MySQL slow logs start a new log file every day? At the moment it is just one large file, and I have to grep its lines for each day. It would be much more convenient to have a separate file for each day's slow log.
Do I have to configure something in my.cnf, or use some Linux feature?
Solution
Everyone is used to this one: the good old text file.
Just run the following to flush the slow log every day.
STEP 01) Turn off the slow query log
SET GLOBAL slow_query_log = 'OFF';
STEP 02) Copy the text file
gzip < slow-query.log > /logs/slow-query-`date +"%Y%m%d-%H%M"`.log.gz
STEP 03) Truncate the file to zero bytes
echo -n > slow-query.log
STEP 04) Turn on the slow query log
SET GLOBAL slow_query_log = 'ON';
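The four steps above can be combined into one script that cron can run nightly. A minimal sketch, assuming the slow log path, the archive directory, and client credentials in ~/.my.cnf (all assumptions, adjust for your setup); a DRY_RUN switch lets you preview the commands without touching the server:

```shell
#!/bin/sh
# Sketch of the four steps above as one cron-able script.
# SLOW_LOG and ARCHIVE_DIR are assumed paths; mysql auth via ~/.my.cnf is assumed.
SLOW_LOG="${SLOW_LOG:-/var/lib/mysql/slow-query.log}"
ARCHIVE_DIR="${ARCHIVE_DIR:-/logs}"

run() {
    # With DRY_RUN=1, only print what would be executed.
    if [ "${DRY_RUN:-0}" = "1" ]; then echo "would run: $*"; else "$@"; fi
}

rotate_slow_log() {
    stamp=$(date +"%Y%m%d-%H%M")
    run mysql -e "SET GLOBAL slow_query_log = 'OFF';"                          # STEP 01
    run sh -c "gzip < '$SLOW_LOG' > '$ARCHIVE_DIR/slow-query-$stamp.log.gz'"  # STEP 02
    run sh -c ": > '$SLOW_LOG'"                                               # STEP 03
    run mysql -e "SET GLOBAL slow_query_log = 'ON';"                          # STEP 04
}
```

Call rotate_slow_log from a nightly cron job; run it once with DRY_RUN=1 first to check the commands.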
You could switch to log-output=TABLE and deal with the slow log as a table to query.
STEP 01) Convert mysql.slow_log from CSV to MyISAM
ALTER TABLE mysql.slow_log ENGINE = MyISAM;
STEP 02) Index the table
ALTER TABLE mysql.slow_log ADD INDEX (start_time);
STEP 03) Set the log output format to TABLE in my.cnf
[mysqld]
log-output=TABLE
STEP 04) Restart mysqld: service mysql restart
Once mysqld starts up, slow log entries are recorded in the MyISAM table mysql.slow_log.
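With the entries in a table, a day's worth of slow queries is one SELECT away. A hedged sketch (the column names come from the mysql.slow_log table; the exact query is an illustration, not a prescription):

```shell
# Hypothetical query against the table log: yesterday's slowest statements first.
SQL="SELECT start_time, query_time, LEFT(sql_text, 80) AS query
     FROM mysql.slow_log
     WHERE start_time >= CURDATE() - INTERVAL 1 DAY
     ORDER BY query_time DESC;"

# Pipe it through the client (auth via ~/.my.cnf is an assumption):
# mysql -e "$SQL"
```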
To rotate the entries out before midnight, you could run something like this:
SET GLOBAL slow_query_log = 'OFF';
SET @dt = NOW();
SET @dtstamp = DATE_FORMAT(@dt,'%Y%m%d_%H%i%S');
SET @midnight = DATE(@dt) + INTERVAL 0 SECOND;
ALTER TABLE mysql.slow_log RENAME mysql.slow_log_old;
CREATE TABLE mysql.slow_log LIKE mysql.slow_log_old;
INSERT INTO mysql.slow_log SELECT * FROM mysql.slow_log_old WHERE start_time >= @midnight;
DELETE FROM mysql.slow_log_old WHERE start_time >= @midnight;
SET @sql = CONCAT('ALTER TABLE mysql.slow_log_old RENAME mysql.slow_log_',@dtstamp);
PREPARE stmt FROM @sql; EXECUTE stmt; DEALLOCATE PREPARE stmt;
SET GLOBAL slow_query_log = 'ON';
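Because the rotation above relies on session variables and a prepared statement, all of the statements must run in a single client session. One way (a sketch, not the only way; auth via ~/.my.cnf is an assumption) is to emit them from a function and pipe the whole batch into one mysql invocation from cron:

```shell
#!/bin/sh
# Emits the rotation SQL above; pipe it into one mysql session from cron.
rotation_sql() {
    cat <<'SQL'
SET GLOBAL slow_query_log = 'OFF';
SET @dt = NOW();
SET @dtstamp = DATE_FORMAT(@dt,'%Y%m%d_%H%i%S');
SET @midnight = DATE(@dt) + INTERVAL 0 SECOND;
ALTER TABLE mysql.slow_log RENAME mysql.slow_log_old;
CREATE TABLE mysql.slow_log LIKE mysql.slow_log_old;
INSERT INTO mysql.slow_log SELECT * FROM mysql.slow_log_old WHERE start_time >= @midnight;
DELETE FROM mysql.slow_log_old WHERE start_time >= @midnight;
SET @sql = CONCAT('ALTER TABLE mysql.slow_log_old RENAME mysql.slow_log_',@dtstamp);
PREPARE stmt FROM @sql; EXECUTE stmt; DEALLOCATE PREPARE stmt;
SET GLOBAL slow_query_log = 'ON';
SQL
}

# e.g. from a cron job just before midnight:
#   rotation_sql | mysql
```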
and that's all for slow logs...
OTHER TIPS
Update
As Aaron points out, there is a chance that copy-and-truncate misses entries written between the copy and the truncate, so the safer method is to move the file and then FLUSH.
Original
This article has the basic principle for rotating the slow query log that I use. Basically, you copy the slow log to a new file, then truncate the contents of slow.log:
cp log/slow.log log/slow.log.`date +%M`; > log/slow.log
If you just move the slow log to a new file and create a new 'slow.log', it won't work: the moved file keeps the same inode, and mysqld still has it open. Moving the file and then issuing a FLUSH SLOW LOGS command would work, since that closes and reopens the file, but I find copy-and-truncate just as effective, and it doesn't require logging into mysql.
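The move-and-flush variant can be sketched like so; the path is an assumption, and FLUSH SLOW LOGS (MySQL 5.5+) is what makes mysqld close the old inode and open a fresh file. A DRY_RUN switch is included so the commands can be previewed:

```shell
#!/bin/sh
# Safer variant: rename keeps the old inode with the archived file,
# then FLUSH SLOW LOGS makes mysqld reopen a fresh slow log.
# SLOW_LOG path and ~/.my.cnf auth are assumptions.
SLOW_LOG="${SLOW_LOG:-/var/lib/mysql/slow.log}"

run() {
    # With DRY_RUN=1, only print what would be executed.
    if [ "${DRY_RUN:-0}" = "1" ]; then echo "would run: $*"; else "$@"; fi
}

move_and_flush() {
    run mv "$SLOW_LOG" "$SLOW_LOG.$(date +%F)"
    run mysql -e "FLUSH SLOW LOGS;"
}
```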
His article mentions using logrotate on Linux, but I just made a cron job that runs once a day at midnight to do this for me.
Also, to address the issue of replication and FLUSH LOGS:
FLUSH LOGS, FLUSH MASTER, FLUSH SLAVE, and FLUSH TABLES WITH READ LOCK (with or without a table list) are not written to the binary log in any case because they would cause problems if replicated to a slave. [src]
So no: since those statements are not written to the binary log, they will not interfere with replication. For your purposes I would specify FLUSH SLOW LOGS to only close and reopen the slow query log.
Use logrotate.d to rotate the files daily, keeping as many days as you want (or moving them off the host), then issue a flush-logs from the same script to get MySQL to start a new file. Having that in logrotate, set to daily, should get you what you want.
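That approach might look like the following logrotate.d fragment (a sketch: the file path, retention count, and credentials for the postrotate command are all assumptions):

```
# /etc/logrotate.d/mysql-slow (hypothetical path)
/var/lib/mysql/slow-query.log {
    daily
    rotate 14
    compress
    missingok
    notifempty
    create 640 mysql mysql
    postrotate
        # make mysqld close the rotated file and open a new one
        /usr/bin/mysql -e "FLUSH SLOW LOGS;"
    endscript
}
```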
I am hoping that someday they implement something similar to expire_logs_days for the debugging logs, such as the general log and the slow log.