MySQL master-master replication: automatically flush old logs
16-10-2019
Question
I have a mysql-5.1.41 master-master setup running on Ubuntu machines, and below is the configuration of one master:
skip-host-cache
skip-name-resolve
event_scheduler = ON
max_connections = 500
max_connect_errors = 1000
server-id = 10
replicate-same-server-id = 0
auto-increment-increment = 10
auto-increment-offset = 1
master-host = 192.168.1.106
master-user = repli
master-password = secret
master-connect-retry = 60
binlog-format = MIXED
binlog-ignore-db = information_schema
binlog-ignore-db = lb1
log-bin = /var/log/mysql/mysql-bin
log-bin-index = /var/log/mysql/bin-log.index
log-slave-updates
report-host = 192.168.1.105
replicate-ignore-db = information_schema
replicate-ignore-db = lb2
relay-log = /var/log/mysql/relay.log
relay-log-index = /var/log/mysql/relay-log.index
The log files seem to grow every day. Does anybody know what parameter I should add to the configuration above to automatically remove/flush old, unnecessary logs, with a value conservative enough to cause no damage to replication?
Solution
To rotate out binary logs more than 7 days old, add this to my.cnf:
[mysqld]
expire-logs-days=7
then restart mysqld.
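If you would rather not restart right away, expire_logs_days is a dynamic variable, so (as a sketch) you can also set it at runtime; this does not persist across restarts, so keep the my.cnf entry as well:
mysql> SET GLOBAL expire_logs_days = 7;
mysql> FLUSH LOGS;
FLUSH LOGS rotates the binary log, and expired logs are purged at rotation (and at server startup).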
To perform this manually, run this command:
mysql> PURGE BINARY LOGS BEFORE DATE(NOW() - INTERVAL 7 DAY) + INTERVAL 0 SECOND;
This will erase binary logs back to midnight 7 days ago.
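To verify which binary logs remain after the purge, you can list them:
mysql> SHOW BINARY LOGS;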
WARNING !!!
Do not erase binary logs at the OS level (for example, with rm), because doing so disturbs mysqld's internal bookkeeping for rotating binary logs.
UPDATE 2012-01-24 13:20 EDT
Make sure you do not purge binary logs that replication is using.
Here is an example of how you can safely erase binary logs manually.
For a Master-Master setup M1 and M2
Go to M2 and run this:
mysql> SHOW SLAVE STATUS\G
You will see two lines that show binary log names:
- Master_Log_File : the current binary log on M1 that was last read on M2.
- Relay_Master_Log_File : the current binary log on M1 that was last executed on M2.
Let's suppose that M2 has this:
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 10.64.176.205
Master_User: replicant
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000131
Read_Master_Log_Pos: 570079419
Relay_Log_File: relay-bin.003496
Relay_Log_Pos: 251
Relay_Master_Log_File: mysql-bin.000131
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 570079419
Relay_Log_Space: 545
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
1 row in set (0.00 sec)
mysql>
On M2, Relay_Master_Log_File says mysql-bin.000131. So on M1, you can run this:
mysql> PURGE BINARY LOGS TO 'mysql-bin.000131';
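Note that PURGE BINARY LOGS TO deletes logs up to, but not including, the named file, so mysql-bin.000131 itself (which M2 is still reading) is kept. If M1 had additional slaves, you would repeat the SHOW SLAVE STATUS\G check on each of them and purge only up to the oldest Relay_Master_Log_File reported among them; for example, if a hypothetical second slave reported mysql-bin.000129, the safe command would be:
mysql> PURGE BINARY LOGS TO 'mysql-bin.000129';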
Since Master_Log_File and Relay_Master_Log_File are the same, why not use Master_Log_File?
REASON #1 : SQL Error in the Slave
When an SQL Error happens in M2 on the SQL Thread, Replication stops processing SQL from the Relay Logs. The IO Thread continues downloading new SQL entries from M1 and appending to M2's Relay Logs.
REASON #2 : Long Running SQL Query in the Slave
If a long-running query comes along, such as an UPDATE that touches every row in a large table and takes 5 minutes, then during those 5 minutes the IO Thread continues downloading new SQL entries from M1 and appending them to M2's Relay Logs.
Given both reasons, there are occasions where Relay_Master_Log_File and Master_Log_File are different. In those instances, always use Relay_Master_Log_File. In light of this, there is another reason:
REASON #3 : Log Corruption in Slave or Master
There are rare times when bad network transmission will produce corrupt relay logs. When you run SHOW SLAVE STATUS\G, an error message will indicate that the binary log may be corrupt. You should start first with the relay logs.
To clear them out and start with a fresh set, run SHOW SLAVE STATUS\G, note the Relay_Master_Log_File (RMLF) and the Exec_Master_Log_Pos (EMLP), and run this on M2:
STOP SLAVE;
CHANGE MASTER TO master_log_file='RMLF',master_log_pos=EMLP;
START SLAVE;
The CHANGE MASTER TO command erases all relay logs and starts downloading from scratch.
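Using the sample SHOW SLAVE STATUS output from above, the concrete commands on M2 would be:
mysql> STOP SLAVE;
mysql> CHANGE MASTER TO master_log_file='mysql-bin.000131', master_log_pos=570079419;
mysql> START SLAVE;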
Once you run START SLAVE;, if you get the corrupt-log error message again, then network transmission was not the issue; the binary log on M1 may just be corrupt after all.
You must go to M1, make a copy of the suspected log, run mysqlbinlog against that copied binary log, and redirect the output to a text file. Read the text file. If it contains gibberish or indiscernible characters, then you must perform a full resync of the slave.
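As a sketch (the /tmp paths here are hypothetical), the check on M1 would look like this:
cp /var/log/mysql/mysql-bin.000131 /tmp/suspect-bin.000131
mysqlbinlog /tmp/suspect-bin.000131 > /tmp/suspect-bin.txt
less /tmp/suspect-bin.txt
mysqlbinlog will also exit with an error if it cannot parse the file, which is another sign of corruption.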