For Amazon RDS, there is a metric called "Avg Replica Lag" in CloudWatch. Its meaning is quite clear: it stands for the average replication delay between the master and its slaves.

However, I'm not sure about the mechanism behind this metric, i.e. how does Amazon RDS detect such a lag? Taking MySQL as an example, does Amazon RDS have some specific method for detecting the delay, or does it simply use the "Seconds_Behind_Master" value reported by MySQL?
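
For reference, the metric can be pulled from CloudWatch like this; a minimal boto3 sketch where the region and the replica's DB instance identifier are placeholders:

    import boto3
    from datetime import datetime, timedelta

    # Fetch the per-replica "ReplicaLag" metric from the AWS/RDS namespace.
    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    response = cloudwatch.get_metric_statistics(
        Namespace="AWS/RDS",
        MetricName="ReplicaLag",
        Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "my-read-replica"}],
        StartTime=datetime.utcnow() - timedelta(minutes=30),
        EndTime=datetime.utcnow(),
        Period=60,                 # one datapoint per minute
        Statistics=["Average"],    # the "Avg" shown in the console
    )

    # Datapoints are returned unordered; sort by timestamp before printing.
    for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"], point["Average"], "seconds")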


Solution

CloudWatch's Replica Lag is the average of the "Seconds_Behind_Master" values returned when you run 'SHOW SLAVE STATUS' on each of your slaves.
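
As a quick check, you can read that same value from a replica directly; a minimal pymysql sketch where the endpoint and credentials are placeholders:

    import pymysql

    # Connect to the read replica (endpoint and credentials are placeholders).
    connection = pymysql.connect(
        host="my-read-replica.xxxx.us-east-1.rds.amazonaws.com",
        user="admin",
        password="secret",
    )

    with connection.cursor(pymysql.cursors.DictCursor) as cursor:
        # Seconds_Behind_Master is the per-replica lag MySQL itself reports.
        cursor.execute("SHOW SLAVE STATUS")
        status = cursor.fetchone()
        print(status["Seconds_Behind_Master"], "seconds behind master")

    connection.close()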
