Question

We have a 3 node MariaDB Galera cluster.

All nodes are identical and the config on each is identical. They have 64GB of RAM each.

The output of the following query is 28:

SELECT CEILING(Total_InnoDB_Bytes * 1.6 / POWER(1024,3)) AS RIBPS
FROM (SELECT SUM(data_length + index_length) AS Total_InnoDB_Bytes
      FROM information_schema.tables
      WHERE engine = 'InnoDB') A;
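
(For context, that query sums data_length + index_length over all InnoDB tables, adds roughly 60% headroom, and rounds up to whole GiB. A minimal companion sketch, using only the same standard information_schema columns, breaks the total down by schema to show what is driving the estimate:

-- Illustrative only: same total as above, split per schema
SELECT table_schema,
       ROUND(SUM(data_length + index_length) / POWER(1024,3), 2) AS innodb_gb
FROM information_schema.tables
WHERE engine = 'InnoDB'
GROUP BY table_schema
ORDER BY innodb_gb DESC;
)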

Our database is growing quite quickly and the machines are dedicated, so innodb_buffer_pool_size is now set to 56G. Originally it was set to 256M; we only increased it this morning (the cluster was set up just a couple of days ago).
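
As a minimal sketch of how each node's live value can be confirmed (and, assuming MariaDB 10.2.2 or later where innodb_buffer_pool_size is dynamic, resized without a full restart; older versions need the my.cnf change plus a restart, which is what we did):

-- Value the running server actually picked up, in bytes
SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';

-- Online resize, only on MariaDB 10.2.2+ (illustrative)
SET GLOBAL innodb_buffer_pool_size = 56 * 1024 * 1024 * 1024;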

My question is: why is the amount of RAM used on each machine so drastically different? See the graph below. When we changed innodb_buffer_pool_size to 56G (and did a rolling restart), they all jumped up by around the same amount (in absolute terms, not proportionally), but as you can see they are still clearly quite different. Can anybody shed some light on why?

Thanks!

[Graph: Physical Memory Used in DB Machines]
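
One way to compare the nodes directly, assuming the standard InnoDB status counters are available via information_schema.GLOBAL_STATUS (as they are in MariaDB), is to look at how much data each buffer pool is actually holding rather than at OS-level memory; this is a diagnostic sketch, not an explanation:

-- Run on each node and compare
SELECT ROUND(VARIABLE_VALUE / POWER(1024,3), 1) AS buffer_pool_data_gb
FROM information_schema.GLOBAL_STATUS
WHERE VARIABLE_NAME = 'Innodb_buffer_pool_bytes_data';

-- Fill ratio: pages_data vs pages_total
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages%';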

No correct solution
