Question

What are the expected production requirements for memory/CPU for MaxScale?

We have a server configured with 4 GB of memory to run MaxScale as a read/write query router, with a replication manager running on the same server.

I have found that the problem occurs when doing a large number of inserts in a single transaction, in the millions of rows: a 5 GB file with 10 million rows loaded using LOAD DATA INFILE. Running the same LOAD DATA INFILE against the backend server directly works without any issues.
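For context, the load goes through the MaxScale listener with something along these lines; the file path, database, table, user, and delimiters are placeholders rather than our exact command, and the address is the MaxScale listener shown in the logs below.

# Placeholder paths, credentials and table; 10.56.229.60:3306 is the MaxScale listener.
mysql --local-infile=1 -h 10.56.229.60 -P 3306 -u app_user -p mydb \
    -e "LOAD DATA LOCAL INFILE '/data/import.csv' INTO TABLE big_table
        FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';"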

This is the max_allowed_packet on the backend server.

MariaDB [(none)]> SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet'\G
*************************** 1. row ***************************
Variable_name: max_allowed_packet
        Value: 16777216

The backend server has 32 GB of RAM, with no services currently using it, as we are still testing it out and tuning the configuration.

The MaxScale server runs out of memory; this large set of inserts causes MaxScale to crash.

I see the following errors on the client side:

ERROR 2013 (HY000) at line 1: Lost connection to MySQL server during query  
ERROR 2013 (HY000) at line 1: Lost connection to MySQL server at 'reading initial communication packet', system error: 111  
ERROR 2013 (HY000) at line 1: Lost connection to MySQL server at 'reading initial communication packet', system error: 111  

And on the server side I see the following in the MaxScale logs:

2017-10-16 19:06:32   notice : Started MaxScale log flusher.
2017-10-16 19:06:32   notice : MaxScale started with 7 server threads.
2017-10-16 19:15:14   notice : Waiting for housekeeper to shut down.
2017-10-16 19:15:15   notice : Finished MaxScale log flusher.
2017-10-16 19:15:15   notice : Housekeeper shutting down.
2017-10-16 19:15:15   notice : Housekeeper has shut down.
2017-10-16 19:15:15   notice : MaxScale received signal SIGTERM. Exiting.
2017-10-16 19:15:15   notice : MaxScale is shutting down.
2017-10-16 19:15:15   notice : MaxScale shutdown completed.
2017-10-16 19:15:15   MariaDB MaxScale is shut down.
----------------------------------------------------


MariaDB MaxScale  /var/log/maxscale/maxscale.log  Mon Oct 16 19:15:17 2017
----------------------------------------------------------------------------
2017-10-16 19:15:17   notice : Working directory: /var/log/maxscale
2017-10-16 19:15:17   notice : MariaDB MaxScale 2.1.5 started
2017-10-16 19:15:17   notice : MaxScale is running in process 21067
2017-10-16 19:15:17   notice : Configuration file: /etc/maxscale.cnf
2017-10-16 19:15:17   notice : Log directory: /var/log/maxscale
2017-10-16 19:15:17   notice : Data directory: /var/cache/maxscale
2017-10-16 19:15:17   notice : Module directory: /usr/lib64/maxscale
2017-10-16 19:15:17   notice : Service cache: /var/cache/maxscale
2017-10-16 19:15:17   notice : Loading /etc/maxscale.cnf.
2017-10-16 19:15:17   notice : /etc/maxscale.cnf.d does not exist, not reading.
2017-10-16 19:15:17   notice : Loaded module ccrfilter: V1.1.0 from /usr/lib64/maxscale/libccrfilter.so
2017-10-16 19:15:17   notice : [cli] Initialise CLI router module
2017-10-16 19:15:17   notice : Loaded module cli: V1.0.0 from /usr/lib64/maxscale/libcli.so
2017-10-16 19:15:17   notice : [readwritesplit] Initializing statement-based read/write split router module.
2017-10-16 19:15:17   notice : Loaded module readwritesplit: V1.1.0 from /usr/lib64/maxscale/libreadwritesplit.so
2017-10-16 19:15:17   notice : [mysqlmon] Initialise the MySQL Monitor module.
2017-10-16 19:15:17   notice : Loaded module mysqlmon: V1.5.0 from /usr/lib64/maxscale/libmysqlmon.so
2017-10-16 19:15:17   notice : Loaded module MySQLBackend: V2.0.0 from /usr/lib64/maxscale/libMySQLBackend.so
2017-10-16 19:15:17   notice : Loaded module MySQLBackendAuth: V1.0.0 from /usr/lib64/maxscale/libMySQLBackendAuth.so
2017-10-16 19:15:17   notice : Loaded module maxscaled: V2.0.0 from /usr/lib64/maxscale/libmaxscaled.so
2017-10-16 19:15:17   notice : Loaded module MaxAdminAuth: V2.1.0 from /usr/lib64/maxscale/libMaxAdminAuth.so
2017-10-16 19:15:17   notice : Loaded module MySQLClient: V1.1.0 from /usr/lib64/maxscale/libMySQLClient.so
2017-10-16 19:15:17   notice : Loaded module MySQLAuth: V1.1.0 from /usr/lib64/maxscale/libMySQLAuth.so
2017-10-16 19:15:17   notice : No query classifier specified, using default 'qc_sqlite'.
2017-10-16 19:15:17   notice : Loaded module qc_sqlite: V1.0.0 from /usr/lib64/maxscale/libqc_sqlite.so
2017-10-16 19:15:17   notice : Encrypted password file /var/cache/maxscale/.secrets can't be accessed (No such file or directory). Password encryption is not used.
2017-10-16 19:15:17   notice : [MySQLAuth] [Read-Write_Service] Loaded 227 MySQL users for listener Read-Write_Listener.
2017-10-16 19:15:17   notice : Listening for connections at [10.56.229.60]:3306 with protocol MySQL
2017-10-16 19:15:17   notice : Listening for connections at [::]:6603 with protocol MaxScale Admin
2017-10-16 19:15:17   notice : Started MaxScale log flusher.
2017-10-16 19:15:17   notice : MaxScale started with 7 server threads.
2017-10-16 19:15:17   notice : Server changed state: tmsdb-isa-01[10.56.228.64:3306]: new_master. [Running] -> [Master, Running]
2017-10-16 19:15:17   notice : Server changed state: tmsdb-isa-02[10.56.228.65:3306]: new_slave. [Running] -> [Slave, Running]
2017-10-16 19:15:17   notice : Server changed state: tmsdb-rp-01[10.21.228.65:3306]: new_slave. [Running] -> [Slave, Running]
2017-10-16 19:15:17   notice : [mysqlmon] A Master Server is now available: 10.56.228.64:3306

I ran vmstat and noticed that the server runs out of memory:

[root@maxscale-isa-02 ~]# vmstat 
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0      0 1485828      0 356652    0    0     1     0    0    0  0  0 100  0  0
[root@maxscale-isa-02 ~]# vmstat 
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0      0 368000      0 356496    0    0     1     0    0    0  0  0 100  0  0
[root@maxscale-isa-02 ~]# vmstat 
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 2  0      0 173388      0 356512    0    0     1     0    0    0  0  0 100  0  0
[root@maxscale-isa-02 ~]# vmstat 
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  1      0 103660      0 308088    0    0     1     0    0    0  0  0 100  0  0
[root@maxscale-isa-02 ~]# vmstat 
-bash: fork: Cannot allocate memory
[root@maxscale-isa-02 ~]# vmstat 
-bash: fork: Cannot allocate memory
[root@maxscale-isa-02 ~]# vmstat 
-bash: fork: Cannot allocate memory
[root@maxscale-isa-02 ~]# vmstat 
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0      0 2478676      0 323508    0    0     1     0    0    0  0  0 100  0  0

Edit

We added an additional 4 GB of RAM to the MaxScale server, going from 4 GB to 8 GB total.

We can now see that it will use up to 5 GB of RAM for the import process without crashing.
This seems to indicate that, when using the ReadWriteSplit router, the server running MaxScale will need as much memory as all of the services running queries through it require at peak memory usage.

We are going to test the ReadConnRoute router to see if we can lower the memory requirements.
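For that test, a minimal maxscale.cnf sketch of a readconnroute service and listener might look like the following; the section names, port, and credentials are placeholders, and the server names are the ones from the monitor output above.

# Sketch only: section names, port and credentials are placeholders.
[ReadConn-Service]
type=service
router=readconnroute
router_options=master
servers=tmsdb-isa-01,tmsdb-isa-02,tmsdb-rp-01
user=maxscale
password=maxscale_pw

[ReadConn-Listener]
type=listener
service=ReadConn-Service
protocol=MySQLClient
port=4008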


Solution

Update:

The amount of memory MaxScale uses for data buffering can be limited with the writeq_high_water and writeq_low_water parameters. This is the recommended way of dealing with excessive memory use in MaxScale.
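A minimal sketch of how those parameters would go into the [maxscale] section of maxscale.cnf, assuming a MaxScale version that supports them; the 16 MiB / 8 MiB values are only illustrative.

# Illustrative values, in bytes.
[maxscale]
writeq_high_water=16777216   # throttle reads from the sending side above ~16 MiB buffered
writeq_low_water=8388608     # resume reading once the buffer drains below ~8 MiB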


MaxScale should stream the LOAD DATA LOCAL INFILE directly to the server without buffering it.

Because MaxScale uses non-blocking I/O, some buffering can occur if the client-side network has higher throughput than the backend-side network. If this happens, MaxScale may be forced to buffer the data until the network buffers on the backend side are emptied.

I did a quick test with a 1.5 GB CSV file and a VM with 1 GB of memory, running MaxScale with the readconnroute router. Loading the file from the same machine caused peak memory usage of around 90% for the MaxScale process. This leads me to believe that this is either a bug in MaxScale or an inherent limitation of the way MaxScale buffers data.

I would recommend opening a bug report on the MariaDB Jira under the MaxScale project to track this issue: https://jira.mariadb.org/browse/MXS

For the time being, I would say that adding more memory seems like an acceptable workaround for this.

OTHER TIPS

When your server had only 4 GB of RAM and your LOAD DATA INFILE had 5 GB of data, running out of memory is a reasonable outcome. Not very nice, but reasonable. You may want to implement pre-processing to split your input into multiple smaller files, as sketched below, or submit an enhancement request to MaxScale.
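A minimal sketch of such pre-processing, assuming a newline-delimited CSV input and placeholder paths, credentials, and table names (the address is the MaxScale listener from the question's logs):

# Split the 10-million-row file into ~1-million-row chunks so that no single
# LOAD DATA statement forces MaxScale to buffer 5 GB at once.
split -l 1000000 /data/import.csv /data/import_chunk_
for chunk in /data/import_chunk_*; do
    mysql --local-infile=1 -h 10.56.229.60 -P 3306 -u app_user -p mydb \
        -e "LOAD DATA LOCAL INFILE '$chunk' INTO TABLE big_table
            FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';"
done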
