Question

I haven't been able to find anything, so I'm sure the answer is "no, what are you, a noob?", but I feel the need to ask anyway :)

Is there a simple utility that would allow a new slave server to be brought up on an existing master without the need to perform a dump on the master?

I've set up slaves in the traditional manner several times so I'm not without knowledge of the process, but I'm curious if anyone has felt the same as I do, that there must be an easier way.

I would expect such a utility to use something similar to Percona's XtraBackup, but instead of outputting to a dump file, it would stream the output directly to the slave, and then automatically enable the slave relationship after the feed has ended.

Is this realistic?

I noticed the streaming option in XtraBackup, but couldn't find examples beyond its ability to save a tar on another server, which isn't what I was looking for. I want a solution that streams directly into the destination DB without having to work with a dump file at all. This would be particularly handy when working with large datasets.


Solution

Something close to what you're asking for can be done with mysqldump: you can pipe its output directly into the slave.

# mysqldump -h master -A --master-data --single-transaction --quick | mysql -h slave
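
If the dump has to cross a slow link, a compressed variant of the same pipe can help. A sketch, assuming it is run on the master with ssh access to the slave (the user and hostname are placeholders):

# mysqldump -A --master-data --single-transaction --quick | gzip -c | ssh user@slave "gunzip -c | mysql"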

With --master-data, mysqldump adds a CHANGE MASTER TO statement to the output. However, it doesn't include MASTER_HOST, MASTER_USER, or MASTER_PASSWORD; those have to be configured separately:

# mysql -h slave -e "CHANGE MASTER TO MASTER_HOST='master',
  MASTER_USER='repl', MASTER_PASSWORD='replpass'; START SLAVE;"
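
After START SLAVE, it's worth verifying that both replication threads are running and that the slave is catching up (same hostname conventions as above):

# mysql -h slave -e "SHOW SLAVE STATUS\G" | egrep 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master'

Both threads should report Yes, and Seconds_Behind_Master should trend toward 0 as the slave replays the master's binlog.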

Additional notes:

  1. You can run the above command on the master, the slave, or any other host. Note the -h option in mysqldump and mysql. See mysql Options.
  2. The master stays online and available while the dump is being taken.
  3. There will not be any binlog ID issues: no matter where you run mysqldump -h master, the dump comes from the master, so the binlog coordinates will point to the binlog on the master (see the example after this list).
  4. The defaults would cause the tables to be locked during the dump; specifying --single-transaction negates the requirement of a table lock by taking a consistent snapshot of InnoDB tables instead.
  5. Using --quick will ensure large tables are read one row at a time rather than buffering the entire row set in memory, which is more conducive to this type of streaming dump.
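
For reference, the statement that --master-data embeds near the top of the dump looks like the following (the log file name and position here are placeholders; yours will differ):

CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000123', MASTER_LOG_POS=45678;

Because --single-transaction captures these coordinates consistently with the dumped data, the slave starts replicating from exactly the right point.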

OTHER TIPS

XtraBackup supports streaming; see https://www.percona.com/doc/percona-xtrabackup/2.3/howtos/recipes_ibkx_stream.html

But you will need to modify one of the examples (like innobackupex --stream=tar ./ | ssh user@desthost "cat - > /data/backups/backup.tar") so that the backup is prepared with --apply-log and then restored with --copy-back after the streaming is finished.
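
A rough sketch of that workflow, assuming the stream is unpacked straight into a staging directory on the slave (user, hostname, and paths are placeholders; the -i flag is required for tar to unpack an innobackupex stream):

# innobackupex --stream=tar ./ | ssh user@slave "tar -ixf - -C /data/backups"
# ssh user@slave "innobackupex --apply-log /data/backups"
# ssh user@slave "innobackupex --copy-back /data/backups"

Note that --copy-back expects the slave's datadir to be empty and mysqld to be stopped. After copying, fix the ownership of the datadir (e.g. chown -R mysql:mysql), start mysqld, and use the binlog coordinates recorded in xtrabackup_binlog_info for the CHANGE MASTER TO statement.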
