How to get a correct dump using mysqldump and --single-transaction when DDL is used at the same time?

StackOverflow https://stackoverflow.com/questions/451404

  •  19-08-2019

I am new to MySQL and I am trying to figure out the best way to perform an online hot logical backup with mysqldump. This page suggests this command line:

mysqldump --single-transaction --flush-logs --master-data=2
          --all-databases > backup_sunday_1_PM.sql

But... if you read the documentation carefully, you find:

  

While a --single-transaction dump is in process, to ensure a valid dump file (correct table contents and binary log position), no other connection should use the following statements: ALTER TABLE, DROP TABLE, RENAME TABLE, TRUNCATE TABLE. A consistent read is not isolated from those statements, so use of them on a table to be dumped can cause the SELECT performed by mysqldump to retrieve the table contents to obtain incorrect contents or fail.

So, is there any way to prevent this possible dump-corruption scenario? For example, a command that could temporarily block those statements.

PS: The MySQL bug entry on this topic: http://bugs.mysql.com/bug.php?id=27850


Solution

Open a mysql command window and issue this command:

mysql> FLUSH TABLES WITH READ LOCK;

This locks all tables in all databases on this MySQL instance until you issue UNLOCK TABLES (or until the client connection that holds those read locks terminates).

To confirm this, you can open another command window and try an ALTER, DROP, RENAME, or TRUNCATE. Those commands will hang, waiting for the read lock to be released. Press Ctrl-C to abort the wait.

However, while the tables hold a read lock, you can still run a mysqldump backup.
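To sketch that manual workflow (the session labels and file name below are just placeholders, not part of the original answer), one session holds the global read lock while a second session runs the dump:

-- session 1: take and hold the global read lock
mysql> FLUSH TABLES WITH READ LOCK;

# session 2, in a separate shell: run the backup while the lock is held
shell> mysqldump --all-databases > backup_while_locked.sql

-- session 1: release the lock only after the dump has finished
mysql> UNLOCK TABLES;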

The FLUSH TABLES WITH READ LOCK command appears to be the same as using mysqldump's --lock-all-tables option. It's not entirely clear, but this documentation seems to support it:

  

Another use for UNLOCK TABLES is to release the global read lock acquired with the FLUSH TABLES WITH READ LOCK statement.

Both FLUSH TABLES WITH READ LOCK and --lock-all-tables use the phrase "global read lock," so it's likely that they do the same thing. Therefore, you should be able to use that mysqldump option and prevent concurrent ALTER, DROP, RENAME, and TRUNCATE.
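If you would rather let mysqldump take that lock itself, a minimal sketch of the equivalent invocation (the output file name is only an example) would be:

shell> mysqldump --lock-all-tables --all-databases > backup_locked.sql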


Re. your comment: here is what Guilhem Bichot wrote in the MySQL bug log you linked to:

  

Hello. --lock-all-tables calls FLUSH TABLES WITH READ LOCK. So it is expected to block ALTER, DROP, RENAME, or TRUNCATE (unless there is a bug, or I'm wrong). However, --lock-all-tables with --single-transaction cannot work (mysqldump throws an error message): because lock-all-tables locks all tables of the server against writes for the duration of the backup, whereas single-transaction is intended to let writes happen during the backup (by using a consistent-read SELECT in a transaction), they are incompatible in nature.

From this, it sounds like you cannot get concurrent access during a backup while also blocking ALTER, DROP, RENAME, and TRUNCATE.
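In other words, based on the answer above you have to choose between two mutually exclusive modes; a rough sketch of the trade-off (file names are placeholders):

# Option A: consistent InnoDB snapshot, writes are allowed, but concurrent DDL can corrupt the dump
shell> mysqldump --single-transaction --all-databases > snapshot_dump.sql

# Option B: global read lock, DDL is blocked, but so are all writes for the whole dump
shell> mysqldump --lock-all-tables --all-databases > locked_dump.sql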

Other tips

I thought the same thing when reading that part of the documentation; however, I found more information:

4.5.4. mysqldump — A Database Backup Program http://dev.mysql.com/doc/en/mysqldump.html

For InnoDB tables, mysqldump provides a way of making an online backup:

shell> mysqldump --all-databases --single-transaction > all_databases.sql

This backup acquires a global read lock on all tables (using FLUSH TABLES WITH READ LOCK) at the beginning of the dump. As soon as this lock has been acquired, the binary log coordinates are read and the lock is released. If long updating statements are running when the FLUSH statement is issued, the MySQL server may get stalled until those statements finish. After that, the dump becomes lock free and does not disturb reads and writes on the tables. If the update statements that the MySQL server receives are short (in terms of execution time), the initial lock period should not be noticeable, even with many updates.

There is a conflict between the --opt and --single-transaction options:

--opt

This option is shorthand. It is the same as specifying --add-drop-table --add-locks --create-options --disable-keys --extended-insert --lock-tables --quick --set-charset. It should give you a fast dump operation and produce a dump file that can be reloaded into a MySQL server quickly.

The --opt option is enabled by default. Use --skip-opt to disable it.

If I understand your question correctly, you want the actual data and the DDL (Data Definition Language) together, because if you only wanted the DDL you would use --no-data. More information about this can be found at:

http://dev.mysql.com/doc/workbench/en/wb-reverse-engineer-create-script.html

Use the --databases option with mysqldump if you wish to create the database as well as all its objects. If there is no CREATE DATABASE db_name statement in your script file, you must import the database objects into an existing schema or, if there is no schema, a new unnamed schema is created.
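To illustrate those two points (the database and file names below are placeholders): --no-data dumps only the DDL, while --databases makes mysqldump include the CREATE DATABASE and USE statements:

# schema (DDL) only, no row data
shell> mysqldump --no-data db_name > schema_only.sql

# data plus DDL, including CREATE DATABASE / USE statements
shell> mysqldump --databases db_name > db_with_create.sql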

As suggested by The Definitive Guide to MySQL 5 by Michael Kofler, I would suggest the following options:

--skip-opt
--single-transaction
--add-drop-table
--create-options
--quick
--extended-insert
--set-charset
--disable-keys

Additionally, --order-by-primary is not mentioned. Also, if you are using the --databases option, you should also use --add-drop-database, especially if combined with this answer. If you are backing up databases that are connected over different networks, you may need to use the --compress option.

So a mysqldump command (without using the --compress, --databases, or --add-drop-database options) would be:

mysqldump --skip-opt --order-by-primary --single-transaction --add-drop-table --create-options --quick --extended-insert --set-charset -h db_host -u username --password="myPassword" db_name | mysql --host=other_host db_name
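If you want a dump file instead of piping straight into another server, the same options can simply redirect to a file (the gzip compression and file name here are my own suggestion, not from the book):

shell> mysqldump --skip-opt --order-by-primary --single-transaction --add-drop-table \
           --create-options --quick --extended-insert --set-charset \
           -h db_host -u username --password="myPassword" db_name | gzip > db_name.sql.gz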

I removed the reference to --disable-keys that was given in the book, as it is not effective with InnoDB as I understand it. The MySQL manual states:

For each table, surround the INSERT statements with /*!40000 ALTER TABLE tbl_name DISABLE KEYS */; and /*!40000 ALTER TABLE tbl_name ENABLE KEYS */; statements. This makes loading the dump file faster because the indexes are created after all rows are inserted. This option is effective only for nonunique indexes of MyISAM tables.

I also found this bug report, http://bugs.mysql.com/bug.php?id=64309, which has comments at the bottom from Paul DuBois, who has also written a few books; I have no other reference on this specific issue besides the comments found in that bug report.

Now, to create the "ultimate backup", I would suggest considering something along the lines of this shell script:

  1. https://github.com/red-ant/mysql-svn-backup/blob/master/mysql-svn.sh

You can't get a consistent dump without locking tables. I just do mine at a time of day when the two minutes the dump takes go unnoticed.

One solution is to do replication, then back up the slave instead of the master. If the slave misses writes during the backup, it will just catch up later. This will also leave you with a live backup server in case the master fails. Which is nice.
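A minimal sketch of that replica-based approach (host and file names are placeholders; pausing the SQL thread is optional, but it gives the dump a fixed point to read from):

# pause the replication applier so the replica's data stands still during the dump
shell> mysql -h replica_host -u username -p -e "STOP SLAVE SQL_THREAD;"

# dump from the replica; the master keeps accepting writes the whole time
shell> mysqldump -h replica_host -u username -p --single-transaction --all-databases > replica_backup.sql

# resume replication; the replica catches up on the writes it missed
shell> mysql -h replica_host -u username -p -e "START SLAVE;"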

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow