No, that's not a normal restore time, unless you're running MySQL on a 15-year-old computer or writing the database to a shared volume over a very slow network. I can import a data dump of about that size in about 45 minutes, even on an x-small EC2 instance.
The error about setting variables to NULL appears to be a limitation of BigDump. It's mentioned in the BigDump FAQ. I have never seen those errors from restoring a dump file with the command-line client.
So here are some recommendations:
Make sure your local MySQL data directory is on a locally-attached drive -- not a network drive.
Use the mysql command-line client, not phpMyAdmin or BigDump:

mysql> source mydump.sql
Dump files are mostly a long list of INSERT statements. You can read Speed of INSERT Statements for tips on speeding up INSERTs; be sure to read the sub-pages it links to.
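To illustrate why batching matters (the table and column names here are just placeholders), compare one INSERT per row with multiple rows per INSERT:

```sql
-- One row per statement: one parse and one round-trip per row (slow)
INSERT INTO mytable (id, name) VALUES (1, 'a');
INSERT INTO mytable (id, name) VALUES (2, 'b');

-- Multiple rows per statement: far fewer statements to parse and execute
INSERT INTO mytable (id, name) VALUES
  (1, 'a'),
  (2, 'b'),
  (3, 'c');
```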
For example, when you export the database, check the radio button for "insert multiple rows in every INSERT statement" (this is incompatible with BigDump, but better for performance when you use source in the mysql client).

Durability settings are recommended for production use, but they come with performance penalties. It sounds like you're just trying to get a development instance running, so reducing durability may be worthwhile, at least while you do your import. A good summary of reducing durability is found in MySQL Community Manager Morgan Tocker's blog: Reducing MySQL durability for testing.
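As a minimal sketch of the kind of settings that post covers (values shown are for a throwaway development instance, not production):

```sql
-- Flush the InnoDB log to disk about once per second instead of at every commit
SET GLOBAL innodb_flush_log_at_trx_commit = 2;

-- Let the OS decide when to sync the binary log (if binary logging is enabled)
SET GLOBAL sync_binlog = 0;
```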
Re your new questions and errors:
A lot of people report similar errors when importing a large dump file created by phpMyAdmin or Drupal or other tools.
The most likely cause is that you have some data in the dump file that is larger than max_allowed_packet. This MySQL config setting is the largest size for an individual SQL statement or an individual row of data. When you exceed this in an individual SQL statement, the server aborts that SQL statement and closes your connection. The mysql client tries to reconnect automatically and resume sourcing the dump file, but there are two side-effects:
- Some of your rows of data failed to load.
- The session variables that preserve @time_zone and other settings during the import are lost, because they are scoped to the session. When the reconnect happens, you get a new session.
The fix is to increase your max_allowed_packet. The default is 4MB on MySQL 5.6, and only 1MB on earlier versions. You can find out your current value for this config setting:
mysql> SELECT @@max_allowed_packet;
+----------------------+
| @@max_allowed_packet |
+----------------------+
|              4194304 |
+----------------------+
You can increase it as high as 1GB:
mysql> set global max_allowed_packet = 1024*1024*1024;
Then try the import again:
mysql> source mydump.sql
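Note that SET GLOBAL does not change the value for sessions that are already connected, so exit and reconnect the mysql client before running source. To make the setting survive a server restart, you can also put it in your my.cnf (section shown is the usual default):

```ini
[mysqld]
max_allowed_packet = 1G
```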
Also, if you're measuring the size of the tables with a command like SHOW TABLE STATUS or a query against INFORMATION_SCHEMA.TABLES, you should know that the TABLE_ROWS count is only an estimate -- it can be off by +/- 10% (or more) of the actual number of rows in the table. The reported number is even likely to change from time to time, even if you haven't changed any data in the table. The only true way to count rows in a table is SELECT COUNT(*) FROM SomeTable.
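For example (the schema and table names are just placeholders), you can compare the two numbers side by side:

```sql
-- Estimated row count (fast, but approximate for InnoDB)
SELECT TABLE_ROWS FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = 'mydb' AND TABLE_NAME = 'SomeTable';

-- Exact row count (accurate, but scans an index)
SELECT COUNT(*) FROM SomeTable;
```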