Question

I've been working on this for days, pretty frustrated.

Have a Magento database, about 1 GB with 3 million records. I need to make a backup and import it onto my local machine. The local machine is running WAMP on a brand new gaming rig (specs: 16 GB RAM). Exported the db fine using phpMyAdmin into a .sql file.

Saw BigDump was highly recommended for importing a large db. Also found a link recommending that the export include column names in every INSERT statement, which I did ( http://www.atomicsmash.co.uk/blog/import-large-sql-databases/ ).

Start importing. Hours go by (around 3-4). Get an error: Page unavailable, or wrong url! More searching; tried the suggestions (mostly from here: http://www.sitehostingtalk.com/f16/bigdump-error-page-unavailable-wrong-url-56939/ ) to drop $linespersession to 500 and add a $delaypersession of 300. Ran it again, more hours, same error.

I then re-exported the db into two .sql dumps (one holding all the large tables with over 100K records), repeated the process, and got the same error. So I quit using BigDump.

Next up was the command line! Using Console2 I ran source mydump.sql. 30 hours go by. Then an error:

ERROR 1231 (42000): Variable 'character_set_client' can't be set to the value of 'NULL'
ERROR 1231 (42000): Variable 'collation_connection' can't be set to the value of 'NULL'

More searching turned up really varied explanations. I tried with the split files from before, ran it again, same error.

I can't figure out what would cause both of these errors. I know I got the same error on two different exports. I know there are a few tables that have between 1 and 300,000 rows. I also don't think 30 hours is normal (on a screaming fast machine) for an import of only 1 GB, but I could be wrong.

What other options should I try? Is it the format of the export? Should it be compressed or not? Is there a faster way of importing? Any way of making this go faster?

Thanks!

EDIT

Thanks to some searching and @Bill Karwin's suggestion, here's where I'm at:

  • Grabbed a new mysqldump using ssh and downloaded it.
  • Imported the database 10 different times. Each time was MUCH faster (5-10 mins) so that fixed the ridiculous import time.
    • used command line, >source dump.sql
  • However, each import from that same dump.sql file has a different number of records. Of the 3 million records they differ by between 600 and 200,000 records. One of the imports has 12,000 MORE records than the original. I've tried with and without setting the foreign_key_checks = 0; I tried running the same query multiple times with exactly the same settings. Every time the number of rows are different.

I'm also getting these errors now:

ERROR 1231 (42000): Variable 'time_zone' can't be set to the value of 'NULL'
ERROR 1231 (42000): Variable 'sql_mode' can't be set to the value of 'NULL'
ERROR 1231 (42000): Variable 'foreign_key_checks' can't be set to the value of 'NULL'
ERROR 1231 (42000): Variable 'unique_checks' can't be set to the value of 'NULL'
ERROR 1231 (42000): Variable 'character_set_client' can't be set to the value of 'NULL'
ERROR 1231 (42000): Variable 'collation_connection' can't be set to the value of 'NULL'
ERROR 1231 (42000): Variable 'sql_notes' can't be set to the value of 'NULL'

Doesn't seem like these are that important from what I read. There are other warnings but I can't seem to determine what they are.

Any ideas?

EDIT: Solution removed here and listed below as a separate post

References:

https://serverfault.com/questions/244725/how-to-is-mysqls-net-buffer-length-config-viewed-and-reset

http://dev.mysql.com/doc/refman/5.1/en/server-system-variables.html#sysvar_net_buffer_length

Make phpMyAdmin show exact number of records for InnoDB tables?

Export a large MySQL table as multiple smaller files

https://dba.stackexchange.com/questions/31197/why-max-allowed-packet-is-larger-in-mysqldump-than-mysqld-in-my-cnf


Solution

No, that's not a normal restore time, unless you're running MySQL on a 15 year old computer or you're trying to write the database to a shared volume over a very slow network. I can import a data dump of about that size in about 45 minutes, even on an x-small EC2 instance.

The error about setting variables to NULL appears to be a limitation of BigDump. It's mentioned in the BigDump FAQ. I have never seen those errors from restoring a dump file with the command-line client.

So here are some recommendations:

  • Make sure your local MySQL data directory is on a locally-attached drive -- not a network drive.

  • Use the mysql command-line client, not phpMyAdmin or BigDump.

    mysql> source mydump.sql
    
  • Dump files are mostly a long list of INSERT statements; you can read Speed of INSERT Statements for tips on speeding up INSERT. Be sure to read the sub-pages it links to.

  • For example, when you export the database, check the radio button for "insert multiple rows in every INSERT statement" (this is incompatible with BigDump, but better for performance when you use source in the mysql client). See the example just after this list.

  • Durability settings are recommended for production use, but they come with some performance penalties. It sounds like you're just trying to get a development instance running, so reducing the durability may be worthwhile, at least while you do your import. A good summary of reducing durability is found in MySQL Community Manager Morgan Tocker's blog: Reducing MySQL durability for testing. A sketch of typical settings follows this list.
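
For illustration, here is roughly what the two export styles look like in the dump file (the table and column names here are made up for the example); the multi-row form replays far fewer statements for the same data:

    -- One INSERT per row (slow to replay; 'sales_item' is a hypothetical example table):
    INSERT INTO sales_item (id, sku, qty) VALUES (1, 'ABC-1', 2);
    INSERT INTO sales_item (id, sku, qty) VALUES (2, 'ABC-2', 1);

    -- Multiple rows per INSERT (what the "insert multiple rows" / extended-insert option produces):
    INSERT INTO sales_item (id, sku, qty) VALUES
      (1, 'ABC-1', 2),
      (2, 'ABC-2', 1),
      (3, 'ABC-3', 5);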

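As a rough sketch of what reducing durability can look like for a throwaway development import (these are standard MySQL settings, but read the linked post for the full list and the trade-offs):

    -- Flush the InnoDB log to disk about once a second instead of at every commit
    -- (a crash can lose roughly the last second of transactions, acceptable for a scratch import):
    SET GLOBAL innodb_flush_log_at_trx_commit = 2;

    -- Don't sync the binary log on every commit (only matters if binary logging is enabled):
    SET GLOBAL sync_binlog = 0;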

Re your new questions and errors:

A lot of people report similar errors when importing a large dump file created by phpMyAdmin or Drupal or other tools.

The most likely cause is that you have some data in the dump file that is larger than max_allowed_packet. This MySQL config setting is the largest size for an individual SQL statement or an individual row of data. When you exceed this in an individual SQL statement, the server aborts that SQL statement, and closes your connection. The mysql client tries to reconnect automatically and resume sourcing the dump file, but there are two side-effects:

  • Some of your rows of data failed to load.
  • The session variables that preserve @time_zone and other settings during the import are lost, because they are scoped to the session. When the reconnect happens, you get a new session.
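
To see where those variable names come from: a mysqldump file saves the original session settings into user variables near the top and restores them at the very end, roughly like this (an illustrative excerpt; the exact version-comment numbers vary by mysqldump version):

    -- Near the top of the dump: remember the session's settings, then override them
    /*!40103 SET @OLD_TIME_ZONE=@@TIME_ZONE */;
    /*!40103 SET TIME_ZONE='+00:00' */;
    /*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0 */;

    -- At the end of the dump: put the remembered settings back
    /*!40103 SET TIME_ZONE=@OLD_TIME_ZONE */;
    /*!40014 SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS */;

If the connection is dropped and re-established partway through, the @OLD_* user variables from the original session are gone, so they read as NULL, and the restore statements at the end produce exactly the ERROR 1231 messages shown above.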

The fix is to increase your max_allowed_packet. The default is 4MB on MySQL 5.6, and only 1MB on earlier versions. You can check the current value of this config like so:

mysql> SELECT @@max_allowed_packet;
+----------------------+
| @@max_allowed_packet |
+----------------------+
|              4194304 |
+----------------------+

You can increase it as high as 1GB. Each session picks up the global value when it connects, so after changing the global, exit and reconnect before running the import:

mysql> set global max_allowed_packet = 1024*1024*1024;

Then try the import again:

mysql> source mydump.sql

Also, if you're measuring the size of the tables with a command like SHOW TABLE STATUS or a query against INFORMATION_SCHEMA.TABLES, you should know that the TABLE_ROWS count is only an estimate -- it can be pretty far off, like +/- 10% (or more) of the actual number of rows of the table. The number reported is even likely to change from time to time, even if you haven't changed any data in the table. The only true way to count rows in a table is with SELECT COUNT(*) FROM SomeTable.
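
For example (the schema and table names here are just placeholders), the estimate and the exact count can disagree noticeably:

    -- Estimate only; InnoDB derives this from sampled statistics:
    SELECT TABLE_ROWS FROM INFORMATION_SCHEMA.TABLES
    WHERE TABLE_SCHEMA = 'magento' AND TABLE_NAME = 'sales_flat_order_item';

    -- Exact, but it has to scan the table:
    SELECT COUNT(*) FROM magento.sales_flat_order_item;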

OTHER TIPS

SOLUTION

For anyone who wanted a step-by-step:

  • Using PuTTY, grab a mysqldump of the database (the > at the start of the line below is just the shell prompt, so don't type it; replace the capitalized placeholders with your own info):

> mysqldump -uUSERNAME -p DATABASENAME > DATABASE_DUMP_FILE_NAME.sql

  • You'll get a password prompt; type the password and hit Enter, then wait until you get a prompt again. If you're using an FTP client, go to the root of your host and you should see your file there; download it.
  • Locally, get a mysql prompt by navigating to where your mysql.exe file is (there are a few ways of doing this; this is one of them) and typing the following (add -p if your local MySQL user has a password):

> mysql.exe -u USERNAME NEW_DATABASE

  • Now you're in the mysql prompt. Turn on warnings...just in case

mysql > \W;

  • Increase the max_allowed_packet to a true gig. I've seen references to also changing net_buffer_length, but after 5.1.31 that setting doesn't seem to take effect when changed this way (see the links in the references at the bottom).

mysql > SET global max_allowed_packet = 1024*1024*1024;

  • Now import your sql file

mysql > source C:\path\to\DATABASE_DUMP_FILE_NAME.sql

If you want to check whether all of the records imported, you can either run SELECT COUNT(*) FROM SomeTable, or:

  • Go to C:\wamp\apps\phpmyadmin\config.inc.php
  • At the bottom before the ?> add:

/* Show exact row counts for tables with up to this many rows */
$cfg['MaxExactCount'] = 2000000;

  • This is only recommended for a development platform, but it's really handy when you have to scan a bunch of tables/databases. It will probably slow things down with large data sets.
Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow