Question

I wanted to backup my database with PHP.

I tested the linked script, but it never finished. I tried running a REPAIR TABLE query on each table before the SELECT, but it didn't help.

So I figured out that if I just skip two tables (you can see which ones in the code) it works fine:

<?

error_reporting(E_ALL);
ini_set('error_reporting',1);
require('../basedatos.php');

echo 'included<br>';
/* backup the db OR just a table */
function backup_tables($host,$user,$pass,$name,$tables = '*')
{


    echo '1<br>';
    //get all of the tables
    if($tables == '*')
    {
        $tables = array();
        $result = mysql_query('SHOW TABLES') or die(msyql_error());
        while($row = mysql_fetch_row($result))
        {
            $tables[] = $row[0];
        }
    }
    else
    {
        $tables = is_array($tables) ? $tables : explode(',',$tables);
    }
    echo '2<br>';
    //cycle through
    foreach($tables as $table)
    {
        if($table == 'etiquetas' || $table == 'links') continue;
        $repair = mysql_query("REPAIR table $table") or die(mysql_error());
        echo '3- '.$table.'<br>';
        $result = mysql_query('SELECT * FROM '.$table) or die(msyql_error());
        $num_fields = mysql_num_fields($result);

        $return.= 'DROP TABLE '.$table.';';
        $row2 = mysql_fetch_row(mysql_query('SHOW CREATE TABLE '.$table))  or die(msyql_error());
        $return.= "\n\n".$row2[1].";\n\n";

        for ($i = 0; $i < $num_fields; $i++) 
        {
            while($row = mysql_fetch_row($result))
            {
                $return.= 'INSERT INTO '.$table.' VALUES(';
                for($j=0; $j<$num_fields; $j++) 
                {
                    $row[$j] = addslashes($row[$j]);
                    $row[$j] = ereg_replace("\n","\\n",$row[$j]);
                    if (isset($row[$j])) { $return.= '"'.$row[$j].'"' ; } else { $return.= '""'; }
                    if ($j<($num_fields-1)) { $return.= ','; }
                }
                $return.= ");\n";
            }
        }
        $return.="\n\n\n";

    }
    echo '4<br>';
    //save file
    $handle = fopen('db-backup-'.time().'-'.(md5(implode(',',$tables))).'.sql','w+');
    fwrite($handle,$return);
    fclose($handle);
}
backup_tables('localhost','username','password','*');
?>

Is there any way to find the rows that are giving me a problem so I can edit/delete them?

-PS-

Also, I don't get any errors if I don't skip them; the script just never gets to the end (that's why I added some ugly logging). Any idea why?

-EDIT-

Also, if I try to export the database with another tool (sqlBuddy, for example), I get errors there too (screenshot omitted).


Solution

As stated by many, this script (and the simple "MySQL dump via PHP" thing) is far from optimal, but still better than no backup at all.

Since you can only use PHP to access the database, you should use it to find out what is going wrong.

Here is an adaptation of your script that will dump only one table to a file. It's a debug script, not a production export tool (though do what you want with it); that's why it prints a debug line after saving every single row of the table.

As suggested by Amit Kriplani, data is appended to the destination file at each iteration. I don't think PHP memory is your problem, though: you should get a PHP error if you run out of memory, or at least the server should return an HTTP 500 instead of running the script forever.

function progress_export( $file, $table, $idField, $offset = 0, $limit = 0 )
{

    debug("Starting export of table $table to file $file");

    // empty the output file
    file_put_contents( $file, '' );
    $return = '';

    
    debug("Dumping schema");

    $return.= 'DROP TABLE '.$table.';';
    $row2 = mysql_fetch_row(mysql_query("SHOW CREATE TABLE $table"));
    $return.= "\n\n".$row2[1].";\n\n";

    
    file_put_contents( $file, $return, FILE_APPEND );

    debug("Schema saved to $file");




    $return = '';

    debug( "Querying database for records" );

    $query = "SELECT * FROM $table ORDER BY $idField";

    // offset/limit are optional, for further debugging (the offset may legitimately be 0)
    if ( $limit )
    {
        $query .= " LIMIT $offset, $limit";
    }

    $result = mysql_query($query);
    
    $i = 0;
    while( $data = mysql_fetch_assoc( $result ) )
    {
        // Let's be verbose but at least, we will see when something goes wrong
        debug( "Exporting row #".$data[$idField].", rows offset is $i...");

        $return = "INSERT INTO $table (`". implode('`, `', array_keys( $data ) )."`) VALUES (";
        $coma = '';

        foreach( $data as $column )
        {
            $return .= $coma. "'". mysql_real_escape_string( $column )."'";
            $coma = ', ';
        }

        $return .=");\n";

        file_put_contents( $file, $return, FILE_APPEND );

        debug( "Row #".$data[$idField]." saved");

        $i++;
        
        // Be sure to send data to the browser right away
        @ob_flush();
        flush();
    }

    debug( "Completed export of $table to file $file" );
}



function debug( $message )
{
    echo '['.date( "H:i:s" )."] $message <br/>";
}


// Update those settings :

$host = 'localhost';
$user = 'user';
$pass = 'pwd';
$base = 'database';

// Primary key to be sure how record are sorted
$idField = "id"; 

$table   = "etiquetas";

// add some writable directory
$file = "$table.sql";


$link = mysql_connect($host,$user,$pass);
mysql_select_db($base,$link); 




// Launching the script
progress_export( $file, $table, $idField );

Edit the settings at the end of the script and run it against one of your two problem tables.

You should see output while the script is still processing, with a reference to each row being processed, like this:

[23:30:13] Starting export of table ezcontentobject to file ezcontentobject.sql
[23:30:13] Dumping schema
[23:30:13] Schema saved to ezcontentobject.sql
[23:30:13] Querying database for records
[23:30:13] Exporting row #4, rows offset is 0...
[23:30:13] Row #4 saved
[23:30:13] Exporting row #10, rows offset is 1...
[23:30:13] Row #10 saved
[23:30:13] Exporting row #11, rows offset is 2...
[23:30:13] Row #11 saved
[23:30:13] Exporting row #12, rows offset is 3...
[23:30:13] Row #12 saved
[23:30:13] Exporting row #13, rows offset is 4...
[23:30:13] Row #13 saved

etc.

If the script completes...

...well, you will have a backup of your table (beware: I did not test the generated SQL)!

I guess it won't complete:

If the script does not reach the first "Exporting row..." debug statement,

then the problem lies in the query itself.

You should then try to limit the query with the offset and limit parameters, and narrow it down by bisection to find out where it hangs.

Example generating a query limited to the first 1000 rows:

// Launching the script
progress_export( $file, $table, $idField, 0, 1000 );

If the script shows some rows being exported before hanging,

before blaming the last row ID displayed, you should try to:

  1. Run the script again, to see if it hangs on the same row. This is to see if we are facing a "random" issue (it is never really random).

  2. Add an offset to the function call (see optional parameters), and run the script a third time, to see if it still hangs on the same row.

For example, 50 as the offset and some big number as the limit:

// Launching the script
progress_export( $file, $table, $idField, 50, 600000 );

This is to check whether the row itself is causing the issue, or whether it is a critical number of rows / amount of data...

  • If the same last row comes back every time, please inspect it and give us feedback.

  • If adding an offset changes the last processed row in a predictable way, we are likely facing a resource issue somewhere.

The solution will then be to split the export into chunks if you can't raise the allocated resources. You can accomplish this with a script close to this one, outputting some HTML/JavaScript to redirect to itself with offset and limit as parameters until the export is finished; see the sketch after this list. (I'll edit the answer if that is what we eventually need.)

  • If the row changes almost every time, it's going to be more complicated...
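
To make the chunked approach concrete, here is a rough, untested sketch of such a self-redirecting page. It assumes the connection code and the $file / $table / $idField settings from the script above, an arbitrary chunk size of 1000 rows, and that progress_export() is first tweaked so it only truncates the file and dumps the schema when the offset is 0:

// Hypothetical chunk size; pick whatever finishes well within your limits.
$chunkSize = 1000;
$offset    = isset($_GET['offset']) ? (int) $_GET['offset'] : 0;

// Export one chunk (progress_export() must only truncate the file when $offset is 0).
progress_export( $file, $table, $idField, $offset, $chunkSize );

// Check whether there is more to do.
$res   = mysql_query("SELECT COUNT(*) FROM $table") or die(mysql_error());
$row   = mysql_fetch_row($res);
$total = (int) $row[0];

if ( $offset + $chunkSize < $total )
{
    $next = $offset + $chunkSize;
    debug("Chunk done, redirecting to offset $next");
    // Reload the page with the next offset so each request stays short.
    echo "<script>window.location = '?offset=$next';</script>";
}
else
{
    debug("Export finished, $total rows covered");
}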

Some clues:

I don't have any experience with VPSes, but don't you have some limitations on CPU usage?

Could it be that your process gets queued in some way if you use too many resources at a time?

What about the tables that are dumped without issue? Are any of them as large as the two causing the problem?

OTHER TIPS

I don't know why this "blocks"... but the script will only work for very basic databases.

For example, how does it handle foreign key constraints? This is only a suggestion, and you have probably discarded it on purpose, but why not use mysqldump?

From your shell:

mysqldump -h host -u user -p my_database > db-backup.sql
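
If you only have PHP access but shell commands are allowed on your host, a rough sketch like the one below could invoke mysqldump from a PHP script (this assumes the mysqldump binary is installed and on the PATH, that shell_exec() is not disabled, and that the credentials and file name are placeholders):

<?php
// Placeholder settings -- adjust to your environment.
$host = 'localhost';
$user = 'user';
$pass = 'password';
$base = 'my_database';
$dest = 'db-backup-' . date('Ymd-His') . '.sql';

// Build the command with escaped arguments; -p must be glued to the password.
$cmd = sprintf(
    'mysqldump -h %s -u %s -p%s %s > %s',
    escapeshellarg($host),
    escapeshellarg($user),
    escapeshellarg($pass),
    escapeshellarg($base),
    escapeshellarg($dest)
);

shell_exec($cmd);
echo "Dump written to $dest";
?>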

EDIT: As suggested by Riggs Folly, phpMyAdmin has backup facilities and is usually available on hosting.

In fact, even if it is not available on your hosting, you could still install it on your own HTTP server and configure it to access the remote DB server:

http://www.mittalpatel.co.in/access_mysql_database_hosted_remote_server_using_phpmyadmin

Instead of saving the output to a variable, use file_put_contents with the FILE_APPEND flag to write each query to a file. If you think it's taking a lot of time, you can inspect the file with a viewer, or create the file in the webroot directory and open it in a browser to see what's happening...
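
A minimal sketch of that idea, reusing $result and $table from the original script's loop (the file name is just an illustration):

$backupFile = 'db-backup.sql';            // illustrative name; put it somewhere writable
file_put_contents($backupFile, '');       // truncate once at the start

while ($row = mysql_fetch_row($result))
{
    $line = 'INSERT INTO '.$table.' VALUES ("'
          . implode('", "', array_map('mysql_real_escape_string', $row))
          . '");'."\n";
    // Write each statement as soon as it is built, so memory use stays flat
    // and you can watch the file grow while the script runs.
    file_put_contents($backupFile, $line, FILE_APPEND);
}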

If you have the same backup problem both with custom scripts and with tools such as sqlBuddy, the conclusion is that the problem lies with your tables and/or the DB more generally.

I would try to copy the problematic tables and back up the copies instead of the originals, to see what happens:

CREATE TABLE etiquetas_copy AS SELECT * FROM etiquetas

If you cannot back up the copies either, my guess is that the number of rows is definitely the problem. Some providers arbitrarily (and sometimes silently) kill scripts or DB requests that use too many resources.

You could try to do your backup 1000 rows at a time, as suggested in a comment. Something like this:

    $result = mysql_query('SELECT * FROM '.$table." LIMIT $n, 1000") or die(mysql_error());

You will have to wrap that and the few lines around it in a loop that keeps going while the number of fetched rows is exactly 1000, so that the next batch is read once those 1000 rows have been processed.
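
A rough, untested sketch of that loop, reusing $table and the mysql_* connection from the original script (the INSERT-building part is elided):

$n = 0;
do
{
    $result = mysql_query('SELECT * FROM '.$table." LIMIT $n, 1000") or die(mysql_error());
    $fetched = 0;

    while ($row = mysql_fetch_row($result))
    {
        // ... build the INSERT statement for $row and append it to the file ...
        $fetched++;
    }

    $n += 1000;
}
while ($fetched == 1000);   // a full batch means there may be more rows to read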

Have you tried to run your script only on the problematic tables? Do these tables have binary fields (like BLOBs)?

Maybe you could try hex-encoding the fields on the MySQL side before handling them in PHP: SELECT HEX(field1), HEX(field2), HEX(field3) FROM links

And then write your INSERT statements like this: INSERT INTO links (field1, field2, field3) VALUES (UNHEX(...), UNHEX(...), UNHEX(...)); with the hex strings from the SELECT inside the UNHEX() calls.
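
A rough, untested sketch of that approach for the links table (the column names are only examples, and $file is the dump file from earlier):

// Ask MySQL for a binary-safe hex representation of each column...
$result = mysql_query("SELECT HEX(field1) AS f1, HEX(field2) AS f2, HEX(field3) AS f3 FROM links")
    or die(mysql_error());

while ($row = mysql_fetch_assoc($result))
{
    // ...and let MySQL decode it again at restore time with UNHEX().
    $insert = "INSERT INTO links (field1, field2, field3) VALUES ("
            . "UNHEX('".$row['f1']."'), "
            . "UNHEX('".$row['f2']."'), "
            . "UNHEX('".$row['f3']."'));\n";
    file_put_contents($file, $insert, FILE_APPEND);
}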

Also, you should use preg_replace instead of ereg_replace. It is much faster, so calling ereg_replace on large data might slow your script down.
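
For the script in the question, that would mean replacing the ereg_replace() line with something like this (a drop-in sketch):

// Same effect as the ereg_replace() call: turn real newlines into a literal \n
$row[$j] = preg_replace("/\n/", "\\\\n", $row[$j]);
// a plain str_replace("\n", "\\n", $row[$j]) would also do the job here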

Finally, you should really look into your error logging configuration, because an error should occur somewhere, whether it is a PHP error, a memory-limit error, a max-execution-time error, etc.

Good luck with your project.

The script you've chosen is a waste of time. There are certainly better ones; a classic, maintained tool built exactly for this job is MySQLDumper. Anyway, I don't want to give only tool recommendations here, because your question also asks why this happens.

If you really want to find out, here is a tip from the troubleshooting department: you most likely hit a memory limit here. However, you don't see it because it might be a hard limit on your server; under such circumstances the script is simply killed without PHP producing any error message.

Also, instead of only reporting errors to STDOUT/STDERR, you can log to a file. I really suggest that for troubleshooting. This includes tailing the operating system's own logs, like /var/log/messages, but also configuring PHP to write to a logfile (for hard limits you won't see anything in there, as PHP is simply killed, but I still suggest you get comfortable with enabling PHP error logging and knowing where to find it).
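
For example, a minimal way to switch on PHP error logging from the script itself (the log path is a placeholder and must be writable by the web server; the same settings can of course go into php.ini instead):

// Report everything and send it to a logfile instead of (only) the browser.
error_reporting(E_ALL);
ini_set('display_errors', 1);
ini_set('log_errors', 1);
ini_set('error_log', '/path/to/writable/php-errors.log');   // placeholder path

// Handy to confirm that logging works and when the run started:
error_log('backup script started at ' . date('c'));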

And given the sheer amount of data you have here, if you put it all into one string in memory per table, that string is just too large. A simple adaptation of the script would append to a file instead of to the string. That keeps memory requirements low (but increases disk I/O); it is the usual trade-off between RAM and disk storage. You normally prefer RAM first because it's faster, but here you don't have much of it. The original script only flushes once per table; it does not buffer writes by string length.
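
A hedged sketch of such a length-triggered buffer, with an arbitrary 1 MB threshold and a hypothetical build_insert_statement() helper standing in for the INSERT-building code of the script:

$buffer    = '';
$threshold = 1024 * 1024;   // arbitrary: flush roughly every 1 MB

while ($row = mysql_fetch_row($result))
{
    $buffer .= build_insert_statement($table, $row);   // hypothetical helper

    if (strlen($buffer) >= $threshold)
    {
        // Trade a little disk I/O for a flat memory footprint.
        file_put_contents($file, $buffer, FILE_APPEND);
        $buffer = '';
    }
}

// Write whatever is left over.
file_put_contents($file, $buffer, FILE_APPEND);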

BTW, even if the server you run this on had, let's say, 64 GB of RAM available for your script, it would still fail once the string exceeds 2 gigabytes, because PHP has a string size limit.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow