Question

I have plenty of tables in MySQL which contain the zero date 0000-00-00 00:00:00 in DATETIME columns.

Using some sort of admin setting, is it possible to disable zero dates and replace all zeros with a static value, say 1-1-1900?

EDIT:

I am working on a database migration which involves moving more than 100 MySQL tables to SQL Server.

Can I avoid executing scripts on each table manually by setting a database-wide mode?


Solution

To change existing values you could use a query like this:

UPDATE tablename SET date_column = '1900-01-01' WHERE date_column = '0000-00-00';
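Since the question's columns are DATETIME rather than DATE, a variant of the same query with the full zero timestamp (using the same placeholder table and column names) would be:

UPDATE tablename SET date_column = '1900-01-01 00:00:00' WHERE date_column = '0000-00-00 00:00:00';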

If you want to automate the UPDATE query you can use a prepared statement (here _schema, _table and _column are the variables the stored procedure below fetches from its cursor):

SET @sql_update=CONCAT_WS(' ', 'UPDATE', CONCAT(_schema, '.', _table),
                               'SET', _column, '=', '\'1900-01-01\'',
                               'WHERE', _column, '=', '\'0000-00-00\'');

PREPARE stmt FROM @sql_update;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;

And you can loop through all columns in all tables of the current schema that are declared as DATE or DATETIME:

SELECT
  table_schema,
  table_name,
  column_name
FROM
  information_schema.columns
WHERE
  table_schema=DATABASE() AND data_type LIKE 'date%'

To loop through all columns you could use a stored procedure:

DELIMITER //
CREATE PROCEDURE update_all_tables() BEGIN
  DECLARE done BOOLEAN DEFAULT FALSE;
  DECLARE _schema VARCHAR(255);
  DECLARE _table VARCHAR(255);
  DECLARE _column VARCHAR(255);
  -- Cursor over every DATE/DATETIME column in the current schema,
  -- with identifiers already backtick-quoted for the dynamic SQL below
  DECLARE cur CURSOR FOR SELECT
                           CONCAT('`', REPLACE(table_schema, '`', '``'), '`'),
                           CONCAT('`', REPLACE(table_name, '`', '``'), '`'),
                           CONCAT('`', REPLACE(column_name, '`', '``'), '`')
                         FROM
                           information_schema.columns
                         WHERE
                           table_schema=DATABASE() AND data_type LIKE 'date%';

  -- Stop the loop once the cursor is exhausted
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done := TRUE;

  OPEN cur;

  columnsLoop: LOOP
    FETCH cur INTO _schema, _table, _column;
    IF done THEN
      LEAVE columnsLoop;
    END IF;

    -- Build and run one UPDATE per column
    SET @sql_update=CONCAT_WS(' ', 'UPDATE', CONCAT(_schema, '.', _table),
                                   'SET', _column, '=', '\'1900-01-01\'',
                                   'WHERE', _column, '=', '\'0000-00-00\'');

    PREPARE stmt FROM @sql_update;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;

  END LOOP columnsLoop;

  CLOSE cur;
END//
DELIMITER ;
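Once the procedure exists, it only needs to be invoked once and can then be dropped; a minimal usage sketch:

CALL update_all_tables();
DROP PROCEDURE update_all_tables;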


OTHER TIPS

This is an old question, but I was running into a similar problem, except I was trying to set the 0000-00-00 dates to NULL.

I was trying to run this query:

UPDATE tablename SET date_column = NULL WHERE date_column = '0000-00-00';

and was getting the following error:

Incorrect date value: '0000-00-00' for column 'date_column' at row 1

It turns out the following query, without quotes around 0000-00-00, worked!

UPDATE tablename SET date_column = NULL WHERE date_column = 0000-00-00;
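One caveat worth adding (an assumption, not part of the original answer): setting the column to NULL only works if the column is nullable. If it was declared NOT NULL, it would first need to be changed, reusing the same placeholder names:

-- Make the column nullable first; otherwise setting it to NULL fails in strict mode
-- or is silently coerced back to the zero date in non-strict mode.
ALTER TABLE tablename MODIFY date_column DATETIME NULL;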

You can change existing values by running this query:

update your_table
set date_column = '1900-01-01'
where date_column = '0000-00-00'

And you can change the definition of your table to a specific default value or NULL like this:

ALTER TABLE your_table 
CHANGE date_column date_column date NOT NULL DEFAULT '1900-01-01'

You have two options.

Option One - In the programming language of your choice (you can even do this with Stored Procedures):

  1. Loop through your INFORMATION_SCHEMA, probably COLUMNS, and build a query to get back the tables you need to affect, i.e.

SELECT TABLE_NAME FROM INFORMATION_SCHEMA.COLUMNS 
WHERE DATA_TYPE='date' AND TABLE_SCHEMA='<YOUR DB NAME>'

or maybe even better

SELECT TABLE_NAME,COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS 
WHERE DATA_TYPE in ('timestamp','date','datetime')
AND TABLE_SCHEMA='<YOUR DB NAME>'
  2. Store the results and then loop through them. On each loop, create a new query. In MySQL that would be a Stored Procedure with Prepared Statements, AKA:

SET @string = CONCAT("UPDATE ", @table_name, " SET ", @column_name, "='1900-01-01' WHERE ", @column_name, "='0000-00-00 00:00:00'");

PREPARE stmt FROM @string; EXECUTE stmt; DEALLOCATE PREPARE stmt;

That wouldn't be too tough to write up.

Option Two - Another example, while certainly more low-tech, may be no less effective. After doing a mysqldump and before doing your import, you can do a simple search-and-replace in the file. Vim or any other text editor will handle this quite well and will let you replace 0000-00-00 00:00:00 with 1900-01-01 00:00:00. Because you are almost certainly not going to find situations where you DON'T want that to be replaced, this could be the easiest option for you. Just throwing it out there!

In my opinion, the simplest way is to generate all the UPDATE statements:

select
concat('UPDATE ',TABLE_NAME,' SET ',COLUMN_NAME,'=NULL WHERE ',COLUMN_NAME,'=0;')
from information_schema.COLUMNS 
where TABLE_SCHEMA = 'DATABASE_NAME' and DATA_TYPE in ('datetime', 'date', 'time');

Just replace DATABASE_NAME with your DB name, and execute all the generated updates.
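For illustration, the generator would emit one row per date-typed column, along the lines of (the table and column names here are made up):

UPDATE orders SET created_at=NULL WHERE created_at=0;
UPDATE customers SET birth_date=NULL WHERE birth_date=0;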

Alter your table as follows:

ALTER TABLE `test_table`
  CHANGE COLUMN `created_dt` `created_dt` date NOT NULL DEFAULT '1900-01-01';

But before altering the table you need to update the existing values, as juergen d said:

update test_table
set created_dt= '1900-01-01'
where created_dt= '0000-00-00'

You can update your table by filtering rows where the date equals zero, and you can define a default value for the column.
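A minimal sketch of those two steps, reusing the placeholder names from above:

-- Replace the zero dates (comparing against 0 matches the zero date)
UPDATE your_table SET date_column = '1900-01-01' WHERE date_column = 0;

-- Give the column a default so new rows no longer fall back to the zero date
ALTER TABLE your_table ALTER date_column SET DEFAULT '1900-01-01';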

Preface: you might want to check the concept of ETL in data warehousing; there are tools which can do simple conversions like this for you, even open-source ones like Kettle/Pentaho.

But this one is easy when you use any programming language capable of composing SQL queries. I have made an example in Perl, but PHP or Java would also do the job:

#!/usr/bin/perl

use strict;
use warnings;
use DBI;

my $user='geheim';
my $pass='secret';

my $dbh = DBI->connect( "dbi:mysql:host=localhost:database=to_convert:port=3306", $user, $pass ) or die $DBI::errstr;

# Prints out all the statements needed, might be checked before executed
my @tables = @{ $dbh->selectall_arrayref("show tables") };
foreach my $tableh (@tables) {
  my $tabname = $tableh->[0];
  my $sth = $dbh->prepare("explain $tabname");
  $sth->execute();
  while (my $colinfo = $sth->fetchrow_hashref) {
    if ($colinfo->{'Type'} =~ /date/i && $colinfo->{'Null'} =~ /yes/i) {
      print ("update \`$tabname\` set \`" . $colinfo->{'Field'} . "\` = '1990-01-01' where \`" . $colinfo->{'Field'} . "\` IS NULL; \n");
      print ("alter table \`$tabname\` change column \`" . $colinfo->{'Field'} . "\`  \`" . $colinfo->{'Field'} . "\` " . $colinfo->{'Type'} . " not null default '1990-01-01'; \n");
    }
  }
}

This does not change anything, but when the database has tables like:

localmysql [localhost]> explain dt;
+-------+------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------+------+------+-----+---------+-------+
| a     | date | YES  |     | NULL    |       |
+-------+------+------+-----+---------+-------+
1 row in set (0.00 sec)

localmysql [localhost]> explain tst
    -> ;
+-------+----------+------+-----+---------+-------+
| Field | Type     | Null | Key | Default | Extra |
+-------+----------+------+-----+---------+-------+
| id    | int(11)  | YES  |     | NULL    |       |
| atime | datetime | YES  |     | NULL    |       |
+-------+----------+------+-----+---------+-------+
2 rows in set (0.00 sec)

it produces the statements:

update `dt` set `a` = '1990-01-01' where `a` IS NULL; 
alter table `dt` change column `a`  `a` date not null default '1990-01-01'; 
update `tst` set `atime` = '1990-01-01' where `atime` IS NULL; 
alter table `tst` change column `atime`  `atime` datetime not null default '1990-01-01'; 

This list can then be reviewed and executed statement by statement.

Hope that helps!

As this is for a migration, I would suggest that you simply wrap your tables in views which do the conversion as you export the data. I have used the concept below moving data from MySQL to Postgres, which has the same problem.

Each table should be proxied by something like this:

CREATE VIEW migration_mytable AS 
SELECT field1, field2, 
    CASE field3
         WHEN '0000-00-00 00:00:00' 
         THEN '1900-01-01 00:00:00' 
         ELSE field3
    END AS field3
FROM mytable;

You should be able to write a script which generates this for you from the catalog, in case you have a great number of tables to take care of.
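A minimal sketch of such a generator, assuming the views are named migration_<table> as above and that only DATE, DATETIME and TIMESTAMP columns need the CASE wrapper (the naming and the exact column filter are assumptions, not from the original answer):

-- Emits one CREATE VIEW statement per table in the current schema;
-- date-typed columns get the CASE shown above, all others pass through unchanged.
SELECT CONCAT(
         'CREATE VIEW migration_', table_name, ' AS SELECT ',
         GROUP_CONCAT(
           IF(data_type IN ('date', 'datetime', 'timestamp'),
              CONCAT('CASE ', column_name,
                     ' WHEN ''0000-00-00 00:00:00'' THEN ''1900-01-01 00:00:00''',
                     ' ELSE ', column_name, ' END AS ', column_name),
              column_name)
           ORDER BY ordinal_position SEPARATOR ', '),
         ' FROM ', table_name, ';') AS create_view_stmt
FROM information_schema.columns
WHERE table_schema = DATABASE()
GROUP BY table_name;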

You should then be able to import the data into your SQL Server table (using a bridge), and simply run a query like:

INSERT INTO sqlserver.mytable SELECT * FROM mysql.migration_mytable;
Licensed under: CC-BY-SA with attribution