Question

I was just doing a bunch of importing of an old database, and as I was watching the results roll by, I wondered: how/why does it know to convert 12/31/13 to 12/31/2013, but 12/31/99 to 12/31/1999? ("it" in this case is Python's strptime function with the "%m/%d/%y" format.)

What happens in 2050? Will a record for 12/31/50 be interpreted by default as 2050, 1950, or 2150?

I realize it may vary depending on language, and that there are probably flags/options to specifically force the use of a certain century. But when you don't explicitly do so, what's making that judgment call by default?
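For Python specifically (a nitpick: strftime formats dates; it's strptime that parses them), CPython's %y directive follows the POSIX convention: two-digit values 00-68 map to 2000-2068 and 69-99 map to 1969-1999. So a record for 12/31/50 would be read as 2050:

```python
from datetime import datetime

# %y pivot in CPython: 00-68 -> 2000-2068, 69-99 -> 1969-1999
print(datetime.strptime("12/31/13", "%m/%d/%y").year)  # 2013
print(datetime.strptime("12/31/99", "%m/%d/%y").year)  # 1999
print(datetime.strptime("12/31/50", "%m/%d/%y").year)  # 2050
```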

No correct solution

Other tips

It varies by database; each has a default time span.

SQL Server's default span is 1950-2049, so 50 becomes 1950 but 49 becomes 2049.

In MySQL and Oracle it's 1970-2069.

In PostgreSQL the window is centered on 2020 (whichever year is closest to 2020), which also works out to 1970-2069.

All of these defaults span into the future, so a custom cutoff is frequently necessary.
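A custom cutoff is simple to implement yourself. Here is a minimal sketch in Python (the function name and the SQL-Server-style default cutoff of 2049 are my own choices for illustration):

```python
def expand_two_digit_year(yy, cutoff=2049):
    """Map a two-digit year to a four-digit year using a cutoff.

    With the default cutoff of 2049 (SQL Server's default span),
    49 -> 2049 but 50 -> 1950.
    """
    century = cutoff - cutoff % 100   # e.g. 2000 for cutoff 2049
    year = century + yy
    if year > cutoff:                 # past the cutoff: previous century
        year -= 100
    return year

print(expand_two_digit_year(49))  # 2049
print(expand_two_digit_year(50))  # 1950
print(expand_two_digit_year(99))  # 1999
```

Changing the cutoff shifts the whole window, e.g. `cutoff=2069` reproduces the MySQL/Oracle span of 1970-2069.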

Two-digit years are easy to confuse. Take 2010-05-07 and shorten it to a two-digit year: 10-05-07? Nope, it should be 05-07-10, at least in SQL Server:

SELECT CAST('2010-05-07' AS DATE)  --  2010-05-07
      ,CAST('10-05-07' AS DATE)    --  2007-10-05
      ,CAST('05-07-10' AS DATE)    --  2010-05-07

Usually it is the invalid interpretations that make it possible to determine which date format was actually supplied.

You have to read the documentation of your compiler or DBMS.

And more importantly: avoid two-digit years whenever possible!

Most languages/DBMSes have a default, but allow setting the century break to a custom value.

Check the documentation to see if/how this is possible.
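In SQL Server, for example, the cutoff is a server-level configuration option named `two digit year cutoff` (an advanced option, defaulting to 2049). A sketch of changing it, which you should verify against your version's documentation:

```sql
-- 'two digit year cutoff' is an advanced option; expose it first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Interpret two-digit years as 1931-2030 instead of the default 1950-2049
EXEC sp_configure 'two digit year cutoff', 2030;
RECONFIGURE;
```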

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow