Question

I am using postgres and I have a table with a column whose data type is timestamp without time zone.

I would like to change the data type to bigint. I am trying to store the number of seconds since 1970 in the column. So something big like 1397597908756.

I am using python, something like:

d = dict()  # create a dictionary with key 'timestamp'
# get data from the server and store it in an array
d.update(dict(timestamp=data[1]))  # data[1] has the number of seconds

I hit the server many times, so storing the data in a dictionary is essential. The query is:

cursor.execute("INSERT INTO tablename (columnname) VALUES (%s)", (d['timestamp'],))

At this point, an exception is thrown:

invalid input syntax for type timestamp: 1397597908756

So I tried to change the data type from timestamp without time zone to bigint. I did:

ALTER TABLE tablename ALTER COLUMN columnname
SET DATA TYPE bigint USING updated::bigint;

I got the following error:

ERROR: cannot cast type timestamp without time zone to bigint


Solution 2

Erwin's answer is correct, I just want to address another aspect.

I am trying to store number of seconds since 1970 in the column

Please don't do that. It's annoying to query and is of no storage benefit over just using a timestamp.

Store timestamp without time zone. Or, preferably, use timestamp with time zone so it's properly adjusted for the TimeZone setting of the client.

If the client app needs epoch seconds, it can always select extract(epoch from my_field), .... But really, your app should be able to consume dates properly anyway.

It isn't clear what problem you're attempting to solve in your application by switching to storing raw epoch seconds; there may be cases where you have to.

But personally, rather than do something like this, I'd instead define an updatable view that returned epoch seconds and converted the input bigints into a timestamp for storage in the underlying table. So the application would think it had a table with epoch seconds, but it was really working with timestamps. Of course, I'd prefer to just make my app work with timestamps properly in the first place, converting to epoch seconds in the application if some client interface required it.
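A minimal sketch of the read side of such a view, assuming hypothetical table and column names (`tablename`, `id`, `updated`). Note that a view over an expression column is not auto-updatable in PostgreSQL, so writes through it would additionally need an INSTEAD OF trigger (or rules) converting the incoming bigint back with to_timestamp():

```sql
-- Hypothetical names; read side only.
CREATE VIEW tablename_epoch AS
SELECT id,
       extract(epoch FROM updated)::bigint AS updated_epoch
FROM tablename;
```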

Update:

In your case, just insert a datetime.datetime object.

import datetime

# Convert epoch seconds into a Python datetime.datetime object that psycopg2 will
# understand as a date and insert as a PostgreSQL timestamp value.
ts = datetime.datetime.fromtimestamp(d['timestamp'])

cursor.execute("INSERT INTO tablename (columnname) VALUES (%s)", (ts,))
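As a self-contained sketch of that conversion (no database required): note that the 13-digit value from the question looks like milliseconds rather than seconds, so it is divided by 1000 here; that assumption is worth verifying against your data source.

```python
import datetime

# The 13-digit value from the question is assumed to be milliseconds;
# a plain Unix-seconds value has about 10 digits.
epoch_seconds = 1397597908756 / 1000

# An explicit UTC timezone avoids dependence on the local TZ setting.
ts = datetime.datetime.fromtimestamp(epoch_seconds, tz=datetime.timezone.utc)

# Round trip: .timestamp() recovers the epoch seconds,
# mirroring extract(epoch FROM ...) on the database side.
round_trip = ts.timestamp()
```

The resulting `ts` object can be passed directly as the parameter in `cursor.execute`, and psycopg2 will adapt it to a PostgreSQL timestamp.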

Other tips

ALTER TABLE tablename ALTER COLUMN updated
TYPE bigint USING EXTRACT(EPOCH FROM updated);

The manual:

for date and timestamp values, the number of seconds since 1970-01-01 00:00:00 local time;

EXTRACT returns float8 with up to 6 fractional digits (microsecond resolution), which will be rounded when coerced into bigint. Typically, you'd rather truncate with date_trunc() first, or multiply by 1000000 before the cast to get microseconds instead of seconds (and no rounding loss).
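Under the same assumed table and column names, those two variants might look like:

```sql
-- Truncate to whole seconds first, so the cast involves no rounding:
ALTER TABLE tablename ALTER COLUMN updated
TYPE bigint USING extract(epoch FROM date_trunc('second', updated))::bigint;

-- Or keep full resolution by storing microseconds instead of seconds:
-- TYPE bigint USING (extract(epoch FROM updated) * 1000000)::bigint;
```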

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow