Question

I'm trying to alter a bytea column to have type oid and still retain the values.

I have tried using queries like:

ALTER TABLE mytable ADD COLUMN mycol_tmp oid;
UPDATE mytable SET mycol_tmp = CAST(mycol as oid);
ALTER TABLE mytable DROP COLUMN mycol;
ALTER TABLE mytable RENAME mycol_tmp TO mycol;

But that just gives me the error:

ERROR: cannot cast type bytea to oid

Is there any way to achieve what I want?

Solution

A column of type oid is just a reference to the binary contents, which are actually stored in the system table pg_largeobject. In terms of storage, an oid is a 4-byte integer. A column of type bytea, on the other hand, holds the actual contents.

To transfer a bytea into a large object, a new large object should be created with the file-like API of large objects: lo_create() to get a new OID, then lo_open() in write mode, then writes with lo_write() or lowrite(), and then lo_close().

This can't reasonably be done with just a cast.

Basically, you would need to write a ~10-line piece of code in the language of your choice (at least one that supports the large object API, which includes plpgsql) to do this conversion.
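
For example, assuming a helper function blob_write(bytea) returns oid like the ones shown further down, the migration from the question could then look roughly like this (a sketch, untested):

-- Convert the bytea column by feeding each value through the helper
ALTER TABLE mytable ADD COLUMN mycol_tmp oid;
UPDATE mytable SET mycol_tmp = blob_write(mycol);
ALTER TABLE mytable DROP COLUMN mycol;
ALTER TABLE mytable RENAME COLUMN mycol_tmp TO mycol;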

OTHER TIPS

Postgres 9.4 adds a built-in function for this:

lo_from_bytea(loid oid, string bytea)

From the release notes:

  • Add SQL functions to allow large object reads/writes at arbitrary offsets (Pavel Stehule)
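
With it, the intermediate UPDATE from the question becomes a one-liner; a minimal sketch using the question's table and column names (passing 0 as the first argument lets Postgres assign a free OID):

UPDATE mytable SET mycol_tmp = lo_from_bytea(0, mycol);

On 9.4+ it should even be possible to change the column type in place, something like: ALTER TABLE mytable ALTER COLUMN mycol TYPE oid USING lo_from_bytea(0, mycol);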

For older versions, this is more efficient than what has been posted before:

CREATE OR REPLACE FUNCTION blob_write(bytea)
  RETURNS oid AS
$func$
DECLARE
   loid oid := lo_create(0);
   lfd   int := lo_open(loid, 131072);  -- = 2^17 = 0x20000
   -- symbolic constant defined in the header file libpq/libpq-fs.h:
   -- #define   INV_WRITE   0x00020000
BEGIN
   PERFORM lowrite(lfd, $1);
   PERFORM lo_close(lfd);
   RETURN loid;
END
$func$  LANGUAGE plpgsql VOLATILE STRICT;

The STRICT modifier makes the function return NULL immediately for NULL input, which is simpler than handling NULL manually in the function body.
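
A quick test call, assuming the function above has been created (decode() turns a hex string into bytea):

SELECT blob_write(decode('00AB', 'hex'));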

I think the best answer can be found at Grace Batumbya's blog, which describes it like this:

The algorithm is pretty simple: get the binary data; if it is null, return null. Otherwise, create a large object and, in the lowrite function, pass it the binary value instead of a path to a file.

The code for the procedure is below. Note that the lo contrib module (which provides lo_manage) should be installed for this to work.
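
On current Postgres versions that module can be installed with the following (assuming the contrib packages are available on the server):

CREATE EXTENSION lo;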

create or replace function blob_write(lbytea bytea)
   returns oid
   volatile
   language plpgsql as
$f$
   declare
      loid oid;
      lfd integer;
      lsize integer;
begin
   if(lbytea is null) then
      return null;
   end if;

   loid := lo_create(0);
   lfd := lo_open(loid, 131072);   -- 131072 = 0x20000 = INV_WRITE
   lsize := lowrite(lfd, lbytea);  -- number of bytes written
   perform lo_close(lfd);
   return loid;
end;
$f$;
CREATE CAST (bytea AS oid) WITH FUNCTION blob_write(bytea) AS ASSIGNMENT;

So now the following code works:

CREATE TABLE bytea_to_lo ( lo largeObj );
INSERT INTO bytea_to_lo VALUES ( DECODE('00AB','hex') );

I've tried it and it works like a charm.
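
With the cast registered, the original CAST from the question should also work now; a sketch, untested:

UPDATE mytable SET mycol_tmp = CAST(mycol AS oid);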

I am sure it's late, but this is for anybody having the same problem in the future.

I also faced a similar issue where I had old data stored as text directly in the columns, not as OIDs, and when I tried to use that data with the upgraded application I too was getting errors.

I used the knowledge from this thread to solve the issue, and I strongly feel that whoever stumbles upon this question would like to take a look at the same approach.

To solve the problem, I successfully used the blob_write procedure from Grace Batumbya's blog: http://gbatumbya.wordpress.com/2011/06/.

Licensed under: CC-BY-SA with attribution