Question

PostgreSQL supports this operation, as shown below:

ALTER TABLE name
    SET SCHEMA new_schema

This operation doesn't work in Redshift. Is there any way to do it?

I tried updating pg_class to set relnamespace (the schema ID) for the table, which requires a superuser account with usecatupd set to true in the pg_shadow table. But I got a permission denied error. The only account that can modify pg system tables is rdsdb.

server=# select * from pg_user;
  usename   | usesysid | usecreatedb | usesuper | usecatupd |  passwd  | valuntil |            useconfig             
------------+----------+-------------+----------+-----------+----------+----------+----------------------------------
 rdsdb      |        1 | t           | t        | t         | ******** |          | 
 myuser     |      100 | t           | t        | f         | ******** |          | 
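
The attempted catalog update was roughly of this form (a sketch only; my_table and new_schema are placeholder names):

-- Sketch of the attempted system-table edit; it fails with "permission denied"
-- because only rdsdb may modify pg catalog tables on Redshift.
UPDATE pg_class
SET relnamespace = (SELECT oid FROM pg_namespace WHERE nspname = 'new_schema')
WHERE relname = 'my_table';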

So does Redshift really grant no permission for that?


Solution 2

The best way to do that is to create a new table in the desired schema, and then do an INSERT ... SELECT with the data from the old table.

Then drop your current table and rename the new one with ALTER TABLE.
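
A minimal sketch of that sequence, assuming a table my_table is being moved from old_schema to new_schema (all names here are placeholders):

-- Create the replacement table in the target schema, copy the rows,
-- then drop the original and rename the copy.
CREATE TABLE new_schema.my_table_new (LIKE old_schema.my_table INCLUDING DEFAULTS);

INSERT INTO new_schema.my_table_new
SELECT * FROM old_schema.my_table;

DROP TABLE old_schema.my_table;
ALTER TABLE new_schema.my_table_new RENAME TO my_table;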

OTHER TIPS

The quickest way to do this now is as follows:

CREATE TABLE my_new_schema.my_table (LIKE my_old_schema.my_table);
ALTER TABLE my_new_schema.my_table APPEND FROM my_old_schema.my_table;
DROP TABLE my_old_schema.my_table;

In this case the data for my_old_schema.my_table is simply remapped to belong to my_new_schema.my_table, which is much faster than doing an INSERT INTO ... SELECT.

Important note: "After data is successfully appended to the target table, the source table is empty" (from AWS docs on ALTER TABLE APPEND), so be careful to run the ALTER statement only once!

Note that you may have to drop and recreate any views that depend on my_old_schema.my_table. UPDATE: if you do this regularly, you should create your views using WITH NO SCHEMA BINDING, and they will continue to point at the correct table without having to be recreated.
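
For example, a late-binding view over the moved table could be created like this (the view, schema, and column names are placeholders):

-- Late-binding view: resolved at query time, so it keeps working after
-- the table is dropped and recreated in my_new_schema.
CREATE VIEW reporting.my_table_view AS
SELECT id, some_column
FROM my_new_schema.my_table
WITH NO SCHEMA BINDING;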

You can create a new table with

CREATE TABLE schema1.tableName (LIKE schema2.tableName INCLUDING DEFAULTS);

and then copy the contents of the table from one schema to the other with an INSERT INTO ... SELECT statement,

followed by a DROP TABLE to delete the old table.
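
Put together, a sketch of that flow with the same placeholder names, moving the table from schema2 to schema1:

-- Copy the rows into the new table, then remove the old one.
INSERT INTO schema1.tableName
SELECT * FROM schema2.tableName;

DROP TABLE schema2.tableName;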

This is how I do it.

-- Drop if you have already one backup

DROP TABLE IF EXISTS TABLE_NAME_BKP CASCADE;

-- Create two backups: one to work on, which will be deleted at the end, and one as the real backup

SELECT * INTO TABLE_NAME_BKP FROM TABLE_NAME;
SELECT * INTO TABLE_NAME_4_WORK FROM TABLE_NAME;

-- We could also use the ALTER below, but it keeps the primary key constraint name, so you can't create the new table with the same constraint names

ALTER TABLE TABLE_NAME RENAME TO TABLE_NAME_4_WORK;

-- Ensure the rows were copied

SELECT COUNT(*) FROM TABLE_NAME;
SELECT COUNT(*) FROM TABLE_NAME_4_WORK; 

-- Create the new table schema

DROP TABLE IF EXISTS TABLE_NAME CASCADE;

CREATE TABLE TABLE_NAME (
   ID           varchar(36)     NOT NULL,
   OLD_COLUMN   varchar(36),
   NEW_COLUMN_1 varchar(36)
)
compound sortkey (ID, OLD_COLUMN, NEW_COLUMN_1);

ALTER TABLE TABLE_NAME
ADD CONSTRAINT PK__TAB_NAME__ID
PRIMARY KEY (ID);

-- copy data from old to new

INSERT INTO TABLE_NAME (
    ID,
    OLD_COLUMN)
SELECT
    ID,
    OLD_COLUMN
FROM TABLE_NAME_4_WORK;

-- Drop the work table TABLE_NAME_4_WORK

 DROP TABLE TABLE_NAME_4_WORK;

-- COMPARE BKP AND NEW TABLE ROW COUNTS, AND KEEP THE BKP TABLE FOR SOME TIME.

 SELECT COUNT(*) FROM TABLE_NAME_BKP;

 SELECT COUNT(*) FROM TABLE_NAME;
Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow