Question

I've been handed a database that's stuck in a weird state. At some indeterminate time in the past, I ended up in a situation where I had duplicate rows in the same table with the same primary key:

=> \d my_table
Table "public.my_table"
       Column       |          Type           | Modifiers 
--------------------+-------------------------+-----------
 id                 | bigint                  | not null
 some_data          | bigint                  | 
 a_string           | character varying(1024) | not null
Indexes:
"my_table_pkey" PRIMARY KEY, btree (id)

=> SELECT id, count(*) FROM my_table GROUP BY id HAVING count(*) > 1 ORDER BY id;
-- 50-some results, with non-consecutive ids.

I have no idea how the database got into this state, but I want to get out of it safely. If, for each duplicated primary key, I execute a query of the form:

DELETE FROM my_table WHERE id = "a_duplicated_row" LIMIT 1;

Is it only going to delete one row from the table, or is it going to delete both rows with the given primary key?


Solution

Alas, PostgreSQL does not yet implement LIMIT for DELETE or UPDATE. If the rows are indistinguishable in every other way, you will need to carefully use the hidden ctid column to break ties, as discussed here. Alternatively, create a new table by selecting the distinct tuples from the existing one, then rename it into place.
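
For concreteness, here is a minimal sketch of both options, reusing the table and column names from your question. The my_table_dedup and my_table_old names are just placeholders I made up; run everything inside a transaction and verify the result before committing.

-- Option 1: keep one physical row per id (identified by its ctid) and
-- delete the other copies. Only tid equality is used, so this works even
-- on older PostgreSQL versions.
BEGIN;

DELETE FROM my_table
WHERE ctid IN (
    SELECT ctid
    FROM (SELECT ctid,
                 row_number() OVER (PARTITION BY id) AS rn
          FROM my_table) AS numbered
    WHERE rn > 1              -- every copy after the first for each id
);

-- confirm the duplicates are gone before committing
SELECT id, count(*) FROM my_table GROUP BY id HAVING count(*) > 1;

COMMIT;                        -- or ROLLBACK if anything looks wrong

-- Option 2: rebuild the table from distinct rows and swap names.
-- Only works if the duplicate rows are identical in every column.
BEGIN;

CREATE TABLE my_table_dedup AS
    SELECT DISTINCT * FROM my_table;

ALTER TABLE my_table       RENAME TO my_table_old;
ALTER TABLE my_table_dedup RENAME TO my_table;

-- CREATE TABLE AS does not copy constraints, defaults, or indexes,
-- so recreate the primary key (and any other constraints) afterwards.
ALTER TABLE my_table ADD PRIMARY KEY (id);

COMMIT;
-- once you are satisfied: DROP TABLE my_table_old;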
