Optimal way to ignore duplicate inserts? [duplicate]
31-10-2019
Background
This problem relates to ignoring duplicate inserts using PostgreSQL 9.2 or greater. The reason I ask is because of this code:
-- Ignores duplicates.
INSERT INTO
  db_table (tbl_column_1, tbl_column_2)
SELECT
  unnested_column,
  param_association
FROM
  unnest( param_array_ids ) AS unnested_column;
The code is unencumbered by checks for existing values. (In this particular situation, the user does not care about errors from inserting duplicates -- the insertion should "just work".) Adding explicit checks for duplicates here would complicate the code.
Problem
In PostgreSQL, I have found a few ways to ignore duplicate inserts.
Ignore Duplicates #1
Create a transaction that catches unique constraint violations, taking no action:
-- Inside a PL/pgSQL function or DO block:
BEGIN
  INSERT INTO db_table (tbl_column) VALUES (v_tbl_column);
EXCEPTION WHEN unique_violation THEN
  -- Ignore duplicate inserts.
  NULL;
END;
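For context, a minimal runnable sketch of this approach, assuming a hypothetical table db_table with a unique constraint on a single integer column tbl_column (PostgreSQL 9.0+ for DO blocks):

```sql
CREATE TABLE IF NOT EXISTS db_table (tbl_column integer UNIQUE);

-- Running this block twice leaves exactly one row: the second run's
-- unique_violation is trapped and silently discarded.
DO $$
BEGIN
  INSERT INTO db_table (tbl_column) VALUES (1);
EXCEPTION WHEN unique_violation THEN
  NULL;  -- ignore the duplicate insert
END;
$$;
```

Note that an exception rolls back all work done inside that BEGIN ... END block, so to keep the non-duplicate rows of a multi-row insert, each insert needs its own block (or its own loop iteration).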
Ignore Duplicates #2
Create a rule to ignore duplicates on a given table:
CREATE OR REPLACE RULE db_table_ignore_duplicate_inserts AS
ON INSERT TO db_table
WHERE (EXISTS ( SELECT 1
FROM db_table
WHERE db_table.tbl_column = NEW.tbl_column)) DO INSTEAD NOTHING;
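The rule above can be exercised like this (a sketch assuming the same hypothetical db_table with a single tbl_column column):

```sql
CREATE TABLE IF NOT EXISTS db_table (tbl_column integer UNIQUE);

CREATE OR REPLACE RULE db_table_ignore_duplicate_inserts AS
  ON INSERT TO db_table
  WHERE EXISTS (SELECT 1
                FROM db_table
                WHERE db_table.tbl_column = NEW.tbl_column)
  DO INSTEAD NOTHING;

INSERT INTO db_table (tbl_column) VALUES (1);  -- inserted
INSERT INTO db_table (tbl_column) VALUES (1);  -- rewritten to a no-op by the rule
```

One caveat of the rule approach: it only suppresses duplicates that already exist in the table when the statement runs, so two duplicate rows arriving in the same multi-row INSERT (or from concurrent sessions) can still collide on the unique constraint.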
Questions
My questions are mostly academic:
- Which method is most efficient?
- Which method is most maintainable, and why?
- What is the standard way to ignore insert duplication errors with PostgreSQL?
- Is there a technically more efficient way to ignore duplicate inserts; if so, what is it?
Thank you!