Question

Let's say a customer places an order with a certain quantity, and I want to insert this information into table1 initially. At the same time, I want to generate a series of rows in table2 whose values combine the newly generated ID from table1 with a sequence number. This has to happen for every new order. Like:

Table 1

| id | qty |
|----|-----|
| A1 | 25  |
| A2 | 32  |

Generates:

Table 2

| sub_id |
|--------|
| A1_1   |
| A1_2   |
| ...    |
| A1_25  |
| A2_1   |
| A2_2   |
| ...    |
| A2_32  |
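In rough DDL terms, the tables could look something like this (column types here are just for illustration):

CREATE TABLE table1 (
  id  text PRIMARY KEY,  -- order ID, e.g. 'A1'
  qty int NOT NULL       -- ordered quantity
);

CREATE TABLE table2 (
  sub_id text PRIMARY KEY  -- e.g. 'A1_1' .. 'A1_25'
);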

Solution

You can achieve your goal easily with an AFTER INSERT trigger. First you need to create a trigger function that reads the newly added row from table1, then generates and inserts a series of IDs into table2.

CREATE OR REPLACE FUNCTION after_insert_table1()
  RETURNS trigger AS
$$
BEGIN
    -- build one sub_id per unit of quantity, e.g. A1_1 .. A1_25
    INSERT INTO table2 (sub_id)
    SELECT CONCAT(NEW.id, '_', generate_series(1, NEW.qty));
    RETURN NEW;
END;
$$
LANGUAGE plpgsql;

Then you need to create the trigger on table1 which will call the trigger function.

CREATE TRIGGER InsertIntoTable2 AFTER INSERT
ON table1
FOR EACH ROW
EXECUTE PROCEDURE after_insert_table1();

This trigger will insert five rows with unique sub_id values into table2 for the statement insert into table1 values ('A1', 5);

select * from table1

    | id  |  qty |
    |-----|------|
    | A1  |  5   |

select * from table2

| sub_id |
|--------|
| A1_1   |
| A1_2   |
| A1_3   |
| A1_4   |
| A1_5   |
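Every new order is handled the same way, because the trigger fires once per inserted row. For example (the values below are chosen just for illustration):

insert into table1 values ('A2', 3);
-- table2 now additionally contains A2_1, A2_2 and A2_3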

OTHER TIPS

In Postgres you can use a CTE and generate_series to build your desired result:

CREATE TABLE table1("id" VARCHAR(3), "qty" int);
INSERT INTO table1 VALUES ('A1', 25), ('A2', 32);

CREATE TABLE table2 AS
WITH sub_id_build AS (
  SELECT CONCAT("id", '_', generate_series(1, "qty")) AS sub_id
  FROM table1
)
SELECT sub_id FROM sub_id_build;
57 rows affected
SELECT * FROM table2
| sub_id |
| :----- |
| A1_1   |
| A1_2   |
| A1_3   |
| A1_4   |
| A1_5   |
| A1_6   |
| A1_7   |
| A1_8   |
| A1_9   |
| A1_10  |
| A1_11  |
| A1_12  |
| A1_13  |
| A1_14  |
| A1_15  |
| A1_16  |
| A1_17  |
| A1_18  |
| A1_19  |
| A1_20  |
| A1_21  |
| A1_22  |
| A1_23  |
| A1_24  |
| A1_25  |
| A2_1   |
| A2_2   |
| A2_3   |
| A2_4   |
| A2_5   |
| A2_6   |
| A2_7   |
| A2_8   |
| A2_9   |
| A2_10  |
| A2_11  |
| A2_12  |
| A2_13  |
| A2_14  |
| A2_15  |
| A2_16  |
| A2_17  |
| A2_18  |
| A2_19  |
| A2_20  |
| A2_21  |
| A2_22  |
| A2_23  |
| A2_24  |
| A2_25  |
| A2_26  |
| A2_27  |
| A2_28  |
| A2_29  |
| A2_30  |
| A2_31  |
| A2_32  |

db<>fiddle here

Consider two separate columns in "Table 2", like:

CREATE TABLE tbl1 (
  t1_id text PRIMARY KEY
, qty   int NOT NULL CHECK (qty BETWEEN 1 AND 1000)  -- CHECK to prevent abuse
);

CREATE TABLE tbl2 (
  t1_id  text  NOT NULL REFERENCES tbl1              -- optional FK
, sub_id int NOT NULL                                -- separate column !
, PRIMARY KEY(t1_id, sub_id)                         -- optional PK
);

Either way, a data-modifying CTE in combination with generate_series() does the trick:

WITH ins1 AS (
   INSERT INTO tbl1(t1_id, qty)
   VALUES ('A1',  3)  -- input row once
   RETURNING *  -- !
   )
INSERT INTO tbl2(t1_id, sub_id)
SELECT i.t1_id, sub_id
FROM   ins1 i, generate_series(1, i.qty) sub_id;
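With the example values above ('A1', qty 3), tbl2 should then contain three rows; a quick check (query added here for illustration):

SELECT * FROM tbl2 ORDER BY t1_id, sub_id;

| t1_id | sub_id |
| :---- | -----: |
| A1    | 1      |
| A1    | 2      |
| A1    | 3      |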

db<>fiddle here

You can still concatenate if you insist (for your original table layout):

WITH ins1 AS (
   INSERT ...
   )
INSERT INTO tbl2(sub_id)
SELECT i.t1_id || '_' || sub_id
FROM   ins1 i, generate_series(1, i.qty) sub_id;

But I wouldn't.

OR add a trigger

Trigger function:

CREATE OR REPLACE FUNCTION tbl1_insaft()
  RETURNS trigger
  LANGUAGE plpgsql AS
$func$
BEGIN
   INSERT INTO tbl2(t1_id, sub_id)
   SELECT NEW.t1_id, sub_id
   FROM   generate_series(1, NEW.qty) sub_id;

   RETURN NULL;  -- AFTER trigger can return NULL
END
$func$;

Trigger:

CREATE TRIGGER insaft
AFTER INSERT ON tbl1
FOR EACH ROW EXECUTE PROCEDURE tbl1_insaft();

Then just:

INSERT INTO tbl1(t1_id, qty) VALUES ('A2',  4);
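To confirm the trigger fired, a quick check (query added here for illustration):

SELECT * FROM tbl2 WHERE t1_id = 'A2' ORDER BY sub_id;
-- expected: (A2,1), (A2,2), (A2,3), (A2,4)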

db<>fiddle here


This is likely best handled in application code, since it's a procedural goal you're trying to accomplish rather than a relational one: the approach likely requires iteration.

That being said, you can possibly accomplish it in a relational way via Dynamic SQL: if you had a numbers table, you could select the first N rows from it, each prepended with your id. The Dynamic SQL comes into play as a way to build the query with N = your qty field.

E.g. the query that your Dynamic SQL could generate would look like this (with id and qty filled in):

SELECT CONCAT('A1', '_', Number) AS sub_id
FROM NumbersTable
ORDER BY Number
LIMIT 25
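
For completeness, here is a minimal sketch of how that Dynamic SQL could be built and executed in PL/pgSQL. NumbersTable(Number) is assumed to exist and to be pre-populated with enough integers, and the function name and signature are invented for this example:

CREATE OR REPLACE FUNCTION build_sub_ids(p_id text, p_qty int)
  RETURNS SETOF text
  LANGUAGE plpgsql AS
$func$
BEGIN
   -- $1 / $2 are bound to p_id / p_qty via the USING clause
   RETURN QUERY EXECUTE
      'SELECT CONCAT($1, ''_'', Number) AS sub_id
       FROM   NumbersTable
       ORDER  BY Number
       LIMIT  $2'
   USING p_id, p_qty;
END
$func$;

SELECT * FROM build_sub_ids('A1', 25);  -- returns A1_1 .. A1_25

In Postgres itself, generate_series() makes the physical numbers table unnecessary, as the other answers show.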
Licensed under: CC-BY-SA with attribution