Question

I have a web application that manages sales, stock, and payments for a wholesale warehouse. In particular, when an order is placed, a line must be created for each ordered product with its respective quantity. Stock availability is validated at the moment the order is placed.

Consider the following two ways of validating an order:

  1. Use a BEFORE INSERT trigger on the OrderLine table that runs a SELECT on Product to verify there is enough stock.

  2. Do a SELECT ... FROM OrderLine JOIN Product WHERE quant < stock.

My question is: Which of these two alternatives is preferable, and why / for what scenario?


Solution

I only see you doing a SELECT in both variants. If you want to make sure you don't sell more than you have in store (stock), you must decrease your stock in the same transaction you place the order. In PostgreSQL 9.1 you could use a data-modifying CTE for the job:

WITH u AS (
   UPDATE product SET quant = quant - <put_order_quant_here>
   WHERE  product_id = <order_prod_id>
   AND    quant >= <put_order_quant_here>   -- only succeeds if enough stock is left
   RETURNING product_id, <put_order_quant_here> AS quant
   )
INSERT INTO order_detail (order_id, product_id, quant)
SELECT <put_order_id_here>, product_id, quant
FROM   u;                                   -- inserts nothing if the UPDATE matched no row

The UPDATE in the CTE only returns a row if the product has sufficient stock. In this case, the quantity is reduced in the same transaction, just before the order line is inserted.

Put all order details into one transaction; if any of them fails to INSERT, ROLLBACK the whole transaction.
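
For illustration, a complete order with two lines might look like the following. This is a minimal sketch: the order id 1001, the product ids 1 and 2, and the quantities are hypothetical values. The application checks the row count of each INSERT and issues ROLLBACK instead of COMMIT if any of them inserted zero rows:

BEGIN;

WITH u AS (
   UPDATE product SET quant = quant - 5
   WHERE  product_id = 1
   AND    quant >= 5
   RETURNING product_id, 5 AS quant
   )
INSERT INTO order_detail (order_id, product_id, quant)
SELECT 1001, product_id, quant
FROM   u;
-- application check: "INSERT 0 0" means not enough stock -> ROLLBACK

WITH u AS (
   UPDATE product SET quant = quant - 2
   WHERE  product_id = 2
   AND    quant >= 2
   RETURNING product_id, 2 AS quant
   )
INSERT INTO order_detail (order_id, product_id, quant)
SELECT 1001, product_id, quant
FROM   u;

COMMIT;  -- or ROLLBACK, if any line inserted zero rows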


Possible deadlocks

One more piece of advice: this scenario can easily lead to deadlocks. Say two orders come in at the same time, and both want products A and B. The first order starts by placing its order_detail for A, the second starts with B. The two transactions then block each other: each waits for the other to complete, and a deadlock ensues.

In PostgreSQL, a transaction that is stalled by locks waits for deadlock_timeout (default 1s, which I set to at least 5s on untroubled production servers) before it checks for a possible deadlock condition.
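
If you want to raise the setting as suggested, it can be changed in postgresql.conf or per session; note that deadlock_timeout can normally only be changed by a superuser:

-- in postgresql.conf:
-- deadlock_timeout = 5s

-- or per session (superuser):
SET deadlock_timeout = '5s';
SHOW deadlock_timeout;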

Once a deadlock is detected, one of the transactions is aborted and reports a deadlock exception; the other one can finish. Which one is aborted is hard to predict.

There is a simple way to avoid this kind of deadlock: always place your order_details in a consistent order, for example with products ordered by product_id. This way, the scenario above can never happen.
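
One way to enforce that consistent order (a sketch, assuming the hypothetical product ids 7 and 42 belong to the same order) is to lock all affected product rows in product_id order at the start of the transaction, before running one UPDATE + INSERT per order line as shown above:

BEGIN;

SELECT product_id
FROM   product
WHERE  product_id IN (7, 42)   -- every product of this order
ORDER  BY product_id           -- consistent lock order across concurrent transactions
FOR UPDATE;

-- ... one UPDATE + INSERT per order line, as shown above ...

COMMIT;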

OTHER TIPS

Update:

I made this post before the postgresql tag was added, but the concepts should be similar. Basically, you need to lock the record in some way to ensure that the stock value you select is still the stock value you decrease. The same transaction that locks the record should update the stock and insert into the OrderLine table. As soon as you determine that stock isn't available, roll the whole transaction back.
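
In PostgreSQL terms, a minimal sketch of that lock-then-check pattern could look like this (the table and column names follow the first answer; the ids and quantities are hypothetical):

BEGIN;

SELECT quant
FROM   product
WHERE  product_id = 7
FOR UPDATE;    -- the row lock is held until COMMIT / ROLLBACK

-- application: if the selected quant is less than the ordered quantity -> ROLLBACK

UPDATE product
SET    quant = quant - 3
WHERE  product_id = 7;

INSERT INTO order_detail (order_id, product_id, quant)
VALUES (1001, 7, 3);

COMMIT;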

If this processing is done as a batch operation by a single process, then you don't need to be concerned with locking individual records, but you should ensure that only one batch process can be running at a time.
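
In PostgreSQL, one way to guarantee a single running batch is an advisory lock; the key 42 below is an arbitrary, hypothetical constant chosen for this job:

SELECT pg_try_advisory_lock(42) AS got_lock;  -- false if another batch already holds the lock

-- run the batch only when got_lock is true ...

SELECT pg_advisory_unlock(42);                -- release the lock when the batch is done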


Look into...

  1. The WAIT clause of the SELECT ... FOR UPDATE statement (Oracle).
  2. The BULK COLLECT clause of the SELECT statement (Oracle PL/SQL).
  3. The FORALL statement (Oracle PL/SQL).

You would probably also want to ensure that every entry has enough stock, so you may want some way to aggregate whether stock is available for all lines before committing.
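
A hedged sketch of such an aggregate check in PostgreSQL, reusing the product table from the first answer; the requested lines are supplied as hypothetical literal VALUES here:

SELECT bool_and(p.quant >= r.quant) AS all_in_stock,    -- true only if every line has enough stock
       count(*) = 2                 AS all_lines_found  -- guard: a missing product_id would drop out of the join
FROM   (VALUES (7, 3), (42, 5)) AS r(product_id, quant) -- (product_id, requested quantity)
JOIN   product p USING (product_id);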

Licensed under: CC-BY-SA with attribution