Queueing MySQL record inserts to avoid over-subscription of a related resource … table locking?

dba.stackexchange https://dba.stackexchange.com/questions/285699

  •  16-03-2021

Question

Given a simplified hypothetical of seats in a lifeboat, if I have the following setup with a lifeboats table and a seats table where each record is one occupied seat in the given lifeboat:

CREATE TABLE lifeboats (
  id INT UNSIGNED NOT NULL,
  total_seats TINYINT UNSIGNED NOT NULL,
  PRIMARY KEY (id));

INSERT INTO lifeboats (id, total_seats) VALUES (1, 3);
INSERT INTO lifeboats (id, total_seats) VALUES (2, 5);

CREATE TABLE seats (
  lifeboat_id INT UNSIGNED NOT NULL);

INSERT INTO seats (lifeboat_id) VALUES (1);
INSERT INTO seats (lifeboat_id) VALUES (1);
INSERT INTO seats (lifeboat_id) VALUES (1);
INSERT INTO seats (lifeboat_id) VALUES (2);

I can find lifeboats with available seats by querying:

SELECT 
    l.id, l.total_seats, COUNT(s.lifeboat_id) AS seats_taken
FROM
    lifeboats AS l
        LEFT JOIN
    seats AS s ON s.lifeboat_id = l.id
GROUP BY l.id
HAVING COUNT(s.lifeboat_id) < l.total_seats

What is the best way to ensure 2 clients do not grab the last seat in a lifeboat without implementing some coordinating process queue?

My only idea (assuming I'm trying to grab a seat in lifeboat 2) is to go LOCK TABLE rambo, like:

LOCK TABLE seats WRITE, lifeboats AS l READ, seats AS s READ;

INSERT INTO seats (lifeboat_id)
SELECT 
    id
FROM
    (SELECT 
        l.id, l.total_seats, COUNT(s.lifeboat_id) AS seats_taken
    FROM
        lifeboats AS l
    LEFT JOIN seats AS s ON s.lifeboat_id = l.id
    WHERE l.id = 2
    GROUP BY l.id
    HAVING COUNT(s.lifeboat_id) < l.total_seats) AS still_available;

UNLOCK TABLES;

but this is not very elegant, needless to say.

(My environment is MySQL8/InnoDB)

UPDATE ... Another go:

I've been called out for giving a bad example. The question is really just:

For a given table, how would you best limit (to X) the number of records inserted with a given value Y?

The process receives the limit X and the value Y; you query the existing records where value = Y to check whether you are still under the limit X, and if so you insert the record.

But obviously you risk 2 people grabbing the "last" record unless you ... do something .... but what? (I thought the lifeboat analogy was actually a good one!)

Idea one: Write-lock the table before beginning the process, forcing other processes to wait. But this stops everybody ... including processes working with a different value Y.

Idea/Question 2: If I have a second table t2 containing the unique set of all Y values, and my "count of Y" query joins in t2 and adds FOR UPDATE OF t2, will the write lock placed on the Y row in t2 effectively force other processes with value = Y to wait until the current one completes?
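In terms of the lifeboats/seats schema above (with lifeboats playing the role of t2), idea 2 would look roughly like this sketch: every inserter first takes a row lock on the parent row, which serializes the count-then-insert per lifeboat:

```sql
START TRANSACTION;

-- Lock the parent row for this Y value; other sessions targeting
-- the same lifeboat block here until we COMMIT or ROLLBACK.
SELECT total_seats FROM lifeboats WHERE id = 2 FOR UPDATE;

-- Safe to count now: no concurrent insert for lifeboat 2 can slip in
-- between this check and our own insert.
INSERT INTO seats (lifeboat_id)
SELECT 2
WHERE (SELECT COUNT(*) FROM seats WHERE lifeboat_id = 2) <
      (SELECT total_seats FROM lifeboats WHERE id = 2);

COMMIT;
```

The application then checks the affected-row count of the INSERT to see whether it actually got a seat.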


Solution

SELECT ... FOR UPDATE can't lock rows that don't exist yet, so using it on the seats table cannot prevent the insertion of new rows. If you use that approach, the lock has to be taken on the lifeboats table.

Since customers will be in a hurry to find a lifeboat with available seats, it is better to drop the slow COUNT() and add an occupied_seats column to the lifeboats table.
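For example (a sketch; the column name and backfill are illustrative):

```sql
ALTER TABLE lifeboats
  ADD COLUMN occupied_seats TINYINT UNSIGNED NOT NULL DEFAULT 0;

-- Backfill the counter from the existing seats rows.
UPDATE lifeboats l
SET l.occupied_seats = (SELECT COUNT(*)
                        FROM seats s
                        WHERE s.lifeboat_id = l.id);
```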

This column should be maintained by BEFORE INSERT and BEFORE DELETE triggers on seats that run an UPDATE on the corresponding row in lifeboats. That UPDATE automatically locks the row, which handles your concurrency issue.

This trigger should raise an error (use the SIGNAL statement) if the insert would exceed the available seats.
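A sketch of such a trigger pair (trigger names and the error message are my own):

```sql
DELIMITER //

CREATE TRIGGER seats_bi BEFORE INSERT ON seats
FOR EACH ROW
BEGIN
  -- Locks the lifeboat row; matches no row when the boat is full.
  UPDATE lifeboats
     SET occupied_seats = occupied_seats + 1
   WHERE id = NEW.lifeboat_id
     AND occupied_seats < total_seats;

  IF ROW_COUNT() = 0 THEN
    SIGNAL SQLSTATE '45000'
      SET MESSAGE_TEXT = 'Lifeboat is full (or does not exist)';
  END IF;
END//

CREATE TRIGGER seats_bd BEFORE DELETE ON seats
FOR EACH ROW
  UPDATE lifeboats
     SET occupied_seats = occupied_seats - 1
   WHERE id = OLD.lifeboat_id//

DELIMITER ;
```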

This can be done in a semi lock-free manner by doing

UPDATE lifeboats SET occupied_seats = occupied_seats + 1
WHERE id = ... AND occupied_seats < total_seats;

then testing whether exactly one row was updated, which saves the SELECT ... FOR UPDATE. If no row was updated, the lifeboat is full: throw an error. If one was, the row is now locked, so COMMIT quickly.
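From the application side the flow would look like this sketch (using lifeboat 2 as the example):

```sql
START TRANSACTION;

-- Conditional increment: matches no row when the boat is full.
UPDATE lifeboats
   SET occupied_seats = occupied_seats + 1
 WHERE id = 2
   AND occupied_seats < total_seats;

-- The client checks the affected-row count of the UPDATE:
--   1 row: seat reserved and the lifeboat row is now locked,
--          so insert the seat and COMMIT quickly.
--   0 rows: boat is full; ROLLBACK and report an error instead.
INSERT INTO seats (lifeboat_id) VALUES (2);

COMMIT;  -- release the row lock as soon as possible
```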

Licensed under: CC-BY-SA with attribution
Not affiliated with dba.stackexchange