Question

I have very little experience with databases and have run into a problem with a trigger causing a deadlock in a Firebird 2.5 database.

There are two tables in the database. When items are added to or removed from the ITEMS table, a trigger updates STATS.ITEMCOUNT and STATS.SIZE. There are four triggers in total: two incrementing and two decrementing.

The STATS table has a single row and is used to track what is in the database. Am I doing this the wrong way? And if not, is there a workaround?

A deadlock occurs within the first few minutes of starting the application.

UPDATE1: Posted all triggers.

UPDATE2: Posted ExecuteNonQuery Method

UPDATE3: Deadlocks still occur even when using the view method kindly suggested by pilcrow. In fact, I even tried using a stored procedure, which deadlocked again. Wrapping the SELECT statement in a transaction also failed because the Firebird ADO.NET provider does not support parallel transactions.

public void ExecuteNonQuery(string NonQuery)
{
    try
    {
        // Runs the statement on the shared connection and disposes the
        // command explicitly rather than with a using block.
        FbCommand FBC = new FbCommand(NonQuery, DBConnection);
        FBC.ExecuteNonQuery();
        FBC.Dispose();
    }
    catch (FbException e)
    {
        Log.FatalException("Database NonQuery Error", e);
    }
}

Database

** Tables **

CREATE TABLE ITEMS (
    ID              ID NOT NULL /* ID = VARCHAR(36) NOT NULL */,
    EXPIRYTIME      EXPIRYTIME NOT NULL /* EXPIRYTIME = BIGINT NOT NULL */,
    ITEMSIZE        ITEMSIZE /* ITEMSIZE = BIGINT NOT NULL */,
    ACCESSCOUNT     ACCESSCOUNT DEFAULT 1 NOT NULL /* ACCESSCOUNT = INTEGER DEFAULT 1 NOT NULL */,
    LASTACCESSTIME  LASTACCESSTIME /* LASTACCESSTIME = TIMESTAMP NOT NULL */
);


CREATE TABLE STATS (
    INSTANCE   SMALLINT,
    SIZE       BIGINT DEFAULT 0,
    ITEMCOUNT  BIGINT DEFAULT 0,
    HITS       BIGINT DEFAULT 0,
    MISSES     BIGINT DEFAULT 0
);

** Triggers **

/* Trigger: TRG_INCREMENT_ITEMCOUNT_STATS */
CREATE OR ALTER TRIGGER TRG_INCREMENT_ITEMCOUNT_STATS FOR ITEMS
ACTIVE AFTER INSERT POSITION 1
AS
begin
 UPDATE STATS SET ITEMCOUNT = ITEMCOUNT + 1 WHERE INSTANCE = '0';
end

/* Trigger: TRG_DECREMENT_ITEMCOUNT_STATS */
CREATE OR ALTER TRIGGER TRG_DECREMENT_ITEMCOUNT_STATS FOR ITEMS
ACTIVE AFTER DELETE POSITION 2
AS
begin
UPDATE STATS SET ITEMCOUNT = ITEMCOUNT - 1 WHERE INSTANCE = '0';
end

/* Trigger: TRG_INCREMENT_HITS_STATS */
CREATE OR ALTER TRIGGER TRG_INCREMENT_HITS_STATS FOR ITEMS
ACTIVE AFTER UPDATE POSITION 3
AS
begin
UPDATE STATS SET HITS = HITS + 1 WHERE INSTANCE = '0';
end

/* Trigger: TRG_INCREMENT_SIZE_STATS */
CREATE OR ALTER TRIGGER TRG_INCREMENT_SIZE_STATS FOR ITEMS
ACTIVE AFTER INSERT POSITION 4
AS
BEGIN
  UPDATE STATS SET SIZE = SIZE + NEW.ITEMSIZE WHERE INSTANCE = 0;
END

/* Trigger: TRG_DECREMENT_CACHESIZE_STATS */
CREATE OR ALTER TRIGGER TRG_DECREMENT_CACHESIZE_STATS FOR ITEMS
ACTIVE AFTER DELETE POSITION 5
AS
BEGIN
  UPDATE STATS SET SIZE = SIZE - OLD.ITEMSIZE WHERE INSTANCE = 0;
END

Solution 4

Fixed by using

.IsolationLevel = IsolationLevel.ReadUncommitted;

in the connection string.
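
For reference, here is a minimal C# sketch of one place the Firebird ADO.NET provider accepts an isolation level, namely when beginning a transaction through the standard ADO.NET API. The wrapper method name is illustrative, DBConnection is the open FbConnection from the question's helper, and as far as I know the provider maps levels Firebird does not natively offer to the closest supported mode; whether your provider version also honours an isolation-level entry in the connection string itself is not shown in the original post.

using System.Data;
using FirebirdSql.Data.FirebirdClient;

// Sketch only: run a statement inside an explicitly-begun transaction that
// requests the desired isolation level, instead of relying on the default.
public void ExecuteNonQueryReadUncommitted(string nonQuery)
{
    using (FbTransaction transaction =
               DBConnection.BeginTransaction(IsolationLevel.ReadUncommitted))
    using (FbCommand command = new FbCommand(nonQuery, DBConnection, transaction))
    {
        command.ExecuteNonQuery();
        transaction.Commit();
    }
}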

OTHER TIPS

Deadlock occurs because two threads attempt to update the same row simultaneously. The simplest solution in this case would be to use a single transaction and a critical section to prevent simultaneous updates. It would require literally a couple of lines of code.
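
For illustration, a minimal C# sketch of that critical-section idea, reusing the ExecuteNonQuery helper from the question; the lock object and wrapper method are invented names, not part of the original code:

// Sketch only: serialize every statement that touches the single STATS row so
// that two threads can never update it at the same time.
private static readonly object StatsLock = new object();

public void ExecuteNonQuerySerialized(string nonQuery)
{
    lock (StatsLock)               // only one thread at a time past this point
    {
        ExecuteNonQuery(nonQuery); // the helper shown in the question
    }
}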

The other way would require a redesign of the table holding the total information.

You write:

... is there a workaround?

There is at least one other approach. Instead of using triggers to precompute aggregated counts and sizes, could you simply compute them on demand in a VIEW like so:

CREATE VIEW stats (instance, size, itemcount, hits, misses) AS
   SELECT CAST(0 AS SMALLINT),
          CAST(COALESCE(SUM(items.itemsize), 0) AS BIGINT),
          CAST(COUNT('x') AS BIGINT),
          CAST(COALESCE(SUM(items.accesscount), 0) AS BIGINT), -- just guessing here
          0                                                    -- but see below
     FROM items;

(Note: I'm guessing that your HITS are the sum of ITEMS.ACCESSCOUNT, as the UPDATE trigger and column names suggest. You don't tell us how you are recording MISSES now, but, if the application is directly incrementing STATS.MISSES at present, you could introduce a new table for that purpose, and then JOIN that table into the VIEW above.)
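
As a hedged sketch of that idea, assuming a hypothetical MISS_LOG table that the application inserts into instead of incrementing a shared counter, the view could fold the miss count in with a subselect (all names are invented for illustration):

/* Hypothetical: one row per miss, inserted by the application. */
CREATE TABLE MISS_LOG (
    MISSTIME  TIMESTAMP NOT NULL
);

/* Extend the view from above with the miss count. */
CREATE OR ALTER VIEW stats (instance, size, itemcount, hits, misses) AS
   SELECT CAST(0 AS SMALLINT),
          CAST(COALESCE(SUM(items.itemsize), 0) AS BIGINT),
          CAST(COUNT('x') AS BIGINT),
          CAST(COALESCE(SUM(items.accesscount), 0) AS BIGINT),
          (SELECT CAST(COUNT('x') AS BIGINT) FROM miss_log)
     FROM items;

/* Recording a miss is then just an insert, with no row contention. */
INSERT INTO MISS_LOG (MISSTIME) VALUES (CURRENT_TIMESTAMP);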

You will still have to commit transactions appropriately, of course. The STATS view suggested above is only as accurate as the completed, committed transactions it can see.

These problems are usually easier to solve by using a record per change with a +1 or -1, and occasionally (daily, weekly) processing the entire table to sum everything up so that there is only one record per item again. The next change is then again a +1 or -1 record, and you query the sum total.

So you would have something like:

ITEM  COUNT
item1 10
item2 10
item1 1
item2 -1
item2 -1
item1 -1

and after the scheduled merge you get:

ITEM   COUNT
item1  10
item2  8

You can then add a view which simply sums the records per item.
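
A minimal SQL sketch of that layout, with table, view, and column names invented for illustration rather than taken from the original schema:

/* Each change is appended as its own +1/-1 row, so writers never contend
   for a shared counter row. */
CREATE TABLE ITEM_COUNTS (
    ITEM   VARCHAR(36) NOT NULL,
    DELTA  BIGINT NOT NULL
);

/* The current totals are simply the sum of the deltas per item. */
CREATE VIEW ITEM_COUNT_TOTALS (ITEM, TOTAL) AS
   SELECT ITEM, SUM(DELTA)
     FROM ITEM_COUNTS
    GROUP BY ITEM;

/* A change is recorded as an insert rather than an update: */
INSERT INTO ITEM_COUNTS (ITEM, DELTA) VALUES ('item1', 1);
INSERT INTO ITEM_COUNTS (ITEM, DELTA) VALUES ('item2', -1);

The scheduled merge then replaces the accumulated rows with a single row per item holding the current total, and the view keeps working unchanged.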

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow