Question

Our setup is a bit complex, so I will simplify what our tables look like, but still try to describe the real problem:

We have a large table (items) with about 500 million rows (growing by ~1.5 million rows per day), where each item has a timestamp (among other things). We have 5 metadata tables (based on the item type) with FKs to items. These have about 100 million rows each.

When doing queries on the items table, we need to filter and sort on values in the metadata tables. This is based on filter parameters in our UI, so we might filter on things from all 5 metadata tables, or from none of them. The query will look something like this:

SELECT 
    i.id
FROM items i
LEFT JOIN itemmetadata1 im1 ON im1.itemid = i.id AND i.type = 1
LEFT JOIN itemmetadata2 im2 ON im2.itemid = i.id AND i.type = 2
LEFT JOIN itemmetadata3 im3 ON im3.itemid = i.id AND i.type = 3
LEFT JOIN itemmetadata4 im4 ON im4.itemid = i.id AND i.type = 4
LEFT JOIN itemmetadata5 im5 ON im5.itemid = i.id AND i.type = 5
WHERE
    i.timestamp > '2019-02-01'
    AND i.timestamp < '2019-02-07'
    -- These aren't always here, query is dynamically generated (based on user input)
    AND im1.somevalue = TRUE
    AND im3.anothervalue > 5
    ... etc
ORDER BY 
    -- This can also change dynamically
    im5.value DESC

This makes the query quite slow in some cases, for example when looking at a long period of time with few matching rows in the metadata tables.

So I have two questions:

1. Aggregation

Would it make sense to create a new denormalized table called aggregateditems that contains all of the item and metadata columns we need to filter and sort on? That way our query would be simplified to something like this:

SELECT 
    ai.id
FROM aggregateditems ai
WHERE
    ai.timestamp > '2019-02-01'
    AND ai.timestamp < '2019-02-07'
    -- These aren't always here, query is dynamically generated (based on user input)
    AND ai.metadata1somevalue = TRUE
    AND ai.metadata3anothervalue > 5
    ... etc
ORDER BY
    -- This can also change dynamically
    ai.metadata5value DESC

I guess this would speed things up considerably with the right indexes.
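
For example, I'm thinking of multicolumn indexes along these lines (just a sketch, using the made-up column names from the example query above):

-- Hypothetical indexes matching the filters in the example query
CREATE INDEX ON aggregateditems (timestamp);
CREATE INDEX ON aggregateditems (metadata1somevalue, timestamp);
CREATE INDEX ON aggregateditems (metadata3anothervalue, timestamp);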

So my thought is to put triggers on items and the metadata tables that update aggregateditems when the data changes. Is that a good idea, or might it kill performance?
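
For illustration, here is a minimal sketch of one such trigger (the function and column names are made up, and it assumes aggregateditems.id matches items.id):

-- Sketch: keep aggregateditems in step with itemmetadata1
-- (INSERT/UPDATE only; DELETE handling omitted)
CREATE OR REPLACE FUNCTION sync_aggregateditems_im1() RETURNS trigger AS $$
BEGIN
    UPDATE aggregateditems
       SET metadata1somevalue = NEW.somevalue
     WHERE id = NEW.itemid;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER itemmetadata1_sync
AFTER INSERT OR UPDATE ON itemmetadata1
FOR EACH ROW EXECUTE PROCEDURE sync_aggregateditems_im1();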

2. Partitioning

Creating an aggregated table will speed up queries, I think, but we'd still have very large tables with large indexes, so I guess we'd need to do some partitioning as well.

Let's say we partition items and aggregateditems on timestamp; how do we partition the metadata tables? (They don't have any timestamps.)
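
For items and aggregateditems themselves, I'm picturing declarative range partitioning on the timestamp, roughly like this (a sketch with made-up columns and monthly bounds):

-- Sketch: aggregateditems range-partitioned by timestamp (PostgreSQL 10+)
CREATE TABLE aggregateditems (
    id                    bigint NOT NULL,
    type                  int NOT NULL,
    "timestamp"           timestamptz NOT NULL,
    metadata1somevalue    boolean,
    metadata3anothervalue int,
    metadata5value        int
    -- ... other filter/sort columns
) PARTITION BY RANGE ("timestamp");

CREATE TABLE aggregateditems_2019_02 PARTITION OF aggregateditems
    FOR VALUES FROM ('2019-02-01') TO ('2019-03-01');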

Solution

I would recommend trying a single-table design. There would be groups of columns which are only relevant to a single type, meaning they would be NULL for rows of any of the other types. For the physical layout of the table, you would probably want the common columns first, then the frequently filtered-upon type-specific columns, then the infrequently/never filtered-upon type-specific columns last.
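
A rough sketch of that layout (the column names are only placeholders taken from the question's example query):

-- Sketch: single-table design, common columns first,
-- then frequently filtered type-specific columns, then the rest
CREATE TABLE items (
    id            bigint NOT NULL,
    type          int NOT NULL,
    "timestamp"   timestamptz NOT NULL,
    -- type-specific columns, NULL for rows of the other types
    somevalue     boolean,   -- type 1
    anothervalue  int,       -- type 3
    value         int,       -- type 5
    -- infrequently/never filtered columns last
    bigtextfield  text
);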

This will get rid of the problem of needing to do left joins. It will also make it easy to partition on timestamp, since you no longer have the side tables which lack timestamps. Hopefully the large text fields will get TOASTed aggressively, keeping the size of the main table down (but this is not easy to control).
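
If the table above were created with PARTITION BY RANGE ("timestamp") (declarative partitioning, PostgreSQL 10 or later), the time-based partitions could then look something like this (again a sketch, with assumed monthly bounds):

-- Sketch: monthly partitions, assuming the parent table was declared
-- with ... ) PARTITION BY RANGE ("timestamp");
CREATE TABLE items_2019_01 PARTITION OF items
    FOR VALUES FROM ('2019-01-01') TO ('2019-02-01');
CREATE TABLE items_2019_02 PARTITION OF items
    FOR VALUES FROM ('2019-02-01') TO ('2019-03-01');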

If the partitions with the most recent timestamps are the hottest ones, this will help keep their data and indexes small enough to fit in the RAM cache, and hot enough to be kept there naturally.
