What should be the strategy to read from many tables having millions rows each in postgresql?

StackOverflow https://stackoverflow.com/questions/21107579

27-09-2022

Problem

I have the following scenario while using PostgreSQL: 100 tables, each with roughly 10 million rows. All the tables have the same schema; for example, each table contains one day's call records for a company, so 100 tables contain the call records for 100 days.

I want to run the following type of query against these tables: for each column of each table, get the count of records that have a NULL value in that column.
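As a minimal sketch of that per-column null count for a single table (the table and column names here are illustrative, not from the question), count(col) skips NULLs, so subtracting it from count(*) gives the number of NULL rows per column in one scan:

    -- Hypothetical table calls_2022_09_27 with columns caller and duration.
    -- count(*) counts all rows; count(col) counts only non-NULL values,
    -- so the difference is the NULL count for that column.
    SELECT count(*) - count(caller)   AS caller_nulls,
           count(*) - count(duration) AS duration_nulls
    FROM   calls_2022_09_27;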

So, given the above scenario, what would be the major optimizations to the table structure? How should I prepare my query, and is there an efficient way of querying for such cases?


Solution

If you're using Postgres table inheritance, a simple select count(*) from calls where foo is null will work fine. It will use an index on foo, provided rows where foo is null aren't too common.
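A minimal sketch of that layout, assuming a parent table named calls with one inherited child table per day (all names and columns here are illustrative):

    -- Parent table holds the shared schema; it stores no rows itself.
    CREATE TABLE calls (
        call_id  bigint,
        caller   text,
        duration integer
    );

    -- One child per day inherits the parent's columns.
    CREATE TABLE calls_2022_09_27 () INHERITS (calls);

    -- Postgres b-tree indexes include NULL entries, so an index on the
    -- column can serve "caller IS NULL" lookups when NULL rows are rare.
    CREATE INDEX ON calls_2022_09_27 (caller);

    -- A query against the parent automatically scans every child table.
    SELECT count(*) FROM calls WHERE caller IS NULL;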

Internally, that will do what you would otherwise do manually without table inheritance, i.e. union all the results from each individual child table.
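For comparison, the manual equivalent without inheritance looks like the sketch below, with two of the 100 tables shown; the pattern extends one branch per table:

    -- Sum the per-table NULL counts across all daily tables.
    SELECT sum(n) AS total_nulls
    FROM (
        SELECT count(*) AS n FROM calls_2022_09_27 WHERE caller IS NULL
        UNION ALL
        SELECT count(*) AS n FROM calls_2022_09_28 WHERE caller IS NULL
    ) AS per_table;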

If you need to run this repeatedly, maintain the count in memcached or in another table.
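One way to maintain the count in another table, as a rough sketch (the summary table and its columns are assumptions, refreshed on whatever schedule fits, e.g. from cron):

    -- Summary table holding one row per (table, column) pair.
    CREATE TABLE null_counts (
        table_name  text,
        column_name text,
        null_count  bigint,
        refreshed   timestamptz DEFAULT now()
    );

    -- Periodic refresh: recompute the NULL count and record it.
    INSERT INTO null_counts (table_name, column_name, null_count)
    SELECT 'calls_2022_09_27', 'caller', count(*) - count(caller)
    FROM   calls_2022_09_27;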

License: CC-BY-SA with attribution
Not affiliated with StackOverflow