Problem

We have a table with 200k rows where we change some flags multiple times per day. If I have understood correctly, an UPDATE in PostgreSQL is essentially a DELETE+INSERT on disk: the old row version is marked dead and a new version is written.

I'm wondering whether this is highly inefficient for our use case. What if each tuple is large? Is the entire tuple written again?

I was thinking of moving those status flags to a separate table so that we rewrite only small tuples, reducing useless I/O. Is this a correct approach, or am I on the wrong track?
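
For concreteness, the situation looks roughly like this (table and column names are invented for the example):

```sql
-- Wide table, ~200k rows: large payload columns plus a few small flags.
CREATE TABLE items (
    id        bigint PRIMARY KEY,
    payload   jsonb,           -- large, rarely changes
    details   text,            -- large, rarely changes
    is_active boolean NOT NULL DEFAULT false,
    is_synced boolean NOT NULL DEFAULT false
);

-- Runs many times per day; does it rewrite the whole row each time?
UPDATE items SET is_synced = true WHERE id = 42;
```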


Solution

Community wiki answer:

You're basically right: an UPDATE writes a complete new version of the row, so keeping the frequently updated flags in a narrow table means far less data gets rewritten.

It will also help if you define your separate tables with a certain amount of free space (see fillfactor) so that PostgreSQL can perform heap-only tuple (HOT) updates.
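
A minimal sketch of that layout, assuming made-up table and column names and an illustrative fillfactor of 70:

```sql
-- Hypothetical narrow table holding only the frequently changed flags.
-- Leaving ~30% of each page free gives PostgreSQL room to place the new
-- row version on the same page, which is what enables HOT updates.
CREATE TABLE item_flags (
    item_id   bigint PRIMARY KEY REFERENCES items (id),
    is_active boolean NOT NULL DEFAULT false,
    is_synced boolean NOT NULL DEFAULT false
) WITH (fillfactor = 70);

-- A flag change now rewrites only this small row, not the wide items row.
UPDATE item_flags SET is_synced = true WHERE item_id = 42;
```

Keep in mind that HOT only applies when none of the updated columns are indexed, so avoid putting indexes on the flag columns themselves if you can.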

See "Increase the speed of UPDATE query using HOT UPDATE (Heap only tuple)" by Anvesh Patel.
