Question

I'm currently using Postgres 12 on AWS RDS and am using their default configuration for the database.

I use a gem called PgHero that advises me to use PGTune to improve the efficiency of the database. After inserting my inputs on https://pgtune.leopard.in.ua/

# DB Version: 12
# OS Type: linux
# DB Type: web
# Total Memory (RAM): 7 GB
# CPUs num: 2
# Data Storage: ssd

It gives me the advice to update my DB configuration to:

max_connections = 200  # AWS has: LEAST({DBInstanceClassMemory/9531392},5000)
shared_buffers = 1792MB  # AWS has {DBInstanceClassMemory/32768}
effective_cache_size = 5376MB # AWS has {DBInstanceClassMemory/16384}
maintenance_work_mem = 448MB  # AWS has GREATEST({DBInstanceClassMemory*1024/63963136},65536)
checkpoint_completion_target = 0.7  # AWS has 0.9
wal_buffers = 16MB # AWS is not set
default_statistics_target = 100  # AWS is not set
random_page_cost = 1.1  # AWS is not set
effective_io_concurrency = 200 # AWS is not set
work_mem = 9175kB # AWS is not set
min_wal_size = 1GB  # AWS is 192MB
max_wal_size = 4GB  # AWS is 2048MB
max_worker_processes = 2 # AWS is 8
max_parallel_workers_per_gather = 1 # AWS is not set
max_parallel_workers = 2 # AWS is not set
max_parallel_maintenance_workers = 1 # AWS is not set

I'm not sure whether to apply the PGTune values or just leave the RDS defaults. How does one typically determine whether performance tuning is needed? Am I in over my head?


Solution

The correct configuration depends on your hardware, your database size and your workload, none of which we know.

Tune conservatively at first: set shared_buffers and work_mem to reasonable values based on your database size and workload, and set maintenance_work_mem, effective_cache_size, effective_io_concurrency and random_page_cost according to your hardware. Finally, configure logging so you can see which queries are actually slow.
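Note that on RDS you cannot edit postgresql.conf or run ALTER SYSTEM; parameters are changed through a custom DB parameter group attached to the instance. A sketch with the AWS CLI, assuming a hypothetical parameter group named my-pg12-params (the parameter values shown are the PGTune suggestions from the question, not recommendations):

```shell
# Dynamic parameters take effect without a reboot (ApplyMethod=immediate).
# work_mem is specified in kB; log_min_duration_statement is in ms.
aws rds modify-db-parameter-group \
  --db-parameter-group-name my-pg12-params \
  --parameters \
    "ParameterName=random_page_cost,ParameterValue=1.1,ApplyMethod=immediate" \
    "ParameterName=effective_io_concurrency,ParameterValue=200,ApplyMethod=immediate" \
    "ParameterName=work_mem,ParameterValue=9175,ApplyMethod=immediate" \
    "ParameterName=log_min_duration_statement,ParameterValue=1000,ApplyMethod=immediate"

# shared_buffers is a static parameter on RDS, measured in 8 kB pages
# (1792 MB = 229376 pages), so it only applies after the instance reboots.
aws rds modify-db-parameter-group \
  --db-parameter-group-name my-pg12-params \
  --parameters \
    "ParameterName=shared_buffers,ParameterValue=229376,ApplyMethod=pending-reboot"
```

Setting log_min_duration_statement=1000 logs every statement slower than one second, which is a simple starting point for the logging step above.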

Adjust these and other parameters based on the findings from a realistic load test.

Licensed under: CC-BY-SA with attribution
Not affiliated with dba.stackexchange