Question

I'm working on an Ubuntu server (320 GB SSD, 6 cores, 16 GB RAM), but Postgres is having issues: some queries take a very long time to run, and parallel queries are increasing the server load a lot.

Some Postgres conf:

max_connections = 150
shared_buffers = 4GB
effective_cache_size = 12GB
work_mem = 109MB
maintenance_work_mem = 1GB
checkpoint_completion_target = 0.7
wal_buffers = 16MB
effective_io_concurrency = 200

seq_page_cost = 1
random_page_cost = 1.1
cpu_index_tuple_cost = 0.030
cpu_operator_cost = 0.0150
cpu_tuple_cost = 0.06

#parallel_tuple_cost = 0.1      # same scale as above
#parallel_setup_cost = 1000.0   # same scale as above
#min_parallel_table_scan_size = 8MB
#min_parallel_index_scan_size = 512kB
max_worker_processes = 6
max_parallel_workers_per_gather = 3
max_parallel_workers = 6

The server is on Linode, and I gathered some info:

#Larger bs
$ dd bs=16k count=10240 iflag=direct if=./test_file of=/dev/null;
167772160 bytes (168 MB, 160 MiB) copied, 1,01442 s, 165 MB/s

#Smaller bs
$ dd bs=2048 count=80000 iflag=direct if=./arquivo_teste of=/dev/null;
163840000 bytes (164 MB, 156 MiB) copied, 9,91975 s, 16,5 MB/s

lsblk -o NAME,MOUNTPOINT,MODEL,ROTA
sda / QEMU HARDDISK 1

Linux myClientName 5.7.6-x86_64-linode136 #1 SMP PREEMPT Wed Jun 24 15:41:07 EDT 2020 x86_64 x86_64 x86_64 GNU/Linux

Not sure if the dd test will help, but I find it interesting how large the difference is between the small-bs and the big-bs tests.

If I'm not wrong, QEMU indicates a virtualized system, and the disk may still be an SSD despite ROTA = 1.

  • Is there any way to be sure how many parallel workers I can have on a Postgres server?

Solution

The limit of concurrent parallel workers for the whole cluster is max_parallel_workers, which must be ≤ max_worker_processes. The limit of parallel workers per query is max_parallel_workers_per_gather.
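You can check these limits, and see how many workers the planner actually assigns to a query, directly from psql. This is a minimal sketch; `some_large_table` is a hypothetical table name standing in for one of your slow queries:

```sql
-- Inspect the current limits (same parameters as in postgresql.conf)
SHOW max_worker_processes;            -- cluster-wide cap on background workers
SHOW max_parallel_workers;            -- cap on parallel workers; must be <= the above
SHOW max_parallel_workers_per_gather; -- per-query cap (per Gather node)

-- EXPLAIN shows how many workers the planner plans to use ("Workers Planned");
-- EXPLAIN (ANALYZE) additionally reports how many it actually got ("Workers Launched").
EXPLAIN (ANALYZE) SELECT count(*) FROM some_large_table;  -- hypothetical table
```

If "Workers Launched" is lower than "Workers Planned", the cluster-wide pool (`max_parallel_workers`) was exhausted by concurrent queries at execution time.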

Adjusting these parameters, you can start as many workers as you please. Just keep in mind that performance will degrade once you are using more parallel workers than your CPU and I/O can handle. Also, don't forget that there may be other queries running, so don't over-allocate resources.
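If you want to experiment without touching postgresql.conf, `max_parallel_workers_per_gather` can be changed per session. A sketch, with an illustrative value:

```sql
-- Raise the per-query limit for this session only (6 is just an example value)
SET max_parallel_workers_per_gather = 6;

-- ... run the heavy query and compare timings ...

-- Return to the configured default
RESET max_parallel_workers_per_gather;
```

Session-level `SET` is handy for testing because a bad choice affects only your connection, not the whole cluster.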

Don't forget alternative approaches: perhaps the query can be rewritten or indexed to be faster.
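For example, a query that currently triggers a parallel sequential scan may not need parallelism at all once a suitable index exists. The table and column names below are hypothetical:

```sql
-- Hypothetical example: index a frequently filtered column.
-- CONCURRENTLY avoids blocking writes while the index builds.
CREATE INDEX CONCURRENTLY idx_orders_created_at ON orders (created_at);

-- Re-check the plan: a parallel seq scan may become a plain index scan
EXPLAIN SELECT * FROM orders WHERE created_at > now() - interval '1 day';
```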

Licensed under: CC-BY-SA with attribution
Not affiliated with dba.stackexchange