Question

Our software product currently ships on Windows 7 with Postgres 8.3 as its database. On a busy site there may be 24 automated systems generating 100 rows (100 columns each) per minute, with 3-10 human clients each viewing subsets of around 1000 rows. Those subsets are retrieved all at once, then kept current by an incremental poll every minute or so that queries by pk + timestamp and pulls down any pertinent new rows. There are a few auxiliary tables, but this one table sees the bulk of the activity.
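
For context, the incremental poll each viewer runs is shaped roughly like this; the table, column names, and port are placeholders, not our real schema:

    # Illustrative only: each viewer remembers the newest timestamp (and pk) it has
    # seen and asks for anything newer about once a minute.
    psql -p 5432 -d proddb -c "
        SELECT *
          FROM activity
         WHERE updated_at > '2014-06-10 14:05:00'   -- last timestamp this client saw
         ORDER BY updated_at, id;"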

As a first step toward a limited multi-master system (to help with geographically separated teams), we implemented an upgrade to 9.3. Performance wasn't the first priority, so it wasn't really profiled. Now that release time has come, management has decided to drop 9.3 for now, citing fear of possible performance degradation and a lack of testing resources. I was sure the performance concern was preposterous, so I did some pgbench testing.

Using 9.3's pgbench, I alternated between connecting to the local 8.3 and 9.3 installations (on different port numbers). I've captured my results in this Google Drive spreadsheet, but the summary is that 8.3 generally beat 9.3; 9.3 only won on raw insert performance.
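
The runs looked roughly like the following; the ports, scale, and client counts here are illustrative rather than my exact parameters:

    # Assumes 8.3 listens on 5432 and 9.3 on 5433, and that a "bench" database
    # already exists on each server (e.g. createdb -p 5432 bench).

    # Initialize a scale-10 test database on each server:
    pgbench -i -s 10 -p 5432 -U postgres bench
    pgbench -i -s 10 -p 5433 -U postgres bench

    # Default TPC-B-like mix, 8 clients for 60 seconds, alternating servers:
    pgbench -c 8 -j 2 -T 60 -p 5432 -U postgres bench
    pgbench -c 8 -j 2 -T 60 -p 5433 -U postgres bench

    # Select-only runs to isolate read performance:
    pgbench -S -c 8 -j 2 -T 60 -p 5432 -U postgres bench
    pgbench -S -c 8 -j 2 -T 60 -p 5433 -U postgres bench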

We have some customizations in our postgresql.conf files, which I generally carried over from 8.3 to 9.3. The non-default settings are listed below (a way to cross-check what each server actually reports is sketched after the list):

    max_connections = 1000
    shared_buffers = 320MB
    temp_buffers = 80MB
    max_prepared_transactions = 50    # 8.3 only, 9.3 left at 0 (not sure why)
    max_fsm_pages = 204800            # 8.3 only, 9.3 doesn't have this setting
    autovacuum_max_workers = 30
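
In case it matters, the settings each server is actually running with (as opposed to what the .conf files say) can be cross-checked from pg_settings on each port; again this assumes 8.3 on 5432 and 9.3 on 5433:

    # List every setting that isn't at its built-in default, on each server.
    psql -p 5432 -d postgres -c "
        SELECT name, setting, source
          FROM pg_settings
         WHERE source <> 'default'
         ORDER BY name;"

    psql -p 5433 -d postgres -c "
        SELECT name, setting, source
          FROM pg_settings
         WHERE source <> 'default'
         ORDER BY name;"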

So, is this just the price of progress, or is there something I should be doing in 9.3 to make it excel?
