Question

I have a PHP application, called with GET parameters, running with PostgreSQL and nginx. Page A receives a message and some information. Page B handles routing. Page C calls an external application with cURL. I will receive up to 1-2 million requests per month once I go into production.

My question concerns the pg_pconnect function. Is the connection reused when calls come from different locations? In other words, is it better to open a simple connection and close it every time for page A? (Servers from different locations will call my app.) For pages B and C, a script will call them in an infinite loop (waiting 10 s if there is no message to handle). Since those requests will always come from the same location, is it worthwhile to use a persistent connection for pages B and C?

I hope my explanation is clear enough.

Thanks!


Solution

In general, I think that in your case you are likely to see some benefit from persistent connections. There are also drawbacks, but they are manageable as long as you keep them in mind. You may, however, want to go further and consider an actual connection pooler.

The big issue is that PostgreSQL typically does best when the number of concurrent connections stays under roughly two per CPU core plus one per disk spindle (the spindle term accounts for I/O wait time). This isn't exact, of course, but it gives you an idea of what to expect given your hardware and resources.
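For example, on a hypothetical 4-core machine with two disk spindles, that rule of thumb suggests keeping concurrency to roughly 2 × 4 + 2 = 10 active connections.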

The connection startup/teardown overhead is not that large on Linux/UNIX platforms, but managing concurrency may be critical to keeping things running fast. So I would start out with persistent connections and then move to a connection pooler (PgBouncer is one common choice) if I needed additional control there.
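To make the two patterns concrete, here is a minimal sketch, assuming the pgsql extension running under PHP-FPM; the DSN, the inbox table, and the queries are placeholders for illustration only:

    <?php
    // Minimal sketch of both approaches. The DSN, table name, and
    // queries are placeholders -- adjust them to your schema.
    $dsn = 'host=localhost dbname=myapp user=webuser password=secret';

    // Page A pattern: requests arrive from many different locations,
    // so open a plain connection and close it when the request is done.
    $db  = pg_connect($dsn);
    $msg = $_GET['message'] ?? '';
    pg_query_params($db, 'INSERT INTO inbox (payload) VALUES ($1)', [$msg]);
    pg_close($db);

    // Pages B/C pattern: the same script polls in a loop, so a persistent
    // connection avoids repeated startup/teardown. pg_pconnect() reuses a
    // link already held open by this PHP-FPM worker when one exists.
    $db  = pg_pconnect($dsn);
    $res = pg_query($db, 'SELECT id, payload FROM inbox ORDER BY id LIMIT 1');
    // Do not pg_close() here: PHP keeps persistent links open for reuse
    // by the next request served by this worker process.

One caveat to keep in mind: under PHP-FPM each worker process holds its own persistent link, so the total number of database connections scales with the number of workers, not with the number of requests.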

The major disadvantage is that certain database operations cannot be performed while other sessions are connected to the database. If you ever need to restore from a backup, for example, you may need to make sure you disconnect the web app first.
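If you do go with persistent connections, PHP can also cap how many each worker keeps open, which helps keep the total within the concurrency budget above. A minimal php.ini sketch (the values shown are illustrative, not recommendations):

    ; pgsql extension settings -- values here are illustrative
    pgsql.allow_persistent = 1   ; permit pg_pconnect()
    pgsql.max_persistent = 2     ; max persistent links per worker process
    pgsql.max_links = 4          ; max links (persistent + regular) per worker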
