Question

I'm investigating strategies to reduce maintenance downtime on a critical database. The DB contains bioinformatics data and is accessed by users in many time zones around the world, seven days a week, so off-peak hours are limited. It contains tens of millions of rows and is growing rapidly.

As we are planning to upgrade to PostgreSQL 9, I want to find out whether I can perform backups on a slave so the master isn't affected. Should I be concerned about the slave falling too far behind on the WAL while a backup is in progress?

Solution

If your database is too big or backups are too slow, you should be using WAL archiving as a backup method. You don't need PostgreSQL 9.0 for that. Having WAL archiving set up is a prerequisite for WAL-based replication, so you'd get it almost for free if you are interested in the replication feature in 9.0.
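
For concreteness, here is a minimal sketch of WAL archiving plus a file-level base backup as it would look on 9.0. The archive and data directory paths, the 'nightly' label, and the use of rsync are assumptions for illustration; adapt them to your environment:

    # postgresql.conf -- enable WAL archiving (paths below are placeholders)
    #   wal_level = archive
    #   archive_mode = on
    #   archive_command = 'test ! -f /mnt/archive/%f && cp %p /mnt/archive/%f'

    # Take a base backup while the server stays up and serving queries.
    psql -c "SELECT pg_start_backup('nightly')"    # force a checkpoint, mark the start
    rsync -a --exclude=pg_xlog /var/lib/pgsql/data/ /mnt/backup/base/
    psql -c "SELECT pg_stop_backup()"              # finish; last WAL segment gets archived

Because the copy happens at the filesystem level and WAL replay fixes up any inconsistency at restore time, the backup window no longer depends on dump speed.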

OTHER TIPS

There is no downtime during a backup. Why do you think the database would be down?

From the manual:

pg_dump does not block other users accessing the database (readers or writers).
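
For example, a plain logical dump can run against the live database; the database name and output file here are placeholders:

    # consistent, non-blocking logical dump of one database
    pg_dump -Fc -f mydb.dump mydb    # -Fc = compressed custom format, restorable with pg_restore

The dump sees a consistent snapshot of the data as of the moment it started, and other sessions keep reading and writing normally while it runs.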
