Question

I have a "large" PostgreSQL 9.6 database (about 1 TB) with the TimescaleDB extension, and I would like to migrate it to a Linux machine. The naive approach would be pg_dump and pg_restore. (I'm not sure how long that might take on a powerful machine with SSDs on both sides; maybe you have an idea?)

Just wanted to ask if this is the way to go, or if there might be a better, more convenient solution. What I have seen so far are pgBackRest and Barman.

I'm asking because I already had some trouble migrating a MongoDB database, which took a very long time, so this time I would like to avoid mistakes : )

Thanks a lot for any hint!


Solution

With the directory format (-F d) of pg_dump, you can parallelize dump and restore, which will help if there are several large tables.
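For example, a parallel dump and restore could look like the sketch below (the job count of 8, the paths, and the database names are placeholders to adapt to your hardware and setup, and the target database must already exist before pg_restore runs):

    # Dump in directory format with 8 parallel jobs
    pg_dump -F d -j 8 -f /path/to/dumpdir source_db

    # Copy the dump directory to the Linux machine, then restore in parallel
    pg_restore -j 8 -d target_db /path/to/dumpdir

Since the database uses the TimescaleDB extension, also check the TimescaleDB documentation for your version; it describes extra steps it expects around pg_dump and pg_restore.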

OTHER TIPS

You should be able to set up a streaming replica on your Linux machine and then promote it. Use pg_basebackup to take the initial copy; you can do this while the master is running. Promoting the replica to be the new master will only take seconds, if that.
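A rough sketch of that approach (the host name, replication user, and data directory are placeholders; the master's pg_hba.conf and replication settings must already allow the connection, and both servers need to run the same PostgreSQL major version):

    # On the Linux machine: take a base backup from the running master,
    # streaming WAL and writing a recovery.conf for standby mode (-R, 9.6 behaviour)
    pg_basebackup -h master_host -U replication_user -D /var/lib/postgresql/9.6/main -X stream -R -P

    # Start PostgreSQL on the replica, let it catch up, then promote it
    pg_ctl promote -D /var/lib/postgresql/9.6/main

Once the replica is promoted, point your applications at the new server.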

Licensed under: CC-BY-SA with attribution