Question

In our environment we have some servers that are in an Always On Availability Group, and some that are standalone.

We normally back up to a network share, but we have recently observed that as the databases grow, the backups take longer and longer, which slows down the whole network.

Ola Hallengren's script is being used with compression, and the backups are split across multiple files. I am only performing daily full backups. The backups go to a network share on an EMC Isilon drive.

I have never been comfortable with EMC DD Boost. The only alternative is to do a local backup and then copy it to the same network share.

Is there an efficient way other than the above?

Solution

The alternative you mentioned seems to be the best choice.

What you can do is a two-step process:

  • Take native SQL Server backups with compression, using Ola's backup solution, to a local drive.
  • Use Robocopy to transfer the files to the network share. This is decoupled from the backup and can run as a Windows scheduled task.

This way, your backups stay local and they will be fast. You will need more disk space and, obviously, redundancy (if the backup disk fails, you don't want to lose all your backups).
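
As a rough sketch of step one (the D:\Backup path, file count, and database selection below are assumptions; adjust for your environment):

```sql
-- Step 1 (sketch): a compressed, striped full backup to a local drive
-- using Ola Hallengren's DatabaseBackup procedure. The D:\Backup path
-- and the file count are assumptions for illustration.
EXECUTE dbo.DatabaseBackup
    @Databases     = 'USER_DATABASES',
    @Directory     = 'D:\Backup',      -- local disk, not the network share
    @BackupType    = 'FULL',
    @Compress      = 'Y',
    @NumberOfFiles = 4,                -- stripe across four files
    @CheckSum      = 'Y',
    @Verify        = 'Y';
```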

Alternatively, as recommended by Max Vernon, run Robocopy as a step in the backup job, so that the copy occurs only if the backup completed successfully, and as soon as possible after it completes. For as long as the backup stays local, it is exposed to the same risks as the data itself.
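
A minimal sketch of that approach, assuming the backup already runs as a SQL Agent job (the job name, paths, and Robocopy switches below are placeholders):

```sql
-- Sketch: append a Robocopy step to an existing Agent backup job.
-- The job name, local path, and share path are hypothetical placeholders.
USE msdb;
GO

EXECUTE dbo.sp_add_jobstep
    @job_name          = N'DatabaseBackup - USER_DATABASES - FULL',
    @step_name         = N'Copy backups to network share',
    @subsystem         = N'CmdExec',
    @command           = N'robocopy "D:\Backup" "\\isilon\SQLBackups" /E /Z /R:3 /W:10',
    @on_success_action = 1,  -- quit the job reporting success
    @on_fail_action    = 2;  -- quit the job reporting failure
```

Depending on how the job was created, you may also need sp_update_jobstep to set the backup step's on-success action to "Go to the next step", so the copy runs only after a successful backup.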

Also, regularly test your restores: if you cannot restore a backup, what purpose does it serve?
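
A quick sanity check, though not a substitute for a genuine restore test, is RESTORE VERIFYONLY against a backup taken WITH CHECKSUM (the file path below is a placeholder):

```sql
-- Sketch: confirm a backup file is readable and its checksums are
-- valid. The file path is a placeholder. This is a quick sanity check;
-- a periodic full RESTORE to a test server, followed by DBCC CHECKDB,
-- is the real test.
RESTORE VERIFYONLY
FROM DISK = N'D:\Backup\MyDatabase_FULL.bak'
WITH CHECKSUM;
```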

Also, refer to my answer to SQL Backup tuning large databases.

OTHER TIPS

There are ways to tune backups by adjusting different knobs like MAXTRANSFERSIZE or BUFFERCOUNT, or by striping the backup across multiple files (which you've noted you're already doing).

The problem is that even after adjusting those knobs you may still be hitting the limits of your network and/or storage, in which case they will have no real impact on backup time.

Your first job should be to benchmark the storage you're backing up to, using CrystalDiskMark or DiskSpd. That will give you some idea of how fast you can expect writes to be at their best.

The next thing to test is reads from the drives you're backing up from. If you run a backup to NUL, you can time just the read portion of your backup, without having to write anything to disk.
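
A sketch of that read-side test (the database name is a placeholder):

```sql
-- Sketch: time the read side of a backup by discarding the output.
-- MyDatabase is a placeholder. COPY_ONLY keeps this throwaway backup
-- from resetting the differential base.
BACKUP DATABASE MyDatabase
TO DISK = 'NUL'
WITH COPY_ONLY, COMPRESSION, STATS = 10;
```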

With both of those numbers in mind, you can start adjusting the knobs to see which ones get you closest to those limits, regardless of whether your backup target is local or networked.
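
For example, a native BACKUP with those knobs set explicitly might look like this (the values are illustrative starting points, not recommendations):

```sql
-- Sketch: a compressed, striped backup with the knobs set explicitly.
-- The database name, paths, and values are illustrative; benchmark to
-- find your own sweet spot.
BACKUP DATABASE MyDatabase
TO DISK = 'D:\Backup\MyDatabase_1.bak',
   DISK = 'D:\Backup\MyDatabase_2.bak'
WITH COMPRESSION,
     MAXTRANSFERSIZE = 4194304,  -- 4 MB, the maximum allowed
     BUFFERCOUNT     = 50,
     BLOCKSIZE       = 65536,
     STATS           = 10;
```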

A couple of potential solutions:

  1. Going from daily full backups only to a weekly full backup plus nightly differentials can be an easy solution.
  2. There are a number of performance-related parameters in Ola's scripts that you may be able to tweak to get the performance you want (see the sketch after this list):

    • BlockSize
      Specify the physical blocksize in bytes.

      The BlockSize option in DatabaseBackup uses the BLOCKSIZE option in the SQL Server BACKUP command.

    • BufferCount
      Specify the number of I/O buffers to be used for the backup operation.

      The BufferCount option in DatabaseBackup uses the BUFFERCOUNT option in the SQL Server BACKUP command.

    • MaxTransferSize
      Specify the largest unit of transfer, in bytes, to be used between SQL Server and the backup media.

      The MaxTransferSize option in DatabaseBackup uses the MAXTRANSFERSIZE option in the SQL Server BACKUP command.
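
As a hedged sketch, those parameters map onto a DatabaseBackup call like so (values illustrative):

```sql
-- Sketch: the same tuning knobs passed through Ola's DatabaseBackup
-- procedure. Values are illustrative starting points, not recommendations.
EXECUTE dbo.DatabaseBackup
    @Databases       = 'USER_DATABASES',
    @Directory       = 'D:\Backup',
    @BackupType      = 'FULL',
    @Compress        = 'Y',
    @BlockSize       = 65536,
    @BufferCount     = 50,
    @MaxTransferSize = 4194304;
```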

There are many possible options, but as databases get larger and full backups take longer, you will likely have to incorporate differential backups, if you haven't already:

Creating a differential backup can be very fast compared to creating a full backup. A differential backup records only the data that has changed since the full backup upon which the differential backup is based. This facilitates taking frequent data backups, which decreases the risk of data loss.
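
For instance, a native differential backup, and the restore sequence it implies, looks like this (names and paths are placeholders):

```sql
-- Sketch: a native differential backup. Only extents changed since the
-- last full backup are written, so it is usually much smaller and
-- faster. Names and paths are placeholders.
BACKUP DATABASE MyDatabase
TO DISK = 'D:\Backup\MyDatabase_DIFF.bak'
WITH DIFFERENTIAL, COMPRESSION;

-- Restoring requires the base full backup first, then the differential:
-- RESTORE DATABASE MyDatabase FROM DISK = '...FULL.bak' WITH NORECOVERY;
-- RESTORE DATABASE MyDatabase FROM DISK = '...DIFF.bak' WITH RECOVERY;
```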

My understanding is that Ola's scripts can even be set to decide between a full or differential backup based on the amount of change in the database, using the ModificationLevel parameter.
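
If that matches your version of the scripts, the call might look like the following (check the documentation for your version; the threshold below is just an example):

```sql
-- Sketch: let the scripts promote a DIFF to a FULL when more than 40%
-- of the database has changed. Per Ola's documentation, ModificationLevel
-- requires ChangeBackupType = 'Y'; the 40% threshold is just an example.
EXECUTE dbo.DatabaseBackup
    @Databases         = 'USER_DATABASES',
    @Directory         = 'D:\Backup',
    @BackupType        = 'DIFF',
    @ChangeBackupType  = 'Y',
    @ModificationLevel = 40,
    @Compress          = 'Y';
```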

We use EMC DD Boost, and you're welcome to your own opinion of it, but we have found that, due to the client-side de-duplication it uses, full backups of even multi-TB databases can be very fast, to the point that we don't have to worry about SQL Server differential backups. In effect, by using EMC DD you are doing differential backups, just not in SQL Server. Using multiple destination files also greatly improves speed, even with DD Boost.

Licensed under: CC-BY-SA with attribution
Not affiliated with dba.stackexchange