Question

We are telling our client to put the SQL Server database file (mdf) on a different physical drive than the transaction log file (ldf). The tech company (hired by our client) wanted to put the transaction log on a slower (i.e. cheaper) drive than the database drive, because with transaction logs you are just sequentially writing to the log file.

I told them that I thought the log drive (actually a RAID configuration) needed to be fast as well, because every data-changing call to the database needs to be written there as well as to the database itself.

After saying that, though, I realized I was not entirely sure about that. Does the speed of the transaction log drive make a significant difference in performance if the drive holding the database is fast?

Solution

The speed of the log drive is the most critical factor for a write-intensive database. No update can complete faster than the log can be written, so that drive must sustain your peak update rate, and every update generates log. The database files (MDF/NDF) can tolerate slower write rates for two reasons:

  • Data page updates are written out lazily and flushed at checkpoint, so a spike of updates is amortized over the drive's average throughput.
  • Multiple updates can accumulate on a single page and therefore need only one physical write.

So you are right that the log throughput is critical.
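
One rough way to see whether a given log drive is keeping up (a sketch using the built-in sys.dm_io_virtual_file_stats and sys.master_files views; the figures are cumulative since the instance last started, so sample twice and diff them for a live picture) is to compare the average write stall per file:

    -- Average write stall per database file; type_desc = LOG rows are the .ldf files.
    SELECT  DB_NAME(vfs.database_id)            AS database_name,
            mf.physical_name,
            mf.type_desc,                       -- ROWS = mdf/ndf, LOG = ldf
            vfs.num_of_writes,
            vfs.io_stall_write_ms,
            CASE WHEN vfs.num_of_writes = 0 THEN 0
                 ELSE 1.0 * vfs.io_stall_write_ms / vfs.num_of_writes
            END                                 AS avg_write_stall_ms
    FROM    sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN    sys.master_files AS mf
            ON  mf.database_id = vfs.database_id
            AND mf.file_id     = vfs.file_id
    ORDER BY avg_write_stall_ms DESC;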

But at the same time, log writes have a specific pattern: they are sequential, because the log is always appended at the end. Mechanical drives deliver much higher throughput, for both reads and writes, on sequential operations, since they involve less physical movement of the disk heads. So what your ops guys say is also true: a slower drive can in fact offer sufficient throughput.

But all these come with some big warnings:

  • the slower drive (or RAID combination) must truly offer high sequential throughput
  • the drive must see log writes from one and only one database, and nothing else. Any other operation that moves the disk heads away from their current position will hurt your write throughput and slow the database down
  • the log must be write-only, never read. Keep in mind that certain components need to read from the log, and when they do they move the disk heads to other positions to read back previously written records (see the check after this list):
    • transactional replication
    • database mirroring
    • log backup
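
A quick, admittedly coarse check for whether any of those log readers are in play is the log_reuse_wait_desc column of sys.databases, which reports what is currently holding on to the log (LOG_BACKUP, REPLICATION, DATABASE_MIRRORING, and so on); 'YourDatabase' below is a placeholder:

    -- What is currently holding the log / reading it back?
    SELECT  name,
            recovery_model_desc,
            log_reuse_wait_desc      -- e.g. NOTHING, LOG_BACKUP, REPLICATION, DATABASE_MIRRORING
    FROM    sys.databases
    WHERE   name = N'YourDatabase';  -- hypothetical name, replace with your own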

OTHER TIPS

In simplistic terms, if you are talking about an OLTP database, your throughput is determined by the speed of your writes to the transaction log. Once this performance ceiling is hit, all other dependent actions must wait for the commit to the log to complete.

This is a VERY simplistic take on the internals of the Transaction Log, to which entire books are dedicated, but the rudimentary point remains.
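
One way to see whether you are hitting that ceiling (a sketch against the standard wait-stats DMV; the counters are cumulative since startup or since they were last cleared) is to look at the WRITELOG wait type:

    -- High average WRITELOG waits suggest commits are queueing behind log writes.
    SELECT  wait_type,
            waiting_tasks_count,
            wait_time_ms,
            CASE WHEN waiting_tasks_count = 0 THEN 0
                 ELSE 1.0 * wait_time_ms / waiting_tasks_count
            END AS avg_wait_ms
    FROM    sys.dm_os_wait_stats
    WHERE   wait_type = N'WRITELOG';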

Now, if the storage system you are working with can provide the IOPS required to support both your transaction log and database data files together, then a shared drive/LUN would serve your needs adequately.
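
As a rough way to judge whether a shared drive/LUN is coping, you could aggregate the same virtual file statistics by drive letter (this sketch assumes plain drive-letter paths rather than mount points):

    -- Total IO count and stall per drive letter, data and log files combined.
    SELECT  LEFT(mf.physical_name, 1)                  AS drive,
            SUM(vfs.num_of_reads + vfs.num_of_writes)  AS total_ios,
            SUM(vfs.io_stall)                          AS total_stall_ms
    FROM    sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN    sys.master_files AS mf
            ON  mf.database_id = vfs.database_id
            AND mf.file_id     = vfs.file_id
    GROUP BY LEFT(mf.physical_name, 1)
    ORDER BY total_stall_ms DESC;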

To provide you with a specific recommended course of action I would need to know more about your database workload and the performance you require your database server to deliver.

Get your hands on SQL Server 2008 Internals for a thorough look at the internals of the SQL Server transaction log; it is one of the best SQL Server titles out there and will pay for itself many times over.

Well, the transaction log is the main structure that provides ACID durability, it can be a big performance bottleneck, and if you take log backups regularly its required space has an upper bound, so I would put it on a safe, fast drive with just enough space plus a bit of margin.
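
To size that "just enough space plus a bit of margin", you can check how much log space each database actually uses; the long-standing DBCC command below works on any reasonably recent version, and the commented DMV is an alternative on SQL Server 2012 and later:

    -- Log size and percent used for every database on the instance.
    DBCC SQLPERF(LOGSPACE);

    -- Per-database alternative (SQL Server 2012+), run in the database of interest:
    -- SELECT total_log_size_in_bytes / 1048576.0 AS log_size_mb,
    --        used_log_space_in_percent
    -- FROM   sys.dm_db_log_space_usage;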

The transaction log should be on the fastest drives: once SQL Server can complete the write to the log, it can do the rest of the transaction in memory and let it hit disk later.
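
If you do decide to move an existing log onto a faster drive, the relocation is a metadata change plus a file copy; a minimal sketch, where the logical file name and the F:\ path are placeholders for your own:

    -- Point SQL Server at the new location (takes effect when the database comes back online).
    ALTER DATABASE YourDatabase
        MODIFY FILE (NAME = YourDatabase_log,                       -- logical name: see sys.database_files
                     FILENAME = N'F:\SQLLog\YourDatabase_log.ldf'); -- hypothetical fast drive

    ALTER DATABASE YourDatabase SET OFFLINE WITH ROLLBACK IMMEDIATE;
    -- Copy the existing .ldf to F:\SQLLog\ at the OS level, then:
    ALTER DATABASE YourDatabase SET ONLINE;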

Licensed under: CC-BY-SA with attribution