Question

I'm working on defining specs for a new server to run DBCC CHECKDB scans on all databases in our portfolio (85 instances, 1,300 databases, 15 TB allocated). I have a custom application that copies the dump files to a single server, then performs the restore, the DBCC run, and the logging required for reporting. Sequential I/O throughput is my bottleneck: I was working with a set of RAID 6 LUNs from our SAN on 7.2K spindles, getting about 300 MB/s read but only 50 MB/s write.
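
For rough sizing, here is a back-of-envelope estimate of one full pass, assuming every one of the 15 TB has to be written once during the restores and read once during the checks, and ignoring log and tempdb overhead:

$$
t_{\text{write}} \approx \frac{15 \times 1024^{2}\ \text{MB}}{50\ \text{MB/s}} \approx 3.1 \times 10^{5}\ \text{s} \approx 87\ \text{h},
\qquad
t_{\text{read}} \approx \frac{15 \times 1024^{2}\ \text{MB}}{300\ \text{MB/s}} \approx 5.2 \times 10^{4}\ \text{s} \approx 15\ \text{h}
$$

At these rates the 50 MB/s write path dominates the cycle, which is why the restore side is where the spec work matters most for me.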

Is there a whitepaper or other resource available to help spec out a server for Maximum Consumption Rate, specifically for DBCC? I looked at the MSFT whitepaper for Fast Track Data Warehouse servers, but I'm not sure it's the right fit.

Thinking outside the box, I'm contemplating a DAS enclosure like a PowerVault and using RAID 0. The system databases will sit on RAID 1 on local storage, but RAID 0 for the restore and DBCC work makes sense because as soon as I finish checking a database I drop it to make room for the next one. I can keep a few spindles as hot spares to rebuild a failed array if needed and pick up where I left off. A rough sketch of the per-database cycle is below.
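
For reference, this is a minimal T-SQL sketch of the cycle the application runs per database; the database name, file paths, logical file names, and logging table are placeholders, not the actual application code:

    -- Restore the copied dump onto the RAID 0 scratch volume (paths/names are hypothetical)
    RESTORE DATABASE [CheckTarget]
        FROM DISK = N'D:\Dumps\SomeDatabase.bak'
        WITH MOVE N'SomeDatabase'     TO N'S:\Scratch\CheckTarget.mdf',
             MOVE N'SomeDatabase_log' TO N'S:\Scratch\CheckTarget.ldf',
             REPLACE, STATS = 10;

    -- Run the integrity check; error output is captured by the calling application
    DECLARE @started DATETIME2 = SYSUTCDATETIME();
    DBCC CHECKDB (N'CheckTarget') WITH NO_INFOMSGS, ALL_ERRORMSGS;

    -- Record the run so the process can resume where it left off after a failure
    INSERT INTO dbo.CheckdbLog (DatabaseName, StartedAt, FinishedAt)
    VALUES (N'SomeDatabase', @started, SYSUTCDATETIME());

    -- Drop the restored copy to free the scratch array for the next database
    DROP DATABASE [CheckTarget];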

Any suggestions would be deeply appreciated!

