Question

I have a log-shipped VLDB and don't run DBCC CHECKDB on the primary because of the long periods of high CPU and tempdb usage. I have read about DBCC CHECKTABLE as a more granular data integrity checking option. Does DBCC CHECKTABLE have overhead similar to DBCC CHECKDB? Can it be run during business hours without adversely affecting server CPU and tempdb performance?


Solution

The code behind CHECKTABLE is indeed likely where the vast majority of the time in CHECKDB is spent, as mentioned by @Mo64. I.e., the "sum of CHECKTABLE" over all tables should roughly equal one CHECKDB, one can imagine.
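If you go the per-table route, one common pattern is to loop over the user tables and check them one at a time, so the work can be split across maintenance windows. Below is a minimal sketch of that idea, assuming a hypothetical database name YourVLDB; scheduling the table list across several days is left out.

USE YourVLDB;

DECLARE @sch sysname, @tbl sysname, @cmd nvarchar(400);

-- Cursor over all user tables; each iteration checks one table only.
DECLARE tbl_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT s.name, t.name
    FROM sys.tables AS t
    JOIN sys.schemas AS s ON s.schema_id = t.schema_id;

OPEN tbl_cur;
FETCH NEXT FROM tbl_cur INTO @sch, @tbl;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Check a single table; NO_INFOMSGS limits output to actual errors.
    SET @cmd = N'DBCC CHECKTABLE (''' + QUOTENAME(@sch) + N'.' + QUOTENAME(@tbl)
             + N''') WITH NO_INFOMSGS;';
    EXEC sys.sp_executesql @cmd;
    FETCH NEXT FROM tbl_cur INTO @sch, @tbl;
END

CLOSE tbl_cur;
DEALLOCATE tbl_cur;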

However, by default CHECKTABLE uses a database snapshot. Not a "table snapshot", since there is no such concept. I.e., you can check one table, and if modifications occur on some other table at the same time, the original version of that data needs to be saved to the snapshot file first. This is one aspect where the "sum of CHECKTABLE" can exceed one CHECKDB.
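If the snapshot and its copy-on-write activity are the main worry, CHECKTABLE also accepts the TABLOCK option, which takes a shared table lock instead of creating the internal snapshot; the trade-off is that modifications to that table are blocked while the check runs. The table name below is just a placeholder.

-- Shared table lock instead of an internal database snapshot;
-- blocks modifications to this table for the duration of the check.
DBCC CHECKTABLE ('dbo.BigTable') WITH TABLOCK, NO_INFOMSGS;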

Anyhow, your question is about overhead. Yes, it is pretty much the same; or rather, it won't be (much) less for CHECKTABLE. You still have the database snapshot and copy-on-write for the whole database (although the snapshot lives for a shorter time when you check only one table), and SQL Server will need space in tempdb as well.
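If tempdb is the concern, you can ask CHECKTABLE (or CHECKDB) to estimate its tempdb requirement up front with ESTIMATEONLY, without actually performing the checks. Again, the table name is only a placeholder.

-- Reports the estimated tempdb space the check would need; no integrity checks are run.
DBCC CHECKTABLE ('dbo.BigTable') WITH ESTIMATEONLY;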

The above is from theoretical reasoning, I should add.

Licensed under: CC-BY-SA with attribution