Question

I'm using Microsoft SQL Server 2016 (SP2-CU12) (KB4536648) - 13.0.5698.0 (X64) Feb 15 2020 01:47:30 Copyright (c) Microsoft Corporation Standard Edition (64-bit) on Windows Server 2012 R2 Standard 6.3 (Build 9600: )

I have some very large tables (a single table can be more than 700 GB) on a database that's 2.5 TB.

I've been using some scripts for index maintenance. Right now I'm doing:

  • Reorg if fragmentation is > 10%
  • Rebuild if fragmentation is > 50%.
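For reference, that threshold logic can be sketched in T-SQL roughly like this (a minimal sketch, assuming placeholder names `dbo.MyBigTable` and `IX_MyBigTable_Key`; the `LIMITED` scan mode keeps the fragmentation check cheap on very large tables):

```sql
-- Sketch of the reorg/rebuild threshold logic described above.
-- dbo.MyBigTable and IX_MyBigTable_Key are placeholder names.
DECLARE @frag float;

SELECT @frag = ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID(N'dbo.MyBigTable'), NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE i.name = N'IX_MyBigTable_Key';

IF @frag > 50
    ALTER INDEX IX_MyBigTable_Key ON dbo.MyBigTable REBUILD;
ELSE IF @frag > 10
    ALTER INDEX IX_MyBigTable_Key ON dbo.MyBigTable REORGANIZE;
```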

It works fine most of the time, but for my tables that are 700 GB or larger I think 10% will be a hard percentage to hit. I was wondering if I should lower that threshold for those big tables or if it's fine to leave it where it is.

I should add that my servers have SSD drives, but only 128 GB of RAM because it's Standard edition. So there's no way those tables can fit in RAM; for large queries (like reporting), SQL Server has to scan/seek the indexes on disk.

I don't want to hit the error 665 file-system limitation caused by fragmentation: https://docs.microsoft.com/en-ie/troubleshoot/sql/admin/1450-and-665-errors-running-dbcc (I'm starting to add filegroups to the database to avoid that error, as I hit it once in a while on those large tables that sit in the primary filegroup.)
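For anyone taking the same route, adding a filegroup and moving an index into it looks roughly like the sketch below (database name, file paths, sizes, and index names are all hypothetical; rebuilding a 700 GB index into a new filegroup needs the corresponding free space):

```sql
-- Hypothetical names and paths throughout.
ALTER DATABASE MyDb ADD FILEGROUP FG_BigTables;

ALTER DATABASE MyDb ADD FILE (
    NAME = N'MyDb_BigTables1',
    FILENAME = N'E:\Data\MyDb_BigTables1.ndf',
    SIZE = 100GB, FILEGROWTH = 10GB
) TO FILEGROUP FG_BigTables;

-- Moving an existing index is done by rebuilding it on the new filegroup:
CREATE INDEX IX_MyBigTable_Key
    ON dbo.MyBigTable (KeyColumn)
    WITH (DROP_EXISTING = ON, ONLINE = OFF)  -- ONLINE rebuilds require Enterprise edition
    ON FG_BigTables;
```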

So should I use a custom threshold for index maintenance on very large tables, or is the current one fine?

The solution

It really depends on how transactional your tables are. If they are frequently updated and inserted into, you'll definitely hit the 10%. If you're not hitting the 10%, it's probably not even worth lowering the threshold to 5%, because that means your tables are already fairly unfragmented.

I've personally worked on a very similarly sized database, also SQL Server 2016 Standard, on a server that had only 16 GB of RAM. Our database was fairly transactional (adding about 1,000 new records every few minutes) and we were hitting the 10% threshold for reorgs every night. Because the operation was so heavy and took a while, I actually did the opposite and increased the fragmentation threshold for reorgs to 20% for our larger tables' indexes. This proved a lot better for our business because the difference in performance was negligible and it generally let the larger tables go without a reorganize until the weekend.

Worrying about index reorgs, and even more so rebuilds, is on the low end of things to optimize for. Even Brent Ozar says not to worry about it. :)

Other tips

While I agree with what @BrentOzar says, I also agree that this is a particular scenario. The question is pertinent because you don't usually deal with a 700 GB table.

In my opinion you should:

  1. Partition the table, because it is too big. That is exactly why Microsoft created this feature. Plus, you are on SQL Server 2016. Lucky you!
  2. Now that the table is partitioned, write logic that checks every partition and decides to reorganize if fragmentation is > 10% or rebuild if it is > 50%.
  3. Now that the logic is written, share it with the community and release it open source, because you are not the only one facing this particular challenge: Rebuilding partitioned table clustered indexes
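A per-partition version of step 2 could be sketched as follows (again with placeholder object and index names; `sys.dm_db_index_physical_stats` returns one row per partition when you pass NULL for the partition number):

```sql
-- Sketch: reorganize/rebuild individual partitions of a partitioned
-- index based on per-partition fragmentation.
-- dbo.MyBigTable and IX_MyBigTable_Key are placeholder names.
DECLARE @sql nvarchar(max) = N'';

SELECT @sql += CASE
        WHEN ips.avg_fragmentation_in_percent > 50 THEN
            N'ALTER INDEX IX_MyBigTable_Key ON dbo.MyBigTable REBUILD PARTITION = '
            + CAST(ips.partition_number AS nvarchar(10)) + N';' + CHAR(10)
        WHEN ips.avg_fragmentation_in_percent > 10 THEN
            N'ALTER INDEX IX_MyBigTable_Key ON dbo.MyBigTable REORGANIZE PARTITION = '
            + CAST(ips.partition_number AS nvarchar(10)) + N';' + CHAR(10)
        ELSE N''
    END
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID(N'dbo.MyBigTable'), NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE i.name = N'IX_MyBigTable_Key';

EXEC sys.sp_executesql @sql;
```

Checking and rebuilding partition by partition keeps each maintenance operation small, which matters on Standard edition where online rebuilds are not available.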

That's why I find your question very pertinent. I believe we are heading toward a world where, sooner or later, more and more databases will have terabyte-sized tables.

Data is growing faster than our ability to adapt to it, and we don't yet have the tools to deal with the issue.

Licensed under: CC-BY-SA with attribution
Not affiliated with dba.stackexchange