Question

Closely related to this question from a few weeks ago, but the answers didn't discuss how to do the dirty deed. Here's the situation:

We now have 15 GB of data in a database, but a logging bug in an application ran wild and ran this up to 80 GB of data, giving us DB files of around 130 GB. We've fixed the bug and cleared out the affected tables, and I'd like to get some of the space back, maybe bring the DB back down to 40 GB or so.

The biggest reason I'd like to do this is so that we can more easily restore backups to smaller drives on virtual test instances.

I get it: shrink is evil, and it will fragment both the indexes and the files on disk. I'm sold. This is a one-time event.

So how can I minimize the pain? Seems like I should:

  1. Use DBCC SHRINKFILE (DataFile1, 40000); to aim for 40 GB (rough sketch just below this list).
  2. Immediately use some smart reindexing to reorganize and rebuild indexes
  3. Defrag physical disks
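
Something like this for step 1, where MyDatabase and DataFile1 are placeholders for the real database and logical file name:

    USE MyDatabase;   -- placeholder database name
    GO
    -- the target size for DBCC SHRINKFILE is given in MB, so 40000 is roughly 40 GB
    DBCC SHRINKFILE (DataFile1, 40000);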

Will that appropriately mitigate the side effects of a shrink? Or is this only going to end up on my application to the Evil League of Evil?


Solution

Here is my somewhat hand-wavy approach to shrinking files. This has worked pretty well for me for over ten years.

Shrinking files is not necessarily evil, but that doesn't mean that it is always the best thing to do.

Firstly, think about why you are shrinking. Most databases will only grow larger. If you expect to need the space in the foreseeable future, you probably don't want to shrink.

The best reason to shrink is that you are trying to recover from some mistake, where the size of the data file has grown well past anything that is required in the near future.

Another good reason is that you need to restore the affected database onto development or test servers that simply do not have the storage capacity to handle the extra, unused, space.

The worst reason to shrink is that you are trying to jam yet another database onto storage that is already nearly full. You are going to run out of space eventually; accepting that is the first step. The next step is to work on a plan for more storage (or fewer databases), not robbing Peter to pay Paul.

Shrinking tempdb is almost always painful. It is hard to get timely shrinks without restarting the instance, and restarting the instance is a bad habit.

Secondly, make sure that you leave extra space in the MDF and NDF files. Reindexing will need some working space. How much? It depends. Find the size of your largest table and use that as a guide. I've never gone wrong by leaving that much space in the data files. If you don't have enough space in your files, they will try to autogrow. If SQL Server lacks contiguous space in the file, you may have problems reindexing large tables; they won't ever seem to be properly defragged after the reindex routines run.
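
To put a number on "largest table", a query along these lines (just a sketch against the standard DMVs) lists the biggest tables by reserved space:

    -- Top 10 tables by reserved space, in MB (8 KB pages * 8 / 1024)
    SELECT TOP (10)
        s.name AS schema_name,
        t.name AS table_name,
        SUM(ps.reserved_page_count) * 8 / 1024 AS reserved_mb
    FROM sys.dm_db_partition_stats AS ps
    JOIN sys.tables  AS t ON t.object_id  = ps.object_id
    JOIN sys.schemas AS s ON s.schema_id  = t.schema_id
    GROUP BY s.name, t.name
    ORDER BY reserved_mb DESC;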

Do not use auto shrink. It was intended for databases running on users' workstations, not on dedicated production servers, and it was thought up so long ago that using it for any reason whatsoever isn't a good idea at this point.

Use dbcc shrinkfile, not dbcc shrinkdatabase. If you need to shrink a database with several data files, do it in a round-robin fashion, paying attention to the filegroup that each file belongs to. In order to spread I/O around effectively, SQL Server wants files of equal size within a filegroup.
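
Before deciding which files to shrink, it helps to see what you have; something like this (a simple sketch, nothing exotic) shows each file's size and filegroup:

    -- List the database's files with their filegroup and current size in MB
    SELECT fg.name AS filegroup_name,
           df.name AS logical_file_name,
           df.physical_name,
           df.size / 128 AS size_mb      -- size is stored in 8 KB pages
    FROM sys.database_files AS df
    LEFT JOIN sys.filegroups AS fg ON fg.data_space_id = df.data_space_id
    ORDER BY filegroup_name, logical_file_name;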

Generally, shrinkfile doesn't put a lot of load on the server, but when I shrink files I avoid peak demand periods and I like to watch a little more closely with Performance Monitor than I normally would.

Do not try to shrink out all of the space in one go. If you try to shrink out a large amount of space in one go, the dbcc command will often seem to go catatonic. Disk I/O will drop, but the command will not complete.
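
One way to avoid that is to walk the file down in steps. Here is a rough sketch; the file name, target size, and step size are only examples:

    -- Shrink a data file toward a target size in smaller passes
    DECLARE @file_name sysname = N'DataFile1';   -- example logical file name
    DECLARE @target_mb int = 40000;              -- final target, in MB
    DECLARE @step_mb   int = 5000;               -- shrink this much per pass
    DECLARE @next_mb   int, @sql nvarchar(400);

    SELECT @next_mb = size / 128                 -- size is in 8 KB pages
    FROM sys.database_files
    WHERE name = @file_name;

    WHILE @next_mb > @target_mb
    BEGIN
        SET @next_mb = CASE WHEN @next_mb - @step_mb < @target_mb
                            THEN @target_mb ELSE @next_mb - @step_mb END;

        SET @sql = N'DBCC SHRINKFILE (' + QUOTENAME(@file_name) + N', '
                 + CAST(@next_mb AS nvarchar(10)) + N');';
        EXEC (@sql);
    END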

After running dbcc shrinkfile, you will want to reindex to put the data back in order. Depending on your tables, you may be able to do online reindexing, which is widely documented.
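
For example, something along these lines per table (the table name is a placeholder, and ONLINE = ON only works on editions that support online index operations):

    -- Rebuild all indexes on one table; drop ONLINE = ON if your edition
    -- doesn't support online rebuilds, or use REORGANIZE for a lighter touch
    ALTER INDEX ALL ON dbo.BigTable
    REBUILD WITH (ONLINE = ON, SORT_IN_TEMPDB = ON);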

If you have shrunk the files too far, they will try to autogrow again. You don't want that, so be careful not to shrink things too much. When in doubt, leave an extra GB.

File-level fragmentation tends to occur most on volumes that have lots of data files that are frequently grown. Frequent drop/recreation of databases may exacerbate the problem. If you are shrinking files, they should get smaller and there should be fewer fragments. There is no particular reason that defragging should make things worse, though it will put I/O load on your server, which may hurt if the storage is already heavily loaded. Some admins swear by third-party defragging tools, but I have only used Microsoft's tool on servers. Again, I wouldn't do this during peak demand times.

Licensed under: CC-BY-SA with attribution
Not affiliated with dba.stackexchange