Question

I have a client with a 2 TB database for a pension company. The database contains a lot of big documents.

They want to archive everything that's not from this year. So far no problem - I'll run a WHILE loop that inserts and deletes in batches, and back up the log after every batch.
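The batching approach I have in mind looks roughly like this (database, table, and column names are placeholders; assumes FULL recovery model):

DECLARE @rows int = 1

WHILE @rows > 0
BEGIN
    -- Move one batch atomically: DELETE with OUTPUT inserts the
    -- removed rows straight into the archive table.
    DELETE TOP (5000)
    FROM dbo.Documents
    OUTPUT deleted.* INTO ArchiveDB.dbo.Documents
    WHERE DocumentDate < '20250101'

    SET @rows = @@ROWCOUNT

    -- Back up the log after every batch so it doesn't grow out of control.
    BACKUP LOG MyDB TO DISK = N'X:\Backup\MyDB_archive.trn'
END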

After the archiving is done, can I shrink the database? How long might it take if we're talking about maybe 400 GB that was moved?

It sounds like a very heavy process for a server like that. I can do it on the weekend but I'm not sure if it'll stop running in time.


Solution

Firstly, it needs to be said up front: only shrink files if it's actually needed, as shrinking causes a lot of index fragmentation which you'll need to clean up afterwards.

With that said, if you're worried about how long a single shrink of the entire file would take, you can do it in stages. It takes longer overall, but it can be stopped and resumed far more easily. Personally I use the script below to shrink in 2 GB steps; I've also found the smaller chunks useful on highly transactional databases, since they reduce contention during the shrink.

-- Shrink the file in 2 GB steps. @value and @target are in GB;
-- DBCC SHRINKFILE takes its target size in MB, hence the * 1024.
DECLARE @value int = 2048   -- current file size in GB
DECLARE @target int = 1536  -- size to shrink down to, in GB

SET @value = (@value * 1024) - 2048  -- first target: 2 GB below current, in MB

WHILE @value >= (@target * 1024)
BEGIN
    DBCC SHRINKFILE(filename, @value)  -- shrink to @value MB
    SET @value = @value - 2048         -- next step is 2 GB smaller
END

Replace

@value with the current size of your file in GB (the script multiplies by 1024 because DBCC SHRINKFILE expects its target in MB),

@target with the size in GB you want to shrink down to, and

filename with the logical name of the file you wish to shrink.
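If you're not sure of the logical file name or the current size, you can look both up first; for example (MyDB and MyDB_Data are placeholders for your own database and data file):

USE MyDB

-- List logical file names and current sizes (size is stored in 8 KB pages)
SELECT name, size / 128 AS size_mb
FROM sys.database_files

-- Then shrink the data file by its logical name:
DBCC SHRINKFILE(MyDB_Data, 1572864)  -- target size in MB (here 1536 GB)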

Licensed under: CC-BY-SA with attribution
Not affiliated with dba.stackexchange