Problem

We have a SQL Server 2014 instance with a 1 TB attached disk (on Azure) that's running out of disk space. We have about 20 GB left (maybe a few weeks of space). As such, we need to move some data off the CURRENT disk and onto a NEW disk.

Details

Server:

Microsoft SQL Server 2014 - 12.0.2548.0 (X64) 
    Jun  8 2015 11:08:03 
    Copyright (c) Microsoft Corporation
    Web Edition (64-bit) on Windows NT 6.3 <X64> (Build 9600: ) (Hypervisor)

MS SQL 2014 is installed on a classic Azure VM. This VM is in a classic VNET, and the storage DISKS are also classic. As such, we couldn't just expand the existing disk. MS Support said that we would need to migrate all of these if we wish to leverage modern Azure Storage for disk resizing, the newer SSD disks, etc. TL;DR: we can't take this offline for hours, including all the other subsystems that communicate with the VM via IP address. Now, don't turn this into a flame-fest .. this is what we've been given to work with, and we'll need to fix all of this up later.

So right now, the idea is to try to leverage FILEGROUPS: move one or more tables into a FILEGROUP and push that FILEGROUP onto another DISK which we have attached.

So the questions here are:

  • Is this a really, really crazy and lame idea, first of all?
  • If it's crap but workable, will using FILEGROUPS actually move the data from the CURRENT disk onto the NEW disk (which frees up some disk space on the nearly full CURRENT DISK)?
  • If this is possible, does moving these tables mean the data is locked/unavailable ... which means we're still back to our initial problem :(
  • What about logs? Does moving this data mean the logs just get a copy of it? (We are doing hourly and weekly backups, I believe.)

It's OK if the data we move is small, but here's a quick look at some of our tables...

screenshot of table sizes

That first table is massive (with respect to the rest of the data): roughly 750 GB.

I was thinking of moving maybe the 2nd, 3rd, or 4th rows in the result image. Remember how I said the infrastructure is all on OLD classic stuff? This means the HDDs are old and slow, so copying data could take some time as well.

As an example, I just tried to copy the .mdf files (this DB has 1 main mdf and 2 other small ones) from the OLD disk over to the NEW disk. That had a quick ETA of 24 hours.

Having the site offline for a few hours is totally acceptable. We can take stuff offline when our customers are asleep. But ... 24 hours ... that hurts. The 24-hour idea was a simple test of:

  • Create a new 2TB disk (if possible).
  • Turn off SQL Server.
  • Copy the mdf + log files to the new disk (24 hours or so).
  • Point the database files from the old location to the new location (see the sketch after this list).
  • Start SQL Server again.
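
For reference, here's a minimal sketch of what the repointing step could look like, assuming the database is called [MyDb] and its logical file names are MyDb_Data and MyDb_Log (all names and paths below are placeholders for illustration, not your actual files):

ALTER DATABASE [MyDb] SET OFFLINE WITH ROLLBACK IMMEDIATE;
GO

-- copy the physical files to the new disk here, then repoint each moved file
ALTER DATABASE [MyDb]
MODIFY FILE (NAME = MyDb_Data, FILENAME = N'F:\SQLData\MyDb.mdf');
GO
ALTER DATABASE [MyDb]
MODIFY FILE (NAME = MyDb_Log, FILENAME = N'F:\SQLData\MyDb_log.ldf');
GO

ALTER DATABASE [MyDb] SET ONLINE;
GO

The copy itself is still the slow part; the ALTER DATABASE statements themselves are effectively instant.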

Now, we're open to ideas. I know that Stack Exchange is not a site for 'opinions', so I'm trying to keep this on target with a suggested approach and to get feedback on it ... but we're open to other solutions to reduce the offline time.

So - can anyone help please?

Update 1

Here's a screen shot of the current FILES for this DB.

screenshot of the current database files


Solution

While it will work, it's not necessary to move tables to a new filegroup.

If you simply add a file to the database's existing filegroup(s), SQL Server will start using the new file(s).

SQL Server uses a "proportional fill" algorithm when a filegroup has multiple files. So if you add a new file (perhaps on a different volume) to a filegroup, SQL Server will write most of the new data to that file until it's roughly the same percentage full as the existing file(s).
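
As a minimal sketch of that approach (the database name, logical file name, path, and sizes are placeholders for illustration, not your actual setup), adding a file on the new disk to the existing PRIMARY filegroup would look something like:

ALTER DATABASE [MyDb]
ADD FILE
(
    NAME = MyDb_Data_2,
    FILENAME = N'F:\SQLData\MyDb_Data_2.ndf',  -- a path on the NEW disk
    SIZE = 100GB,
    FILEGROWTH = 10GB
)
TO FILEGROUP [PRIMARY];
GO

New allocations then start flowing to the new file without taking any tables offline, though this on its own doesn't free up space that's already used inside the existing file on the nearly full disk.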

Other Tips

Your idea to move tables to a new filegroup should work. It will take those tables offline for the duration of the move, but then they will be available for queries again. You can potentially limit the amount of downtime by doing them strategically, one at a time.

Note that if you were on Enterprise Edition, you might be able to utilize the ONLINE index operation version of this to keep the table available for queries. Since you're on Web Edition, that's not an option.
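
For completeness, on Enterprise Edition the rebuild shown further down could be run online, roughly like this (a sketch only, and again not available on Web Edition):

CREATE UNIQUE CLUSTERED INDEX PK_OneGigTable
ON dbo.OneGigTable (Id)
WITH (DROP_EXISTING = ON, ONLINE = ON)
ON NewFileGroup;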

Here I'll create a database with a 1GB table in it:

USE [master];
GO
CREATE DATABASE [257236];
GO
USE [257236];
GO

CREATE TABLE dbo.OneGigTable
(
    Id int IDENTITY(1,1) NOT NULL,
    [BigColumn] nvarchar(max) NOT NULL,

    CONSTRAINT PK_OneGigTable PRIMARY KEY (Id)
);
GO

-- 16,777,216 rows x a 32-character nvarchar value (64 bytes each) is roughly 1 GB of data
INSERT INTO dbo.OneGigTable
SELECT TOP (16777216) 
    N'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
FROM master.dbo.spt_values v1
CROSS JOIN master.dbo.spt_values v2
CROSS JOIN master.dbo.spt_values v3;
GO

And you can see this data file (on the default, primary file group) is mostly full:

screenshot of disk usage report in SSMS
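
If you'd rather check this with a query than with the SSMS report, a quick sketch using the standard catalog views (run in the context of the database) is:

SELECT
    name AS LogicalFileName,
    physical_name AS PhysicalFileName,
    size / 128.0 AS FileSizeMB,                        -- size is stored in 8 KB pages
    FILEPROPERTY(name, 'SpaceUsed') / 128.0 AS UsedMB
FROM sys.database_files;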

Next I'll add a new filegroup, and add a file to that group. Note that you'll need to set the FILENAME to a path on your new disk. You should also size it appropriately based on the amount of data you expect to move into it:

ALTER DATABASE [257236]
ADD FILEGROUP NewFileGroup;
GO

ALTER DATABASE [257236]
ADD FILE
(
    NAME = NewFile_1,
    FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL12.SQL2014\MSSQL\DATA\NewFile_1.ndf',  -- use a path on your NEW disk instead
    SIZE = 1024MB
)
TO FILEGROUP NewFileGroup;
GO

Now, running the "Disk Usage" report in SSMS, I can see that new file is mostly empty:

screenshot of the Disk Usage report showing the new file mostly empty

Now I'll rebuild the table onto the new filegroup:

CREATE UNIQUE CLUSTERED INDEX PK_OneGigTable
ON dbo.OneGigTable (Id)
WITH (DROP_EXISTING = ON)
ON NewFileGroup;

And you can see now that the primary filegroup is mostly empty, and all of the data has moved to the new filegroup:

screenshot of SSMS disk usage report
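
If you want to confirm with a query (rather than the report) which filegroup the table now lives on, a sketch like this works for a non-partitioned table:

SELECT
    OBJECT_NAME(i.object_id) AS TableName,
    i.name AS IndexName,
    fg.name AS FileGroupName
FROM sys.indexes AS i
JOIN sys.filegroups AS fg
    ON fg.data_space_id = i.data_space_id
WHERE i.object_id = OBJECT_ID(N'dbo.OneGigTable');

Keep in mind that the old data file still takes up the same space on the old disk afterwards; the space freed by the move is internal to that file until you shrink it (for example with DBCC SHRINKFILE), which is a slow operation in its own right.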

License: CC-BY-SA with attribution
Not affiliated with dba.stackexchange