Question

I need to fill up a transaction log file in the shortest possible amount of time.

I have tried a few variants of a loop of inserts and deletes on the same record, fired against separate tables in several sessions at the same time. The result was not bad, but it is still too slow - about 1 GB per minute - and my log file is 500 GB (despite regular backups; it is a very busy system and it is in an Availability Group).

Could you suggest a clever and effective way to fill up the file?

And in case you are wondering why I am doing such a strange thing: I need to temporarily create a secondary log file and switch the head of the log over to it. Then I will be able to truncate and recreate the primary log file. Because of an autogrowth increment that was too small, it now consists of some 15,000 VLFs, which I want to fix.


Solution

Two aspects:

1: Why not just empty the log (BACKUP LOG, or temporarily switch to the simple recovery model), shrink the log file to an extremely small size, and then expand it (once or a few times, depending on which version of SQL Server you are on and how many VLFs you want) to the size you want? That is IMO the easiest way to manage a large number of VLFs (a sketch follows).
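
As a rough sketch of option 1 - the database name yourdb, the logical log file name yourdb_log, and the backup path are placeholders to adapt, and the sizes are only examples:

-- Empty the active log (or, where possible, switch temporarily to SIMPLE recovery)
BACKUP LOG yourdb TO DISK = N'X:\backup\yourdb_log.trn';
GO

USE yourdb;
GO

-- Shrink the log file to (almost) nothing; this may take a second
-- log backup + shrink if the active VLF sits near the end of the file
DBCC SHRINKFILE(yourdb_log, 1);
GO

-- Grow it back out in a few even steps to end up with a modest number
-- of reasonably sized VLFs, then review the autogrowth increment
ALTER DATABASE yourdb MODIFY FILE (name = 'yourdb_log', size =  8000MB);
ALTER DATABASE yourdb MODIFY FILE (name = 'yourdb_log', size = 16000MB);
-- ...repeat until the log is back at the size you actually need

Note that a database in an Availability Group must stay in the full recovery model, so in that case the BACKUP LOG route is the one that applies.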

2: Assuming the above isn't doable, you want to do many modifications in the same transaction. One way I typically do this is to create a dummy table and populate it with an INSERT from a subselect. I typically use sys.columns for the subselect, cross join it three times, and use TOP with as many rows as you find convenient. Below is an example. 10,000,000 rows made the log file grow from 8 MB to 4 GB in 35 seconds. The final DELETE caused the log file to grow to 6 GB in 30 seconds. (I'm using DELETE instead of TRUNCATE since DELETE consumes much more log space, whereas TRUNCATE uses very little.) You can now repeat this over and over again.

USE master;
GO

-- Drop any previous copy of the scratch database (SQL Server 2016+ syntax)
DROP DATABASE IF EXISTS x;
GO

CREATE DATABASE x;
GO

USE x;
GO

CREATE TABLE dummy(c1 int IDENTITY, c2 char(80));
GO

-- Cross join sys.columns with itself three times to get plenty of rows;
-- TOP caps it at 10,000,000
INSERT INTO dummy(c2)
SELECT TOP(10000000) 'Hello'
FROM sys.columns AS a, sys.columns AS b, sys.columns AS c;

-- DELETE instead of TRUNCATE so the removal is fully logged as well
DELETE FROM dummy;

Other tips

This is an XY problem. You want to fix the VLFs in your log file, not fill a transaction log so it overflows into a second log file, freeing up the first log file.

There are definitely ways to stabilize VLFs, even in an AG, even in full recovery, and without disrupting your log chain, creating a second log file, or sending a bunch of bogus transactions to your secondaries. Here is a long-winded but easily reproducible example.


First, let's create a database with less-than-optimal filegrowth for the log. This will generate a pretty bad VLF layout. Let's also take a backup so it isn't operating in pseudo-simple mode:

CREATE DATABASE abc 
      ON (name = N'abc_data', filename   = N'c:\temp\abc.mdf')
  LOG ON (name = N'abc_log',  filename   = N'c:\temp\abc.ldf', 
          size = 1024KB,      filegrowth = 1024KB);
GO

ALTER DATABASE abc SET RECOVERY FULL;
GO

BACKUP DATABASE abc TO DISK = 'c:\temp\abc.bak' WITH INIT, COMPRESSION;
GO

Now, before adding any data, let's look at the VLF and file info (note that your metrics may differ slightly):

SELECT current_vlf_size_mb, total_vlf_count, active_vlf_count,
  avg_size_mb  = total_log_size_mb / total_vlf_count,
  [used_log_%] = (SELECT used_log_space_in_percent
                  FROM abc.sys.dm_db_log_space_usage)
FROM sys.dm_db_log_stats(DB_ID(N'abc'));

-------------------  ---------------  ----------------  -----------  ----------
current_vlf_size_mb  total_vlf_count  active_vlf_count  avg_size_mb  used_log_%
-------------------  ---------------  ----------------  -----------  ----------
0.242187             4                1                 0.24804675   49.6063

Let's create a table and generate a bunch of log activity to encourage some sub-optimal log growth (this is to simulate the problem in a new database, not to solve the problem you already have):

USE abc;
GO
SET NOCOUNT ON;
GO

CREATE TABLE dbo.flarb(a int, x char(4020) not null default 'x');
GO

BEGIN TRANSACTION;
INSERT dbo.flarb(a) SELECT object_id FROM sys.all_objects;
COMMIT TRANSACTION;

BEGIN TRANSACTION;
INSERT dbo.flarb(a) SELECT object_id FROM sys.all_objects;
COMMIT TRANSACTION;
-- GO 500 runs the preceding batch (both insert transactions) 500 times
GO 500

After we add that data, let's run query 1 again (and also EXEC abc.sys.sp_helpfile;):

-------------------  ---------------  ----------------  -----------  ----------
current_vlf_size_mb  total_vlf_count  active_vlf_count  avg_size_mb  used_log_%
-------------------  ---------------  ----------------  -----------  ----------
1                    11035            11032             0.997552532  99.97239

name     size  
-------  -----------
abc_log  11272192 KB 

Your numbers may vary depending on SQL Server version and how many objects are in your database, but the order of magnitude should be similar. Let's see what happens when we try to shrink our log file:

DBCC SHRINKFILE(abc_log, 4000);

Running query 1 + sp_helpfile again:

-------------------  ---------------  ----------------  -----------  ----------
current_vlf_size_mb  total_vlf_count  active_vlf_count  avg_size_mb  used_log_%
-------------------  ---------------  ----------------  -----------  ----------
1                    11033            11032              0.99755208  99.9913

name     size  
-------  -----------
abc_log  11270144 KB

I mean, it shrank a wee bit, but that didn't have a meaningful impact on VLFs. Think about it - it can't shrink the log because the log hasn't been backed up, so all that data is still active and can't be removed. So let's try backing up the log, running another shrink, and then running the query again:

BACKUP LOG abc TO DISK = 'C:\temp\abc.bak.1.trn' WITH INIT, COMPRESSION;
DBCC SHRINKFILE(abc_log, 4000);

-------------------  ---------------  ----------------  -----------  ----------
current_vlf_size_mb  total_vlf_count  active_vlf_count  avg_size_mb  used_log_%
-------------------  ---------------  ----------------  -----------  ----------
0.242187             11033            6                 0.997552088  3.1719937

name     size  
-------  -----------
abc_log  11270144 KB

No good. While we got the log usage down to 3%, we still couldn't reduce the size of the log. Let's try again:

BACKUP LOG abc TO DISK = 'C:\temp\abc.bak.2.trn' WITH INIT, COMPRESSION;
DBCC SHRINKFILE(abc_log, 4000);

-------------------  ---------------  ----------------  -----------  ----------
current_vlf_size_mb  total_vlf_count  active_vlf_count  avg_size_mb  used_log_%
-------------------  ---------------  ----------------  -----------  ----------
0.265625             11033            5                 0.997552088  3.1494207

name     size  
-------  -----------
abc_log  4096000 KB

Getting closer! This time, the log file agreed to shrink, but the VLF count is still out of proportion. So, we'll try one more time:

BACKUP LOG abc TO DISK = 'C:\temp\abc.bak.3.trn' WITH INIT, COMPRESSION;
DBCC SHRINKFILE(abc_log, 4000);

-------------------  ---------------  ----------------  -----------  ----------
current_vlf_size_mb  total_vlf_count  active_vlf_count  avg_size_mb  used_log_%
-------------------  ---------------  ----------------  -----------  ----------
0.265625             4027             1                 0.993293317  3.153424

name     size  
-------  -----------
abc_log  4096000 KB

This time, it worked a little better. We got the size down and the log usage % down, and the VLF count is reduced, but it is still too high (and the average VLF size is too small).

So let's shrink the log file down as far as it will go, then grow it in even 8 GB chunks, using 64 GB as an example; you may need more, or that may be ridiculous. But the point is to pick a size the log has never needed to exceed before, because if you want to control VLFs, you're going to want to prevent the file from auto-growing, and if it grew to 120 GB before, it will grow to 120 GB again (lots of background here):

DBCC SHRINKFILE(abc_log, 1);
GO
ALTER DATABASE abc MODIFY FILE (name = 'abc_log', size =  8000MB);
ALTER DATABASE abc MODIFY FILE (name = 'abc_log', size = 16000MB);
ALTER DATABASE abc MODIFY FILE (name = 'abc_log', size = 24000MB);
ALTER DATABASE abc MODIFY FILE (name = 'abc_log', size = 32000MB);
ALTER DATABASE abc MODIFY FILE (name = 'abc_log', size = 40000MB);
ALTER DATABASE abc MODIFY FILE (name = 'abc_log', size = 48000MB);
ALTER DATABASE abc MODIFY FILE (name = 'abc_log', size = 56000MB);
ALTER DATABASE abc MODIFY FILE (name = 'abc_log', size = 64000MB);
-- should probably fix auto-growth here, too
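-- For example (the 2048MB increment below is only an illustration; pick what suits you):
-- ALTER DATABASE abc MODIFY FILE (name = 'abc_log', filegrowth = 2048MB);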

Now, query 1 + sp_helpfile returns:

-------------------  ---------------  ----------------  -----------  ----------
current_vlf_size_mb  total_vlf_count  active_vlf_count  avg_size_mb  used_log_%
-------------------  ---------------  ----------------  -----------  ----------
0.265625             132              1                 484.8484257  0.00710449

name     size  
-------  -----------
abc_log  65536000 KB

Your results may vary - you may have 1,000 VLFs, you may have 700; it depends on how the shrink and growth operations work through the log. But you should be in a better place, and you can keep experimenting with sizes that get you toward an optimal VLF count/size (say, 0.5 GB per VLF).
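
If you want to check the individual VLF sizes while you experiment, rather than just the averages from query 1, one option is a small query against sys.dm_db_log_info (available from SQL Server 2016 SP2):

-- Summarize the individual VLFs in abc's log; aim for a modest count
-- of reasonably large VLFs (e.g. around 0.5 GB each, as mentioned above)
SELECT vlf_count   = COUNT(*),
       active_vlfs = SUM(CASE WHEN vlf_active = 1 THEN 1 ELSE 0 END),
       min_size_mb = MIN(vlf_size_mb),
       max_size_mb = MAX(vlf_size_mb),
       avg_size_mb = AVG(vlf_size_mb)
FROM sys.dm_db_log_info(DB_ID(N'abc'));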

Remember that these log backups were introduced into your log chain, so you should keep them with your other log backups in case you need to recover, or do this right before your next full backup.
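
If you'd rather not wait for the next scheduled full backup, one option is to take one right after the cleanup (the destination path here is only a placeholder), so these ad-hoc log backups only matter for the short window before it:

-- New full backup taken right after the VLF cleanup; later point-in-time
-- restores can start from here instead of the older chain
BACKUP DATABASE abc TO DISK = N'C:\temp\abc_after_vlf_fix.bak' WITH INIT, COMPRESSION;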

Also, you may wonder, why wasn't one log backup enough? Some background here.

I think you are talking about Oracle redo log files. To fill redo log files quickly, pick a very large table, for example one with 100 million rows; if the table has a DATE column, the update below will generate a very large amount of redo:

update big_table set date_column = sysdate where rownum < 500000;
Licensed under: CC-BY-SA with attribution