Changing a column's data type fills the transaction log
-
16-10-2019
Question
Running this code:
ALTER TABLE npidata
ALTER COLUMN npi varchar(20)
Gives this error:
Msg 9002, Level 17, State 4, Line 2
The transaction log for database 'SalesDWH' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases
I am changing the npi column's data type from varchar(80) to varchar(20).
The following code gives me the same error message:
insert into npidata1 select * from npidata
- log_reuse_wait_desc shows NOTHING
- The recovery mode is SIMPLE
- autogrowth is set to NONE
- autoshrink is set to true
What else can I do? My understanding is that the log should just truncate every time it gets too big. What am I doing wrong?
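For reference, the settings described above can be checked directly in sys.databases, as the error message suggests; a query like the following (standard SQL Server catalog columns) shows the log-reuse wait reason, recovery model, and auto-shrink flag:

```sql
-- Check log-reuse status, recovery model, and auto-shrink for the database
SELECT name,
       log_reuse_wait_desc,   -- why log space cannot be reused (NOTHING = no wait)
       recovery_model_desc,   -- SIMPLE / FULL / BULK_LOGGED
       is_auto_shrink_on      -- 1 = autoshrink enabled
FROM sys.databases
WHERE name = 'SalesDWH';
```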
Solution
Try setting your log file to autogrow, or temporarily add some extra log files that can handle such an operation.
BUT
it may be simpler and faster to do something like this:
USE SalesDWH
GO
ALTER TABLE npidata ADD npiNew varchar(20)
GO
UPDATE npidata SET
    npiNew = SUBSTRING(npi, 1, 20)
GO
ALTER TABLE npidata DROP COLUMN npi
GO
EXEC sp_rename 'npidata.npiNew', 'npi', 'COLUMN'
GO
DBCC CLEANTABLE(0, 'npidata', 100)
In this case log-file usage can be controlled by the UPDATE statement; you can perform it in small chunks like this:
USE SalesDWH
GO
ALTER TABLE npidata ADD npiNew varchar(20), ready bit
GO
WHILE 1 = 1
BEGIN
    UPDATE TOP (100) npidata SET
        npiNew = SUBSTRING(npi, 1, 20),
        ready = 1
    WHERE ready IS NULL
    IF @@ROWCOUNT = 0 BREAK
END
GO
ALTER TABLE npidata DROP COLUMN npi
GO
ALTER TABLE npidata DROP COLUMN ready
GO
EXEC sp_rename 'npidata.npiNew', 'npi', 'COLUMN'
GO
DBCC CLEANTABLE(0, 'npidata', 100)
For better performance when the table is big, you can add a temporary filtered index on the ready column.
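A sketch of such a temporary index (the filtered-index syntax is standard SQL Server; the index name is arbitrary):

```sql
-- Temporary filtered index so each batch can find unprocessed rows quickly;
-- the WHERE clause matches the chunked UPDATE's predicate
CREATE NONCLUSTERED INDEX IX_npidata_ready
ON npidata (ready)
WHERE ready IS NULL;

-- ... run the chunked UPDATE loop ...

-- Drop it once the migration is done
DROP INDEX IX_npidata_ready ON npidata;
```

As rows get ready = 1 they drop out of the filtered index, so the loop's WHERE ready IS NULL scan stays cheap.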
OH, YES!!!
As @RemusRusanu said: TURN OFF THE AUTOSHRINK.
OTHER TIPS
The very first thing you have to do is change AUTOSHRINK to false. There is absolutely no reason to ever have it set to true. See AUTOSHRINK: Turn it OFF!.
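Turning it off is a one-liner (standard ALTER DATABASE syntax):

```sql
-- Disable auto-shrink for the database
ALTER DATABASE SalesDWH SET AUTO_SHRINK OFF;
```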
Both operations you are attempting require a size-of-data update in a single transaction. A single transaction requires that much log, regardless of the recovery model. You must increase the log size to accommodate your transaction.
Even with SIMPLE recovery mode, the log still needs enough room to write the changes related to the transactions that happen between checkpoints.
Running with a fixed-size log is always risky, for this very reason.
The fix is to increase the size of the log, and/or enable auto-growth.
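A sketch of both fixes in one statement, assuming the log's logical file name is SalesDWH_log (check sys.database_files for the actual name) and that 8 GB is enough for the transaction:

```sql
-- Grow the log to a size that can hold the whole transaction,
-- and enable auto-growth as a safety net
ALTER DATABASE SalesDWH
MODIFY FILE (NAME = SalesDWH_log, SIZE = 8GB, FILEGROWTH = 512MB);
```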
You can try forcing a checkpoint before you run the other commands, but it's a very hit-or-miss thing. The command is:
CHECKPOINT
Also, turn off autoshrink. Shrinking your log except under very unusual circumstances is a bad, bad idea -- much less doing it on a regular basis.