Finding reason for transaction log 'tempdb' full due to 'ACTIVE_TRANSACTION' [duplicate]
13-03-2021
Problem
Our application suddenly stopped working with the error: "The transaction log for database 'tempdb' is full due to 'ACTIVE_TRANSACTION'". It has since started working again, but how can I find out the exact cause of this error?
Solution
You can use this query to get a list of the queries actively running across the server, including their process ID, how long they have been running, which database they are running in, who ran them, wait-stats info, blocking session IDs, etc. I recommend putting it inside a stored procedure so that you can easily call it as needed (even with its default parameters), and you can schedule it with a SQL Agent job to routinely log the results to a table.
DECLARE @Database VARCHAR(100) = NULL
DECLARE @EntityName VARCHAR(100) = NULL
DECLARE @ExcludeMyQueries BIT = 0
DECLARE @RunningThreadsOnly BIT = 0
DECLARE @LocksOnly BIT = 0
DECLARE @MinDuration INT = 0
DECLARE @OutputMode INT = 0 -- TODO: Setup formally documented modes, and clean up usage below, can even have different kinds of modes like Verbose, vs EmergenciesOnly, etc
DECLARE @ExecutionDate DATETIME = GETDATE();
SELECT
SP.HostName,
SP.SPID,
ER.request_id AS RequestId,
ER.percent_complete AS PercentComplete,
DATEDIFF(s, start_time, @ExecutionDate) AS Duration,
CAST(((DATEDIFF(s, start_time, @ExecutionDate)) / 3600) AS VARCHAR) + ' hour(s), '
+ CAST((DATEDIFF(s, start_time, @ExecutionDate) % 3600) / 60 AS VARCHAR) + ' min, '
+ CAST((DATEDIFF(s, start_time, @ExecutionDate) % 60) AS VARCHAR) + ' sec' AS RunningTime,
CAST((estimated_completion_time / 3600000) AS VARCHAR) + ' hour(s), '
+ CAST((estimated_completion_time % 3600000) / 60000 AS VARCHAR) + ' min, '
+ CAST((estimated_completion_time % 60000) / 1000 AS VARCHAR) + ' sec' AS EstimatedTimeRemaining,
DATEADD(SECOND, estimated_completion_time/1000, @ExecutionDate) AS EstimatedCompletionDate,
ER.Command,
ER.blocking_session_id AS BlockingSessionId,
LastWaitType,
SP.[DBID],
DB_NAME(SP.[DBID]) AS DbName,
ST.[text] AS EntityText, -- needed by the Lightweight output mode below
CPU,
ER.plan_handle AS PlanHandle,
ER.query_plan_hash AS QueryPlanHash,
LOGIN_TIME AS LoginTime,
LOGINAME AS LoginName,
SP.[Status],
[PROGRAM_NAME] AS ProgramName,
NT_DOMAIN AS NT_Domain,
NT_USERNAME AS NT_Username,
@@SERVERNAME AS ServerName,
@ExecutionDate AS ExecutionDate
INTO #ExecutingQueries
FROM sys.sysprocesses SP
INNER JOIN sys.dm_exec_requests ER
ON SP.SPID = ER.session_id
OUTER APPLY sys.dm_exec_sql_text(ER.sql_handle) ST -- OUTER APPLY keeps rows even when no SQL text is available
WHERE --TEXT NOT LIKE N'%spGetRunningQueries%'
--AND
DB_NAME(SP.dbid) NOT IN ('msdb','master','Distribution')
AND
(
@Database IS NULL
OR (@Database IS NOT NULL AND @Database = DB_NAME(SP.[DBID]))
)
AND
(
@ExcludeMyQueries = 0
OR (@ExcludeMyQueries = 1 AND hostname <> HOST_NAME())
)
AND
(
@RunningThreadsOnly = 0
OR (@RunningThreadsOnly = 1 AND SP.[Status] = 'RUNNABLE')
)
AND
(
@LocksOnly = 0
OR (@LocksOnly = 1 AND ER.blocking_session_id <> 0) -- TODO: Show the source running query that IS the blocking session ID (will need to join / union this in somehow?)
)
AND DATEDIFF(s, start_time, @ExecutionDate) >= @MinDuration
-- TODO: Clean this up and DON'T USE SELECT *
IF (@OutputMode = 0) -- Everything mode
BEGIN
SELECT *
FROM #ExecutingQueries
END
ELSE IF (@OutputMode = 1) -- Lightweight mode
BEGIN
SELECT
HostName,
SPID,
RequestId,
RunningTime,
BlockingSessionId,
LastWaitType,
DbName,
LEFT(EntityText, 100) AS EntityText,
PlanHandle,
[Status]
FROM #ExecutingQueries
END
Using this, you can look at the longest-running queries (the Duration or RunningTime columns) and also see why they have been running so long by looking at the LastWaitType and/or BlockingSessionId columns.
This is a work in progress that I've built up from some queries I found in the past (I don't recall exactly where, but if someone recognizes it and knows the source, I'd love to cite it) and that I regularly use to debug issues on SQL Server.
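The stored-procedure wrapper suggested above could be sketched like this. The name spGetRunningQueries comes from the commented-out self-filter in the script; the parameter defaults mirror the DECLAREs, and the body is elided, so treat this as an outline rather than a finished implementation:

```sql
-- Sketch: wrapping the diagnostic script above in a stored procedure
-- so it can be called ad hoc or from a SQL Agent job.
CREATE OR ALTER PROCEDURE dbo.spGetRunningQueries
    @Database VARCHAR(100) = NULL,
    @EntityName VARCHAR(100) = NULL,
    @ExcludeMyQueries BIT = 0,
    @RunningThreadsOnly BIT = 0,
    @LocksOnly BIT = 0,
    @MinDuration INT = 0,
    @OutputMode INT = 0
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @ExecutionDate DATETIME = GETDATE();
    -- ...body of the script above, with its DECLARE lines removed,
    -- since the parameters now come from the procedure signature...
END
```

Called as, e.g., `EXEC dbo.spGetRunningQueries @OutputMode = 1;` for the lightweight output, or with an `INSERT ... EXEC` from an Agent job step to log results to a table.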
Other tips
You probably have an active transaction in tempdb. You can check the open-transaction count for your current session with:
SELECT @@TRANCOUNT
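Since @@TRANCOUNT only covers your own session, something along these lines against the transaction DMVs should show which sessions hold active transactions in tempdb, and how much log each has generated (column names taken from sys.dm_tran_database_transactions and sys.dm_tran_session_transactions; DBCC OPENTRAN is a quicker alternative for just the oldest one):

```sql
-- Sessions with an active transaction in tempdb, biggest log consumers first
SELECT ST.session_id,
       DT.database_transaction_begin_time,
       DT.database_transaction_log_bytes_used
FROM sys.dm_tran_database_transactions DT
INNER JOIN sys.dm_tran_session_transactions ST
    ON DT.transaction_id = ST.transaction_id
WHERE DT.database_id = DB_ID('tempdb')
ORDER BY DT.database_transaction_log_bytes_used DESC;

-- Or just report the oldest active transaction in tempdb
DBCC OPENTRAN ('tempdb');
```

The session_id can then be cross-referenced against the running-queries output above to see what that session is executing.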
You can also inspect the contents of tempdb to look for the culprit.
SELECT * FROM tempdb..sysobjects
What I would do is check which queries/jobs are running when you get this error, in order to find exactly what is getting stuck.
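If the error recurs, it can also help to confirm that tempdb's log really is the bottleneck and what SQL Server says is holding it up. A quick check along these lines (log_reuse_wait_desc lives in sys.databases; sys.dm_db_log_space_usage requires SQL Server 2012 or later, and DBCC SQLPERF(LOGSPACE) is the older equivalent):

```sql
-- Why can't the log be reused? Expect ACTIVE_TRANSACTION while the problem is live
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'tempdb';

-- How full is tempdb's log right now?
SELECT total_log_size_in_bytes / 1048576.0 AS TotalLogSizeMB,
       used_log_space_in_bytes / 1048576.0 AS UsedLogSpaceMB,
       used_log_space_in_percent           AS UsedLogSpacePercent
FROM tempdb.sys.dm_db_log_space_usage;
```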