Is having a 92MB MDF with a 58MB LDF combo OK? How do I manage the log to keep things running well?

dba.stackexchange https://dba.stackexchange.com/questions/1518

16-10-2019

Question

I have jobs in place to trim the tables used for history and logging, to keep them small. I want to make sure I don't neglect the log file.

What can or should I do to keep the log file in check?

The SQL script I have running on a nightly schedule is:

declare @DBname varchar(500)
-- style 23 formats getdate() as yyyy-mm-dd, giving a date-stamped file name
set @DBname = 'E:\Database\backup\PMIS_backup_' + convert(varchar(10), getdate(), 23) + '.bak'

BACKUP DATABASE [PMIS] TO DISK = @DBname
WITH NOFORMAT, NOINIT, NAME = @DBname
, SKIP, REWIND, NOUNLOAD, STATS = 10

(recovery model is SIMPLE)


Solution

If your database is in the FULL or BULK_LOGGED recovery model, you need to back up both your database and your transaction log on a regular basis. If your database is in the SIMPLE recovery model, then you only need to back up your database on a regular basis.
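For the FULL or BULK_LOGGED case, a minimal sketch of a log backup in the same style as the database backup above (the path and file-naming convention are assumptions carried over from the question):

-- Back up the transaction log to a date-stamped .trn file;
-- only valid under the FULL or BULK_LOGGED recovery model
declare @LogName varchar(500)
set @LogName = 'E:\Database\backup\PMIS_log_' + convert(varchar(10), getdate(), 23) + '.trn'

BACKUP LOG [PMIS] TO DISK = @LogName
WITH NOINIT, STATS = 10

Regular log backups are what allow SQL Server to reuse log space under FULL recovery, and that reuse is what keeps the LDF from growing indefinitely.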

Please read the following articles for more info:

Other tips

Not sure if the question is about backup strategy or log file sizes.

Eric explained the recovery models.

If you're worried about the size of the log file, you can also set the log file to autogrow. SQL Server will allow you to autogrow the log file by a percentage or by a set number of megabytes. You can also set an absolute size limit if your log files have a restricted amount of growth space.
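As a sketch, those settings can be applied with ALTER DATABASE (the logical file name PMIS_log below is an assumption; check sys.database_files for the real one):

-- Find the logical name and current growth settings of the log file
SELECT name, size, growth, max_size
FROM sys.database_files
WHERE type_desc = 'LOG'

-- Grow in fixed 64MB steps and cap the file at 2GB
-- (PMIS_log is a hypothetical logical file name)
ALTER DATABASE [PMIS]
MODIFY FILE (NAME = PMIS_log, FILEGROWTH = 64MB, MAXSIZE = 2GB)

Fixed-size growth increments are generally preferable to percentage growth, since percentage steps get larger and larger as the file grows.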

If you go this route, you will most likely want to shrink the log files as part of your regular maintenance routine.
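A minimal sketch of that shrink step, again assuming the logical file name PMIS_log:

-- Shrink the log file down to a 64MB target (size is in MB).
-- Note that a shrunk log will simply regrow if the workload needs
-- the space, so this is most useful after a one-off growth event.
DBCC SHRINKFILE (PMIS_log, 64)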

With a recovery model of SIMPLE, I don't believe there is much you need to do to the log files. In SIMPLE mode, SQL Server should not be persisting transactions in the log file for very long.

I know that in Oracle the log files are overwritten once the engine gets to the end of the file and the transactions are marked as no longer needed. I'm not sure whether SQL Server follows this same methodology, or whether the log file is cleared as soon as the transactions are done processing or a checkpoint event occurs.
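One documented way to check this on a live system is to ask SQL Server what, if anything, is currently preventing log reuse:

-- Shows the recovery model and what (if anything) is blocking
-- log truncation; under SIMPLE recovery this is usually NOTHING
-- once a checkpoint has run
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'PMIS'

Under the SIMPLE model, SQL Server truncates the log at each checkpoint, so the space inside the file is marked reusable rather than the file itself being cleared or rewritten.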

I think if you are looking for an optimum size for the log file in this state (and if you have some flexibility for testing), I'd set the log file to autogrow by a few MB and set the initial size pretty low. Then let it run for an iteration or two and keep an eye on the size.
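To keep an eye on the size during such a test, a standard command reports log size and percentage used for every database:

-- Reports log file size (MB) and log space used (%) per database
DBCC SQLPERF (LOGSPACE)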

Licensed under: CC-BY-SA with attribution
Not affiliated with dba.stackexchange