Question

We have a 6-step process in which we copy tables from one database to another. Each step executes a stored procedure:

  1. Remove tables from destination database
  2. Create tables in destination database
  3. Shrink database log before copy
  4. Copy tables from source to destination
  5. Shrink the database log
  6. Back up destination database

During step 4, our transaction log (.ldf file) grows so large that we now have to keep increasing its maximum size on the SQL Server, and we believe that eventually it may consume all the resources on our server. It was suggested that in our script we commit each transaction individually instead of waiting until the end to commit them all.

Any suggestions?

Solution

I'll assume that you are moving large amounts of data. The typical solution to this problem is to break the copy up into smaller batches of rows; this keeps the hit on the transaction log smaller. I think this will be the preferred answer.
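Here's a minimal sketch of the batching idea, assuming a hypothetical `Orders` table with an integer key `OrderID` (substitute your own tables and key columns):

```sql
-- Copy in batches so each INSERT commits on its own (autocommit),
-- keeping the transaction log footprint to one batch at a time.
DECLARE @BatchSize int = 50000,
        @Rows      int = 1;

WHILE @Rows > 0
BEGIN
    INSERT INTO DestDB.dbo.Orders (OrderID, CustomerID, OrderDate)
    SELECT TOP (@BatchSize) s.OrderID, s.CustomerID, s.OrderDate
    FROM SourceDB.dbo.Orders AS s
    WHERE NOT EXISTS (SELECT 1
                      FROM DestDB.dbo.Orders AS d
                      WHERE d.OrderID = s.OrderID);

    SET @Rows = @@ROWCOUNT;  -- 0 once everything has been copied
END;
```

Under the SIMPLE recovery model (or with log backups between batches), the log space used by each committed batch becomes reusable, so the log stops growing without bound.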

The other answer that I have seen is to use Bulk Copy, which writes the data out to a data file and then imports it into your target database. I've seen a lot of posts that recommend this, though I haven't tried it myself.
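If you go that route, the export side is done with the `bcp` command-line utility and the import side can be either `bcp in` or T-SQL's `BULK INSERT`. A sketch of the import half, assuming a hypothetical file path and the same `Orders` table:

```sql
-- Assumes the source table was first exported in native format, e.g.:
--   bcp SourceDB.dbo.Orders out C:\temp\orders.dat -n -S myserver -T
BULK INSERT DestDB.dbo.Orders
FROM 'C:\temp\orders.dat'
WITH (DATAFILETYPE = 'native',  -- matches bcp's -n (native) flag
      TABLOCK);                 -- table lock allows minimal logging
```

Under the BULK_LOGGED or SIMPLE recovery model, a bulk load with TABLOCK into a heap is minimally logged, which is exactly what you want here.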

If the schema of the target tables isn't changing, could you not just truncate the target tables instead of dropping and recreating them?
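For example (table name hypothetical):

```sql
-- TRUNCATE deallocates whole pages and logs very little, unlike DELETE;
-- note that it fails if the table is referenced by a foreign key.
TRUNCATE TABLE DestDB.dbo.Orders;
```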

OTHER TIPS

Can you change the database recovery model to Bulk Logged for this process?
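Something along these lines, with the database name obviously being your own:

```sql
-- Switch to bulk-logged just for the copy, then switch back.
ALTER DATABASE DestDB SET RECOVERY BULK_LOGGED;

-- ... run the copy (your step 4) here ...

ALTER DATABASE DestDB SET RECOVERY FULL;
-- Take a log backup afterwards to restore point-in-time recovery.
BACKUP LOG DestDB TO DISK = 'C:\Backups\DestDB_log.trn';
```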

Then, instead of creating empty tables at the destination, do a SELECT INTO to create them. Once they are built, alter the tables to add indices and constraints. Doing bulk copies like this will greatly reduce your logging requirements.
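A sketch of that pattern, again using the hypothetical `Orders` table:

```sql
-- Under BULK_LOGGED or SIMPLE recovery, SELECT INTO is minimally logged.
SELECT s.*
INTO DestDB.dbo.Orders            -- creates the table and copies the rows
FROM SourceDB.dbo.Orders AS s;

-- Add constraints and indexes once the data is in place
-- (assumes OrderID is NOT NULL in the source table).
ALTER TABLE DestDB.dbo.Orders
    ADD CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderID);
```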
