Question

Sitecore 6.6

I'm speaking with Sitecore Support about this as well, but thought I'd reach out to the community too.

We have a custom agent that syncs media on the file system with the media library. It's a new agent and we made the mistake of not monitoring the database size. It should be importing about 8 gigs of data, but the database ballooned to 713 GB in a pretty short amount of time. Turns out the "Blobs" table in both "master" and "web" databases is holding pretty much all of this space.
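
If anyone needs to confirm where the space is going, a standard SQL Server query along these lines (nothing Sitecore-specific, just the partition-stats DMV) will list the biggest tables:

-- List the largest tables in the current database by reserved size.
-- Run this against "master" and "web" separately.
SELECT
    t.name AS TableName,
    SUM(ps.reserved_page_count) * 8 / 1024 AS ReservedMB,
    SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS RowCnt
FROM sys.dm_db_partition_stats AS ps
JOIN sys.tables AS t ON t.object_id = ps.object_id
GROUP BY t.name
ORDER BY ReservedMB DESC;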

I attempted to use the "Clean Up Databases" tool from the Control Panel. I only selected one of the databases. This ran for 6 hours before it bombed due to consuming all the available locks on the SQL Server:

    Exception: System.Data.SqlClient.SqlException
    Message: The instance of the SQL Server Database Engine cannot obtain a LOCK
    resource at this time. Rerun your statement when there are fewer active users.
    Ask the database administrator to check the lock and memory configuration for
    this instance, or to check for long-running transactions.

It then rolled everything back. Note: I increased the SQL and DataProvider timeouts to infinity.

Anyone else deal with something like this? It would be good if I could 'clean up' the databases in smaller chunks to avoid overwhelming the SQL Server.

Thanks!

Solution

Thanks for the responses, guys.

I also spoke with support and they were able to provide a SQL script that will clean the Blobs table:

-- Collect the blob IDs that are still referenced by field values, then
-- delete Blobs rows that nothing references, 1000 at a time.
DECLARE @UsableBlobs table (
    ID uniqueidentifier
);

INSERT INTO @UsableBlobs
SELECT CONVERT(uniqueidentifier, [Value]) AS EmpID
FROM [Fields]
WHERE [Value] != ''
  AND (FieldId = '{40E50ED9-BA07-4702-992E-A912738D32DC}'
       OR FieldId = '{DBBE7D99-1388-4357-BB34-AD71EDF18ED3}');

DELETE TOP (1000) FROM [Blobs]
WHERE [BlobId] NOT IN (SELECT * FROM @UsableBlobs);

The only change I made to the script was to add the "top (1000)" so that it deleted in smaller chunks. I eventually upped that number to 200,000 and it would run for about an hour at a time.
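
If you would rather let the cleanup run to completion on its own instead of re-executing the script by hand, a loop along these lines should also work (the WHILE wrapper and the @BatchSize variable are my own additions to the support script, so treat this as a sketch):

-- Delete unreferenced blobs in repeated small batches so each
-- transaction stays small and lock usage stays low.
DECLARE @BatchSize int = 1000;

DECLARE @UsableBlobs table (ID uniqueidentifier);

INSERT INTO @UsableBlobs (ID)
SELECT CONVERT(uniqueidentifier, [Value])
FROM [Fields]
WHERE [Value] != ''
  AND (FieldId = '{40E50ED9-BA07-4702-992E-A912738D32DC}'
       OR FieldId = '{DBBE7D99-1388-4357-BB34-AD71EDF18ED3}');

WHILE 1 = 1
BEGIN
    DELETE TOP (@BatchSize) FROM [Blobs]
    WHERE [BlobId] NOT IN (SELECT ID FROM @UsableBlobs);

    IF @@ROWCOUNT = 0 BREAK;  -- nothing left to clean up
END;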

Regarding cause, we're not quite sure yet. We believe our custom agent was running too frequently, causing the inserts to stack on top of each other.

Also note that there was a Sitecore update that apparently addressed a problem with the Blobs table getting out of control. The update was 6.6, Update 3.

Other Tips

I faced a similar problem previously, and we contacted Sitecore Support.

They gave us a Sitecore Support DLL and suggested a web.config change for the DataProvider -- from the main type="Sitecore.Data.$(database).$(database)DataProvider, Sitecore.Kernel" to the new one.

The reason I am posting on this question of yours is that most of the time for us was taken by cleaning up blobs, and they gave us this DLL to speed up the blob cleanup. So I think it might help you too.

Hence, I would suggest that you contact Sitecore Support about this case as well; I am sure you will get the best solution for your situation.

Hope this helps you!

Regards, Varun Shringarpure

If you have a staging environment, I would recommend taking a copy of the database and trying to shrink it there. Part of the database size might also be in the transaction log; a rough sketch of what that check and shrink might look like is below.
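
For example, something along these lines on the copy (the logical log file name below is a placeholder; use the name reported by sys.database_files):

-- On the staging copy only: see how big the data and log files are,
-- then shrink the log file to a target size in MB.
SELECT name, type_desc, size * 8 / 1024 AS SizeMB
FROM sys.database_files;

DBCC SHRINKFILE (N'Sitecore_master_log', 1024);  -- placeholder logical name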

If you have a DBA, please get them involved.

Licensed under: CC-BY-SA with attribution