Question

We're running SQL Server 2008 R2. The main transactional table in a vendor database is massive compared to all the others and has about 14 indexes. Some of those indexes don't get used in our environment, but we can't remove them; that's not a problem, just something we have to live with.

My question is why some of these low- or no-read indexes seem to be using memory, and more of it than other indexes on the same large table that get read often. I would have thought that most of the buffer cache would go to objects that are read frequently; the only thing happening on these indexes is write overhead.
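
For reference, this is roughly how I'd expect read vs. write activity per index to be checked, via sys.dm_db_index_usage_stats. The table name `dbo.BigVendorTable` is just a placeholder, not the real vendor object:

```sql
-- Sketch: read vs. write activity per index on the big vendor table.
-- dbo.BigVendorTable is a placeholder name.
SELECT  i.name          AS index_name,
        s.user_seeks,
        s.user_scans,
        s.user_lookups,
        s.user_updates  -- writes only = pure maintenance overhead
FROM    sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS s
        ON  s.object_id   = i.object_id
        AND s.index_id    = i.index_id
        AND s.database_id = DB_ID()
WHERE   i.object_id = OBJECT_ID(N'dbo.BigVendorTable')
ORDER BY ISNULL(s.user_seeks, 0) + ISNULL(s.user_scans, 0) + ISNULL(s.user_lookups, 0);
```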

For example, one of these low-read indexes has been allocated about 2 GB of buffer cache (58% of the index's total size) and another has 1.7 GB (27% of its size). Meanwhile, the monster-sized and well-used clustered index itself only has 4 GB (2% of its size), and a different nonclustered index with lots of reads only has 100 MB in the buffer cache (5% of its size).
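
Per-index buffer cache numbers like these can be obtained by counting pages in sys.dm_os_buffer_descriptors per allocation unit; a minimal sketch, again assuming a placeholder table name:

```sql
-- Sketch: buffer pool pages held per index of the (placeholder-named) table.
SELECT  OBJECT_NAME(p.object_id)   AS table_name,
        i.name                     AS index_name,
        COUNT(*) * 8 / 1024        AS buffer_mb        -- 8 KB pages -> MB
FROM    sys.dm_os_buffer_descriptors AS bd
JOIN    sys.allocation_units AS au
        ON au.allocation_unit_id = bd.allocation_unit_id
JOIN    sys.partitions AS p
        ON (au.type IN (1, 3) AND au.container_id = p.hobt_id)      -- in-row / row-overflow
        OR (au.type = 2       AND au.container_id = p.partition_id) -- LOB
JOIN    sys.indexes AS i
        ON  i.object_id = p.object_id
        AND i.index_id  = p.index_id
WHERE   bd.database_id = DB_ID()
  AND   p.object_id = OBJECT_ID(N'dbo.BigVendorTable')
GROUP BY p.object_id, i.name
ORDER BY buffer_mb DESC;
```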

Looking at the physical stats, I can see the fragmentation is pretty bad. That's understandable from all the writes on this table and the non-sequential inserts. I'm not sure if it could be related to memory usage, though.
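
For completeness, the kind of fragmentation check I mean (sys.dm_db_index_physical_stats in LIMITED mode, placeholder table name again):

```sql
-- Sketch: leaf-level fragmentation per index of the (placeholder-named) table.
SELECT  i.name                            AS index_name,
        ps.avg_fragmentation_in_percent,
        ps.page_count
FROM    sys.dm_db_index_physical_stats(
            DB_ID(), OBJECT_ID(N'dbo.BigVendorTable'), NULL, NULL, 'LIMITED') AS ps
JOIN    sys.indexes AS i
        ON  i.object_id = ps.object_id
        AND i.index_id  = ps.index_id
ORDER BY ps.avg_fragmentation_in_percent DESC;
```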

Looking at the operational stats for these indexes is also interesting (see the query sketch after this list):

  • The leaf_ghost_count reminds me that there are just about as many deletes taking place on this table as there are inserts (from a regular cleaning process).
  • One of these low-read indexes has some of the highest page_lock_wait values in the database. Perhaps that's only because of the writes?
  • Two others have some of the highest page_io_latch_wait values. I understand that IO latch waits occur while pages are read from disk into the buffer pool, so a connection to memory usage makes sense.

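The operational stats referenced above (leaf_ghost_count, page lock waits, page IO latch waits) come from sys.dm_db_index_operational_stats; a sketch of that pull, with the same placeholder table name:

```sql
-- Sketch: ghost rows, page lock waits, and page IO latch waits per index.
SELECT  i.name                       AS index_name,
        os.leaf_insert_count,
        os.leaf_delete_count,
        os.leaf_ghost_count,
        os.page_lock_wait_count,
        os.page_lock_wait_in_ms,
        os.page_io_latch_wait_count,
        os.page_io_latch_wait_in_ms
FROM    sys.dm_db_index_operational_stats(
            DB_ID(), OBJECT_ID(N'dbo.BigVendorTable'), NULL, NULL) AS os
JOIN    sys.indexes AS i
        ON  i.object_id = os.object_id
        AND i.index_id  = os.index_id
ORDER BY os.page_io_latch_wait_in_ms DESC;
```
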
I realize this is an abstract question and that I'm not providing many actual stats. I'm just curious about how SQL Server comes to these buffer cache usage decisions and wonder if anyone out there understands it.
