Question

We were viewing sys.dm_os_buffer_descriptors using the query below:

select 
 d.[name]                                       [Database_Name],
 (count(file_id) * 8) / 1024                    [Buffer_Pool_Size_MB],
 sum(cast(free_space_in_bytes as bigint)) / 1024 / 1024     [Free_Space_MB]
from sys.dm_os_buffer_descriptors b
    join sys.databases d on
        b.database_id = d.database_id
group by d.[name]
order by [Buffer_Pool_Size_MB]

And for one of my databases, it shows [Buffer_Pool_Size_MB] = 77325 MB and [Free_Space_MB] = 15849 MB.

So about 20% of the space in the buffer pool's pages is empty. That seems like a waste of resources.

Questions:

  • Is this a problem?
  • How can the amount of free_space_in_bytes be mitigated?
  • Any other things to investigate or look at in our situation?

Solution

Is this a problem?

Maybe, maybe not. If your database(s) only make up 80% of your server's max server memory, it's about what I'd expect to see.
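To check that ratio yourself, you can pull the configured max server memory from sys.configurations and compare it to your total database size (a quick sketch; value_in_use is reported in MB):

```sql
-- What is max server memory currently set to?
SELECT c.value_in_use AS max_server_memory_mb
FROM sys.configurations AS c
WHERE c.name = 'max server memory (MB)';
```

If your databases total roughly 80% or less of that number, free space in the buffer pool is expected.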

If you have more data than RAM, then the free space is likely due to something else asking for memory. Something else could be memory grants for queries, CHECKDB, index rebuilds, etc.

It could also represent a memory fight between Windows and SQL Server. This usually happens when you don't have max server memory set to give Windows its 10% memory tithe so it can function properly.
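If max server memory was never set, it defaults to 2147483647 MB, i.e. effectively unlimited, and SQL Server will happily fight Windows for RAM. You can cap it with sp_configure. The 115000 below is purely illustrative, for a hypothetical 128 GB server leaving roughly 10% to Windows:

```sql
-- Illustrative only: cap SQL Server at ~115 GB on a hypothetical 128 GB box
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'max server memory (MB)', 115000;
RECONFIGURE;
```

Pick a value appropriate to your own server's RAM and whatever else runs on it.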

Once a something else has taken that memory, either the buffer pool hasn't re-used it yet, or various something elses continue to hold memory that the buffer pool could otherwise use.

How can the amount of free_space_in_bytes be mitigated?

If your database(s) represent ~80% of max server memory, nothing.

If they don't, you can:

  • Add memory
  • Try to get something elses to ask for less memory

Adding memory usually helps, if you add enough both to cache your most-queried objects and to provide memory grants to the processes that need them. Figuring that number out is an exercise left to you. There's no generic calculation, but a good place to start is sizing RAM at 50% of your data. In simple terms: if you have 10GB of data, have 5GB of RAM.
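To put a number on "your data" for that rule of thumb, one rough approach is to total your data file sizes. sys.master_files reports size in 8 KB pages, and this sums every database on the instance, including ones you may not care about caching:

```sql
-- Total data (ROWS) file size across all databases, in MB
SELECT SUM(CAST(mf.size AS bigint)) * 8 / 1024 AS data_file_size_mb
FROM sys.master_files AS mf
WHERE mf.type_desc = 'ROWS';
```

File size is an upper bound; actual used space inside the files may be smaller.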

Getting something elses to ask for less memory comes down to figuring out what else is asking for memory, and tuning those things appropriately.

Any other things to investigate or look at in our situation?

Yep! Look at the Stolen Server Memory counter:

SELECT *
FROM sys.dm_os_performance_counters AS dopc
WHERE dopc.counter_name LIKE 'Stolen Server Memory%';

That can tell you how much memory SQL Server has taken from the buffer pool and given to something elses.
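That counter reports in KB, so if you want it in MB, to compare against the numbers from your buffer pool query, something like this works:

```sql
-- Stolen server memory, converted from KB to MB
SELECT dopc.cntr_value / 1024 AS stolen_memory_mb
FROM sys.dm_os_performance_counters AS dopc
WHERE dopc.counter_name LIKE 'Stolen Server Memory%';
```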

You can also look to see if SQL Server has had to force any query memory grants:

SELECT *
FROM sys.dm_exec_query_resource_semaphores AS deqrs
WHERE deqrs.forced_grant_count > 0;

Since you're on SQL Server 2017, you can use sp_BlitzCache:

EXEC sp_BlitzCache @SortOrder = 'memory grant';

Which will show you queries that ask for the largest memory grants. Of course, if your server is really low on memory, your plan cache data might stink, because turnover there will likely be really high. If that's the case, stop here and add memory before going any further.

You can also try Query Store, if it's turned on. The proc is available from the same link as above.

EXEC sp_BlitzQueryStore @DatabaseName = 'Your Important Database';

Then look for the "memory grant" pattern in the results. For the queries those bring back, look for Sorts and Hashes in the plans. Those usually represent index or query-tuning opportunities. If you need help tuning those, feel free to post them as new questions. Don't tack them on to this one.

And, of course, it could also be that you just never use that memory. I'd leave it alone, though, because you might grow into that 20% someday.

Licensed under: CC-BY-SA with attribution
Not affiliated with dba.stackexchange