Question

I have a database which takes up 500 GB, and I have noticed that my PLE is low even though I have 100 GB of RAM. I searched Google for reasons for that and found some interesting queries. I have a query which runs on a very big table; even though it uses the clustered index it causes PLE to drop. I checked what is in the buffer pool before and after the query, and the moment the query is done it looks like the DB has to free some space for the index used in my query; after the query the old indexes are loaded back into the buffer pool. My question is: how much RAM should the DB have, and how could I calculate it? Should it be an amount which will let me keep all used indexes in memory at once? Or maybe I should reduce the data kept in the tables so the indexes will be smaller?


Solution

Every time you scan a big index (or a big table, if you prefer), that data has to be brought from disk into memory. When this happens, if the buffer pool is holding data from other objects and does not have enough free space to accommodate the pages you are reading from disk, something has to be discarded.

Low PLE means that this process happens too often, thrashing your buffer pool and overwhelming your I/O subsystem.
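
If you want to see the current PLE value for yourself, the performance counters DMV exposes it; a minimal sketch:

-- Current Page Life Expectancy, overall (Buffer Manager) and per NUMA node (Buffer Node)
select [object_name], counter_name, cntr_value as ple_seconds
from sys.dm_os_performance_counters
where counter_name = 'Page life expectancy';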

Some ways to improve it:

  1. Add more RAM. Easy and not very expensive nowadays. Doesn't really solve the problem, but can be an acceptable short-term solution. How much RAM? Enough to hold the active portion of your databases. Only you can tell how much that is. Not all versions/editions of SQL Server and Windows can use the same amount of RAM, so make sure you're not limited by that.
  2. Purge some data. Are you sure you need all the data you have in there?
  3. Create smaller indexes. That means including in your indexes the minimum set of columns that cover your queries. Big scans work faster on smaller indexes and require less RAM. The tradeoff is more space used on disk and more operations to perform when updating the table.
  4. Create filtered indexes. If you are often filtering on a common non-selective condition (e.g. active = true or something similar), a filtered index might help reduce the size of the indexes (see the sketch after this list). This has some impact on the application as well (some SET options restrictions to make it work).
  5. Use compression. Data is compressed both on disk and in memory, so ROW/PAGE compression can be a way to reduce memory consumption. You will need Enterprise Edition (prior to SQL Server 2016 SP1) and you have to give up some CPU, but it's often worth it for huge objects.
  6. Use the correct data types: using (n)char where (n)varchar would do, or int when tinyint is enough, wastes not only disk space but also buffer pool space. This goes for both character and numeric data types: make sure you're using the right type/size/precision/scale for your data columns.
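
To illustrate points 3 to 5, here is a minimal sketch of a narrow, filtered, covering index with page compression; the table dbo.Orders and its columns are made-up names, not taken from the question, so adapt them to your own schema:

-- Hypothetical example: narrow key, covering INCLUDE list, filter, page compression
create nonclustered index IX_Orders_Active_OrderDate
    on dbo.Orders (OrderDate, CustomerID)   -- only the columns the query filters/joins on
    include (TotalDue)                      -- plus the few columns it returns
    where Active = 1                        -- filtered: skip rows the query never reads
    with (data_compression = page);         -- fewer pages on disk and in the buffer pool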

OTHER TIPS

even though I have 100 GB of RAM.

What version and edition of SQL Server are you using? 2012 (and earlier) Standard Edition will only use up to 64 GB, so if you are on one of those there is little point adding more memory in this situation. Even in later releases the limit is 128 GB for Standard Edition. Of course, if you are running Enterprise then such limits are not present.
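
If you are not sure exactly what you are running, SERVERPROPERTY will tell you; a quick sketch:

-- Which version, servicing level and edition is this instance?
select serverproperty('ProductVersion') as product_version,
       serverproperty('ProductLevel')   as product_level,
       serverproperty('Edition')        as edition;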

I have a query which runs on a very big table; even though it uses the clustered index it causes PLE to drop.

"even though it uses clustered index" - if it is scanning the clustered index then it is essentially reading the whole table so unless the query needs to consider every column of every row to produce its result then there is potential for refactoring the query and/or updating the way the tables involved are indexed. We can't help further on that without details (ideally: table definitions including indexes & keys, the query, the estimated & actual query plans) we can't help more there.

My question is how much RAM should the DB have, and how could I calculate it?

This is a difficult question, unfortunately. Many long articles have been written on the subject.

should it be an amount which will let me keep all used indexes in memory at once?

The rule of thumb is "enough for the common working set to always be in memory" (in many cases "all used indexes in memory" means more-or-less the same thing), which usually works out to a small multiple of the common working set. You can also think about it the other way around: for the same amount of data, can I better design my indexes and queries to reduce the common working set required? If you never scan a full index or heap then you don't need to make sure that the whole lot stays in RAM.
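
One rough way to see how big the current working set is, per database, is to count the pages each database is holding in the buffer pool (a sketch; the DMV has one row per 8 KB page, so this can take a while on a large buffer pool):

-- Buffer pool usage per database, in MB
select case database_id
           when 32767 then 'ResourceDb'
           else db_name(database_id)
       end                 as database_name,
       count(*) * 8 / 1024 as buffer_pool_mb
from sys.dm_os_buffer_descriptors
group by database_id
order by count(*) desc;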

Or maybe I should reduce data kept in tables so indexes will be smaller?

If you need the data then you need the data. You could archive off old data to another DB, though this is usually done to save space and processing time for full backups and so forth rather than for run-time issues: a well designed/indexed DB with well behaved apps generally shouldn't need this.

Apart from the existing answers, you will need to know how RAM is utilised by SQL Server.

Suppose you have a box with 100 GB of RAM dedicated to SQL Server only. SQL Server will utilise all of this RAM unless you constrain it, so let's say you constrain SQL Server's maximum memory to 94 GB...
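
A minimal sketch of how that cap is usually set (94 GB only mirrors the example above; pick a value that suits your own workload and OS):

-- max server memory is specified in MB: 94 GB = 94 * 1024 = 96256 MB
exec sys.sp_configure 'show advanced options', 1;
reconfigure;
exec sys.sp_configure 'max server memory (MB)', 96256;
reconfigure;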

Let's look at how this 94 GB of memory is used and which components are its main consumers.

--running the DMV below lists the memory clerks that may be using this 94 GB of RAM

select sum(pages_kb) / 1024 as size_in_mb,
       type                 as clerk_type
from sys.dm_os_memory_clerks
group by type
order by sum(pages_kb) desc;

size_in_mb  clerk_type
92469779    MEMORYCLERK_SQLBUFFERPOOL
2889702     CACHESTORE_OBJCP
786610      CACHESTORE_SQLCP
274682      OBJECTSTORE_LOCK_MANAGER
221056      MEMORYCLERK_SOSNODE
206657      USERSTORE_SCHEMAMGR
186691      USERSTORE_TOKENPERM
125331      USERSTORE_DBMETADATA
84542       MEMORYCLERK_SQLSTORENG
As you can see from the output above, there are many components using memory right now, and the buffer pool is the largest consumer, followed by the plan cache and the lock manager.

One important point to note is that these components adjust memory among themselves. For example, memory used by the plan cache is stolen from the buffer pool, so if your plan cache fills up with ad hoc plans, the buffer pool has less memory to operate with, which in turn pushes pages out of the buffer pool back to disk, and that is not good.
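
If you suspect single-use ad hoc plans are eating into the cache, here is a quick sketch to measure them; if that slice is large, the 'optimize for ad hoc workloads' server option is worth reading up on:

-- Plan cache taken up by plans that have only ever been used once
select objtype,
       count(*)                                         as plan_count,
       sum(cast(size_in_bytes as bigint)) / 1024 / 1024 as cache_mb
from sys.dm_exec_cached_plans
where usecounts = 1
group by objtype
order by cache_mb desc;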

Now that you understand how the RAM you set as the limit is used, it is good to know which components use the 6 GB of RAM you left to the OS.

Below are the components that take their memory from that 6 GB:

1. OS itself
2. Linked servers
3. Extended stored procedures
4. Third-party DLLs or add-ins
5. Database Mail
6. SSIS
7. SSRS
8. SSAS, and so on

Now that you have understood how SQL Server utilises the RAM, you can more easily make decisions and start troubleshooting when any issue arises.

Licensed under: CC-BY-SA with attribution
Not affiliated with dba.stackexchange