Question

I have been reading an article that explains the unwanted side effects of setting processor affinity in SQL Server.

What I have gathered is that in most cases the default SQL Server setting works best, because when we set processor affinity we bind the schedulers to specific CPUs. I have a few questions:

  1. If the default processor affinity setting works best, why would anyone need to set processor affinity explicitly?

  2. The article also states:

    With the trace flag 8002 turned on, the scheduler is not bound to CPU anymore.

    Is it fair to say that having trace flag 8002 on is just like the default processor affinity configuration, except that fewer cores are available to SQL Server?

    For example, if we set the processor affinity to 16 cores on a 32-core machine (with trace flag 8002 on), would the schedulers that have SQL threads associated with them have only those 16 cores to move between?

  3. Setting processor affinity restricts the SQL Server schedulers to a subset of cores. Does this setting also restrict the core usage of other CPU-intensive processes, such as a .NET application?

    In other words, if processor affinity on a 32-core machine is set to 16 cores, the SQL Server schedulers have only those 16 cores to run on. If so, what about another CPU-intensive application such as a .NET application? Is the .NET app free to run on all 32 cores, since it is scheduled by Windows and not by SQLOS?

This is just a hypothetical scenario I had imagined. In production I highly doubt people would run SSMS and a CPU-intensive .NET application on the same server, as that would greatly hamper performance and leave little CPU available for any other process. Correct me if I am wrong here.


Solution

  1. Processor affinity is worth setting in the specific case where your workload fits into the processor cache. That is rare in SQL Server, but it can happen when running analytics inside the database, e.g. with R.
  2. I believe that is correct, yes
  3. No, it does not - you would need to set affinity separately for each workload. Typically, if you had say 32 cores, you might reserve 2 for the OS and divide up the others - but take care to keep cores and their caches together. You could easily create a CPU set that uses a subset of cores on each socket and kill performance with cross-cache chatter! (A minimal T-SQL sketch follows this list.)
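
As a minimal sketch of items 2 and 3 - the CPU numbers are hypothetical and would have to match your actual core and NUMA layout - restricting SQL Server to half of a 32-core box looks like this, and it has no effect on the affinity of any other Windows process:

    -- Hypothetical 32-core server: restrict SQL Server to CPUs 0-15
    -- (ideally a whole NUMA node, so cores and their caches stay together).
    ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = 0 TO 15;

    -- Optionally, per the article's description of trace flag 8002, let workers
    -- move between the affinitized CPUs instead of being bound to one of them.
    -- DBCC TRACEON (8002, -1);

    -- Verify which schedulers are online and which CPUs they map to.
    SELECT scheduler_id, cpu_id, status
    FROM sys.dm_os_schedulers
    WHERE status = 'VISIBLE ONLINE';

    -- Revert to the default, letting SQLOS use every CPU the OS exposes.
    ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = AUTO;

A .NET application on the same machine is scheduled by Windows, not SQLOS, so it can still run on all 32 cores unless you restrict its affinity separately through Windows.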

Generally I would advise against this unless you are very sure that you need it and have a good understanding of the underlying hardware, how its NUMA nodes are arranged, and so on. You won't damage anything, but you can degrade performance quite badly. If you need to segregate workloads, then in 99% of use cases the better option on a modern system is Hyper-V.

Note also that the affinity mask configuration option is deprecated anyway; its replacement, ALTER SERVER CONFIGURATION SET PROCESS AFFINITY, is NUMA-aware.
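
To make that concrete, here is a minimal sketch of both forms; the values are only examples for a hypothetical 32-core, two-node machine and would differ on your hardware:

    -- Deprecated approach: 'affinity mask' takes a raw bitmask of individual CPUs.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'affinity mask', 65535;  -- bits 0-15 set = CPUs 0-15
    RECONFIGURE;

    -- NUMA-aware replacement: bind SQL Server to whole NUMA nodes instead.
    ALTER SERVER CONFIGURATION SET PROCESS AFFINITY NUMANODE = 0;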

Licensed under: CC-BY-SA with attribution
Not affiliated with dba.stackexchange