Question

I have previous experience setting MAXDOP for on-premises OLTP and OLAP servers where I knew the number of cores at my disposal. E.g. if I knew I had 16 cores available, I'd set MAXDOP to 2 or 4 for my OLTP system.

I just joined a project and noticed the core production database is set to MAXDOP 1, which prohibits any parallel operations.

It is a dedicated Premium P11 server (so no elastic pool), and DTUs abstract away the exact CPU and memory capacity. Without being able to tell how many cores you are dealing with, is there a sensible starting configuration? I'm finding it a little tricky to port on-prem best practices to Azure SQL.

The test servers have a different configuration with a shared elastic pool, so there is no easy way to test specific execution plans there either (especially as everything is written through the Entity Framework ORM).


Solution

DTU is a blended measure of CPU, memory, data I/O, and transaction log I/O. This means there is no definitive formula equating x DTUs to a specific number of CPUs or a specific amount of memory.

Since you have the Premium P11 tier, you have 1750 DTUs. Based on this blog post by Andy Mallon, below is the cores-to-DTU-to-service-tier mapping:

+--------------+------+---------------+
| Number Cores | DTUs | Service Tier  |
+--------------+------+---------------+
| 1            |  100 | Standard – S3 |
| 2-4          |  500 | Premium – P4  |
| 5-8          | 1000 | Premium – P6  |
| 9-13         | 1750 | Premium – P11 |
| 14-16        | 4000 | Premium – P15 |
+--------------+------+---------------+
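If you want to confirm the tier the database is actually running at (rather than relying on the portal), a minimal check from within the database itself looks like this; `DATABASEPROPERTYEX` and `sys.database_service_objectives` are standard ways to read the service objective on Azure SQL Database:

```sql
-- Current service objective (e.g. 'P11') and edition for this database
SELECT DATABASEPROPERTYEX(DB_NAME(), 'ServiceObjective') AS service_objective,
       DATABASEPROPERTYEX(DB_NAME(), 'Edition')          AS edition;

-- Alternative: the catalog view exposing the same information
SELECT database_name, edition, service_objective
FROM sys.database_service_objectives;
```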

You should use Query Performance Insight for Azure SQL Database to review and adjust the workload settings. Since the database-scoped MAXDOP configuration is set to a non-default value (the default is 0, meaning the engine decides; yours is set to 1), you will need to review it and potentially change it so that the database can utilize all available cores for parallel plans.
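To inspect and change the database-scoped MAXDOP setting, the sketch below uses the documented `sys.database_scoped_configurations` view and `ALTER DATABASE SCOPED CONFIGURATION` statement; the value 0 shown here is the default ("let the engine decide"), and any specific number you choose should be validated against your own workload:

```sql
-- Inspect the current database-scoped MAXDOP value (1 in this case)
SELECT name, value, value_for_secondary
FROM sys.database_scoped_configurations
WHERE name = 'MAXDOP';

-- Restore the default (0 = engine decides), re-enabling parallelism
ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 0;

-- Individual queries can still be capped without touching the database setting
-- via a query hint, e.g.:
--   SELECT ... FROM dbo.SomeTable OPTION (MAXDOP 2);
```

Note that the database-scoped setting applies to new queries immediately; no restart is required, but cached plans compiled under the old setting may persist until recompiled.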

Licensed under: CC-BY-SA with attribution