Question

The article "SQL Server tempdb Best Practices Increase Performance" suggests that I should split tempdb into a number of data files equal to the number of cores, so a 4-core server would get 4 files.
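For reference, this is normally done by adding extra data files to tempdb with ALTER DATABASE. Here is a minimal sketch for a 4-core server; the logical names (other than the default tempdev), file paths, and sizes are placeholders I've assumed, not values from the article:

    -- Resize the existing primary data file (logical name "tempdev" by default)
    ALTER DATABASE tempdb
        MODIFY FILE (NAME = tempdev, SIZE = 1024MB, FILEGROWTH = 256MB);

    -- Add three more data files so the 4-core server has 4 equally sized files
    ALTER DATABASE tempdb
        ADD FILE (NAME = tempdev2, FILENAME = 'D:\SQLData\tempdb2.ndf', SIZE = 1024MB, FILEGROWTH = 256MB);
    ALTER DATABASE tempdb
        ADD FILE (NAME = tempdev3, FILENAME = 'D:\SQLData\tempdb3.ndf', SIZE = 1024MB, FILEGROWTH = 256MB);
    ALTER DATABASE tempdb
        ADD FILE (NAME = tempdev4, FILENAME = 'D:\SQLData\tempdb4.ndf', SIZE = 1024MB, FILEGROWTH = 256MB);

Keeping all the files the same size (and with the same growth increment) matters more than the exact numbers, since SQL Server's proportional fill spreads allocations evenly only across equally sized files.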

By having a larger number of files, you can increase the number of physical I/O operations that SQL Server can push to the disk at any one time. The more I/O SQL Server can push down to the disk level, the faster the database will run. With standard databases, SQL Server can cache a large amount of the data it needs in memory, but because of the high-write nature of tempdb, the data needs to be written to disk before it can be cached back into memory.

Though it sounds good in theory, is it really worthwhile as a general optimisation, or is it something that only applies to specific systems where I/O is very high?
