SQL Server default configurations that you should change

Posted in: Microsoft SQL Server, Technical Track

Who here takes the time to adjust SQL Server's default configuration values after installing it? Unless your company has a specific post-installation checklist, SQL Server configurations are usually left as-is until an issue comes along. If you are doing your job diligently, there are a few items you'd want to configure after installation, not just for security but also for performance. Whenever we take on a new client, we like to run a best-practice check to make sure SQL Server is configured correctly, and more often than not, changing these few configurations alone has a noticeable effect on performance.

So here's my top list of settings you should never leave at their defaults. As with any advice, make sure to test any changes in your lower environments first and verify the behavior before implementing them in production.
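Before changing anything, it helps to capture the current values so you have a baseline to roll back to. A minimal T-SQL sketch; all of the settings discussed below are visible through `sys.configurations`:

```sql
-- Capture the current values of the settings discussed in this post.
-- sys.configurations shows both the configured and the running value.
SELECT name,
       value        AS configured_value,
       value_in_use AS running_value
FROM sys.configurations
WHERE name IN (
    'backup compression default',
    'cost threshold for parallelism',
    'fill factor (%)',
    'max degree of parallelism',
    'max server memory (MB)'
)
ORDER BY name;
```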


Parameter: backup compression default

Default Value: 0

Why is it Bad?
Well, not really bad, since storage is very cheap now, but why use additional storage if you have the option not to? If the feature is there, you may as well use it. There is no real downside to having smaller backups. This parameter tells SQL Server to compress backups by default.

What Should It be?
Enable it: set it to 1. That way, backups are compressed even when whoever creates your backup job (or takes an ad hoc backup) forgets to specify compression. If you are still using SQL Server 2005 or below, forget I said anything and go cry in the corner.
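A sketch of enabling it with `sp_configure` (the database name and path in the comment are placeholders):

```sql
-- Enable backup compression by default; RECONFIGURE applies it immediately.
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;

-- Individual backups can still override the default either way, e.g.:
-- BACKUP DATABASE MyDb TO DISK = N'X:\Backups\MyDb.bak' WITH NO_COMPRESSION;
```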


Parameter: cost threshold for parallelism

Default Value: 5

Why is it Bad?
This is the query cost at which the SQL Server engine will start considering parallel plans for your query. Historically it corresponded to the estimated elapsed time in seconds for the query to run serially on a reference machine, though on modern hardware it is better treated as an abstract cost unit. Five is pretty low and only appropriate for a purely OLTP workload, which is rarely the case. It will likely cause even simple queries to run in parallel, and you are bound to have more complex queries in your application on top of that. Add the fact that you likely have many concurrent transactions: you'd rather have that simple query run serially than have it use all of your CPUs and make other transactions wait for CPU resources.

What should it be?
A good starting point is about 20 to 50; work your way up or down depending on what you see in your application. If you see a lot of CXPACKET waits on your system together with high CPU usage, review this parameter further together with MAXDOP.
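This is an advanced option, so it has to be exposed via `show advanced options` before it can be changed. A hedged sketch; the value 50 below is an example starting point, not a rule:

```sql
-- Cost threshold for parallelism is an advanced option; expose it first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Start around 20-50 and tune based on observed CXPACKET waits and CPU usage.
EXEC sp_configure 'cost threshold for parallelism', 50;
RECONFIGURE;
```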

Parameter: fill factor (%)

Default Value: 0

Why is it Bad?
This is the fill factor used when an index is created or rebuilt without specifying a value. It controls how full each leaf-level page is when the index is created or rebuilt. For example, a fill factor of 90 means the index fills each leaf page to 90% and leaves 10% free.

Now, 0 is the same as 100: it fills each page to capacity. This is bad because, unless you have a read-only system, your index is bound to grow as inserts, updates, and deletes hit the table. You should leave some free space for this, or you'll end up with excessive page splits and fragmentation.

What Should It be?
This is a bit tricky, and there is no magic number that applies to everyone. A good start for an OLTP workload is around 80-90. Test your workload and see what the most appropriate value is for you. If your data doesn't change, then go ahead and set it to 0/100. Note that this server-wide setting only applies to index create/rebuild statements that do not specify a fill factor; if the statement specifies one, that value wins.
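Both levels can be set explicitly. A sketch; the table and index names are made up for illustration:

```sql
-- Server-wide default, used only by index operations that don't
-- specify a fill factor (advanced option; takes effect after restart).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'fill factor', 90;
RECONFIGURE;

-- A per-index fill factor always wins over the server default.
-- dbo.Orders and IX_Orders_CustomerId are hypothetical names.
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders
REBUILD WITH (FILLFACTOR = 85);
```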


Parameter: max degree of parallelism

Default Value: 0

Why is it Bad?
MAXDOP is the maximum number of processors a single query can use. 0 means you are letting SQL Server decide, and in practice that means all available processors (up to 64). This is extra bad if you haven't raised the cost threshold for parallelism from its default of 5: you'll end up using all your CPUs for each and every query. If you have a lot of concurrent transactions, you'll end up with high CPU, CXPACKET waits, and users knocking on your desk asking why the database is so slow.

What Should It be?
This used to be a very lively topic of debate, and you'll find very good DBAs on both sides of the argument. But Microsoft has since published a recommendation, so check it here and set yours accordingly. It depends on the number of CPUs and NUMA nodes you have.
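A sketch following the commonly cited guideline of keeping MAXDOP at or below the number of logical processors per NUMA node; the value 8 below is just an example for a server with 8 cores per node:

```sql
-- Inspect the hardware layout first.
SELECT cpu_count FROM sys.dm_os_sys_info;
SELECT COUNT(*) AS numa_nodes
FROM sys.dm_os_nodes
WHERE node_state_desc = 'ONLINE';

-- MAXDOP is an advanced option.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 8;  -- example: 8 cores per NUMA node
RECONFIGURE;
```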


Parameter: max server memory (MB)

Default Value:  2147483647

Why is it Bad?
That default value effectively means all your memory. Yes, all of it, OS be damned. Always set a value for this; otherwise, other applications running on the server, and the OS itself, may be starved of memory.

What should it be?
You'll see a lot of different advice on this. The most sensible thing to do is to check the other services running on the server, especially if SSIS, SSRS, or SSAS is also running there. Check the memory requirements of these services, as well as of the OS, and set it accordingly: leave enough memory for the OS and other services, and let SQL Server use the rest. If you have multiple instances on one server, make sure the sum of the max server memory limits for all the instances fits within the actual available memory on the server. The same goes for multiple clustered instances: compute it as if all the instances were running on the same node.
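A sketch for a dedicated server with 64 GB of RAM; the 56/8 split is an example, not a formula, so adjust it for the services actually running on your server:

```sql
-- max server memory is an advanced option, expressed in MB.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Example: 64 GB server, reserve ~8 GB for the OS and other services.
EXEC sp_configure 'max server memory (MB)', 57344;  -- 56 GB for SQL Server
RECONFIGURE;
```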


Honorable Mentions

The following are not SQL Server configuration parameters, but items you should set up correctly to optimize SQL Server performance from the get-go.

Storage -> Allocation Unit Size
SQL Server does much of its I/O in 64KB extents, while the NTFS default allocation unit size is 4KB. To get the most out of your disks, format the data and log volumes with a 64KB allocation unit size. Make sure you do this before installing, since it requires formatting the disks.

Local Security Policy> Local Policy > User Rights Assignment > Lock Pages in Memory
Whether this helps depends on whether you are on a physical server or a VM, so check with your virtualization vendor whether they recommend it. Grant your SQL Server service account the permission to lock pages in memory; this keeps the OS from paging out SQL Server's memory. But make sure you set max server memory correctly first.

Local Security Policy> Local Policy > User Rights Assignment > Perform Volume Maintenance Tasks
This permission lets your SQL Server service account perform instant file initialization, which is really handy when creating large database files and growing them. It lets SQL Server claim the space immediately and use it, instead of the normal process where the OS must fill the file with zeros before SQL Server can use it. Note that this applies to data files only; log files are always zero-initialized.
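On SQL Server 2016 SP1 and later you can check whether instant file initialization is in effect from T-SQL; the column below exists in those versions, while on older builds you'd check the error log at startup instead:

```sql
-- instant_file_initialization_enabled is available from SQL Server 2016 SP1.
SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services
WHERE servicename LIKE 'SQL Server (%';
```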

Number of tempdb Files
If you are on SQL Server 2016, setup now lets you specify how many tempdb files to create, which is sweet, since this is among the first things a DBA changes after installation. As for how many files, there are two general schools of thought: one file per logical processor, or one file per two logical processors. But most people agree the sweet spot is eight; even if you have more than eight logical processors, the performance gain from additional files is no longer substantial. So I'd say up to eight, and only go beyond that if you still see tempdb contention.
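A sketch for checking what tempdb currently has and adding a data file; the file name, path, and sizes are placeholders, and you should keep all tempdb data files the same size so allocation stays even:

```sql
-- See how many data files tempdb currently has, and their sizes.
SELECT name, type_desc, size * 8 / 1024 AS size_mb
FROM tempdb.sys.database_files;

-- Add another equally sized data file (name and path are examples).
ALTER DATABASE tempdb
ADD FILE (
    NAME = tempdev2,
    FILENAME = N'T:\TempDB\tempdev2.ndf',
    SIZE = 1024MB,
    FILEGROWTH = 256MB
);
```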

Trace Flags
There are a lot of trace flags, and people will recommend different ones depending on how they work and monitor SQL Server. Some people love to dump deadlock reports into the error log; I'm not a big fan of it. So I won't list them all, just the top three that make the most sense. You can add these to the startup parameters for your SQL Server.

1117 – This trace flag forces SQL Server to grow all database files in a filegroup together when one needs to grow. Particularly useful for tempdb performance. Just make sure the autogrowth settings for your user database files are configured correctly, which you should be checking in the first place. (In SQL Server 2016 this behavior is the default for tempdb.)

1118 – This trace flag forces SQL Server to allocate all eight pages from the same extent when creating new objects, instead of using mixed extents, which reduces SGAM contention. (This is also the default behavior for tempdb in SQL Server 2016.)

3226 – This suppresses the "backup successful" messages in the error log. Believe me, when you have a thousand databases and are taking log backups every five minutes, you'll love this trace flag. It keeps your error log clean and easier to search. You can check backup status in the backup job and in msdb anyway.
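Trace flags enabled with DBCC TRACEON do not survive a restart, so for permanent use you want the -T startup parameters; DBCC TRACESTATUS confirms what is active. A sketch:

```sql
-- Enable globally for the running instance (lost on restart).
DBCC TRACEON (1117, 1118, 3226, -1);

-- Verify which flags are currently active globally.
DBCC TRACESTATUS (-1);

-- For persistence, add startup parameters via SQL Server Configuration Manager:
--   -T1117;-T1118;-T3226
```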

Let me reiterate: as with any advice, make sure to test any changes in your lower environments first and verify the behavior before implementing them in production. How about you, what's your best practice?


About the Author

Lead Microsoft Data Platform Consultant
Pio Balistoy is a Microsoft MVP for Data Platform from Singapore. He has been a Database professional for more than 17 years. He brings his passion for SQL to the community by being one of the Community leads for both Philippine Data Platform Forums (formerly Philippine SQL Server User Group) and Singapore SQL PASS.

5 Comments

Richard L Dawson
February 6, 2017 5:18 pm

I do take some exception to this particular blog. SQL Server installations come in all shapes and sizes. For some installations what you recommend is valid, and for others not so much. For example, your comments about cost threshold for parallelism and MAXDOP settings are not what I would do. If I have a query that is taking more than 5 seconds, it is already taking too long, so we leave that one alone. MAXDOP, on the other hand, we set to half the number of cores on the server.

It is important to keep in mind that ANY changes to defaults have to be TESTED before blindly applying them to your production environment. There are so many ways to use Sql Server and so many different ways to do the same thing with Sql Server that the correct answer is always Always ALWAYS, “It Depends.”


Definitely. The point is to make sure you use the settings that are applicable to your application. That's why I said, right from the start and at the end of the post, to be sure to test any changes before making them.
You left CTFP at 5 and adjusted MAXDOP, which gave you what your application needed. Those two go hand in hand and are something you should pay attention to when configuring SQL Server. That's why they are on the list.

Richard L Dawson
February 10, 2017 5:42 pm

I think we are (as my boss likes to say) “violently agreeing”. Yes, everything must be tested and tested multiple ways. The important thing before, during and after testing is to know WHY you may want to change a default. Or not!


The parallelism threshold of 5 does not mean a query will run for 5 seconds before this is exceeded. The story goes that in NT4 and SQL7 and 300MHz CPUs days (Google:’Nick’s Machine SQL History’) then a Cost Estimate of 5 did relate to about a 5-second query, but today even a 0.5 second query can have an estimate over 5. For years now I have started with a threshold of 20 and tuned from there, but with hyper-convergence coming in then the starting point for that kit should probably be higher.


Right, and that makes sense: with all the advancements in hardware since then, that "5 seconds" ought to be a lot faster now. That's the original definition, and TechNet still defines it the same way.

Interestingly, they already changed the wording for 2016: it is now defined as an abstract unit of cost, not a unit of time, likely to remove that confusion.

All the more reason to make sure you evaluate these two values (MAXDOP and CTFP) and configure appropriately for your environment.

