Monday, September 24, 2007

Performance Tuning SQL Server's Configuration Settings

Unless you know exactly what you are doing and have already performed impartial experiments that prove that making SQL Server configuration changes helps you in your particular environment, do not change any of the SQL Server configuration settings that affect server performance.

SQL Server, in most cases, knows how to configure itself dynamically for optimum performance, and does a good job of it. In some cases, changing SQL Server configuration settings can end up causing more problems than it fixes.

If you do decide that changing a specific setting is needed, be sure to test it out thoroughly on a test system before making the change on your production system. [7.0, 2000, 2005] Updated 3-6-2006

*****

The easiest way to view your SQL Server's configuration settings is to run the following command in Query Analyzer:

sp_configure

This will produce a table similar to this one:
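For reference, a few representative rows of that output look like this (the exact rows and values vary by SQL Server version and instance):

name                             minimum     maximum      config_value  run_value
cost threshold for parallelism   0           32767        5             5
max degree of parallelism        0           32           0             0
max server memory (MB)           4           2147483647   2147483647    2147483647
recovery interval (min)          0           32767        0             0
show advanced options            0           1            0             0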



The first column, "name," is the name of the SQL Server configuration setting. The second column, "minimum," is the smallest legal value for the setting. The third column, "maximum," is the largest legal value for the setting. The fourth column, "config_value," is the value the setting has been changed to, which may or may not be the value SQL Server is actually running with now; some settings don't go into effect until SQL Server has been restarted, or until the RECONFIGURE (or RECONFIGURE WITH OVERRIDE) command has been run, as appropriate. And the last column, "run_value," is the value of the setting currently in effect. If you have not changed any of these values since the last time you restarted SQL Server, then the values in the last two columns will always be the same.

Unfortunately, the default values for these settings are not listed when you run sp_configure. You will have to check Books Online to see what the default value for each setting is. [7.0, 2000] Added 8-27-2002

*****

Most, but not all, of the SQL Server configuration settings can be changed using Enterprise Manager or Management Studio. But one of the easiest ways to change any of these settings is to use the sp_configure command, like this:

sp_configure ['configuration name'], [configuration setting value]
GO
RECONFIGURE WITH OVERRIDE
GO

Where:

Configuration name = the name of the configuration setting (see the name in the table above). Note that the name must be enclosed in single quote marks (or double quote marks, depending on Query Analyzer's or Management Studio’s configuration).

Configuration setting value = the numeric value of the setting (with no quote marks).

Once sp_configure has run, you must perform one additional step. You must run either the RECONFIGURE (normal settings) or the RECONFIGURE WITH OVERRIDE (used for settings that can get you into trouble if you make a mistake) option, otherwise your setting change will not go into effect. Rather than trying to remember when to use each different version of the RECONFIGURE command, it is easier just to use RECONFIGURE WITH OVERRIDE all the time, as it works with all configuration settings. If you use Enterprise Manager or Management Studio to change a setting, it will execute RECONFIGURE WITH OVERRIDE automatically, so you do not have to.

Once you do this, most, but not all, settings go into effect immediately. For those that do not go into effect after RECONFIGURE, the SQL Server service has to be stopped and restarted. The table above tells you which of the performance-related configuration settings require that the service be restarted once they have been changed.

Before we are finished with this topic, there is one more thing you need to know. Some of the configuration settings are considered "advanced" settings. Before you can change these options using the sp_configure command, you must first change one of the SQL Server configuration settings to allow you to change them. The command to do this is:

sp_configure 'show advanced options', 1
GO
RECONFIGURE
GO

Only after you have run the above code may you now run sp_configure to change an advanced SQL Server configuration setting. The table above tells you which of the performance-related configuration settings are "advanced" settings. [7.0, 2000, 2005] Updated 3-6-2006
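For example, the full sequence for changing an advanced setting looks like the following sketch, which uses the "min memory per query" option (covered later in this article) purely as an illustration:

sp_configure 'show advanced options', 1
GO
RECONFIGURE
GO
-- now the advanced option can be changed
sp_configure 'min memory per query', 2048
GO
RECONFIGURE WITH OVERRIDE
GO
-- optionally hide the advanced options again
sp_configure 'show advanced options', 0
GO
RECONFIGURE
GO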




If you are running SQL Server 6.5, don't make the most common mistake made by DBAs: accepting the default Memory setting, which is either 8MB or 16MB, depending on how much RAM the physical server has. This number tells SQL Server how much RAM it can use out of the available RAM on NT Server. I have run across many 6.5 SQL Servers with the default memory settings, including one server that had 2GB of RAM.

This number needs to be bumped up manually, as SQL Server 6.5 does not have the ability to dynamically configure memory as later versions of SQL Server do. [6.5]




Although SQL Server is essentially self-tuning, it can take some time for SQL Server to figure out its optimum internal settings, perhaps several hours, depending on the type and level of activity the SQL Server is experiencing.

When the SQL Server service is first started, SQL Server begins with its default configuration settings. But as activity begins in the databases and as queries are run, SQL Server will fine-tune itself and performance will improve with time.

Given this, do not expect to get representative Performance Monitor results until SQL Server has had a chance to fine-tune itself. Allow some time after the mssqlserver service starts before you begin taking measurements. [7.0, 2000, 2005] Updated 3-6-2006

*****

If you are running SQL Server 6.5 or SQL Server 7.0, on servers with high-speed disk controllers, you may be able to increase I/O performance by changing the "Max Async IO" SQL Server configuration setting. The default setting is 32, and the maximum setting is 255. You will need to experiment with this setting before you get the ideal setting for your server.

A rule of thumb for coming up with an appropriate "Max Async IO" setting is to multiply the number of physical drives available for simultaneous I/O by either 2 or 3.
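For example, here is a sketch for a hypothetical server with 16 physical drives available for simultaneous I/O (16 x 3 = 48; the value is only an illustration and must be validated by your own testing):

sp_configure 'max async io', 48    -- advanced option; 'show advanced options' must be enabled first
GO
RECONFIGURE WITH OVERRIDE
GO
-- stop and restart the SQL Server service for the change to take effect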

This experimenting should not be done on a production server as you must restart SQL Server each time you make a change. If you set this figure too high, you may start getting a backlog of disk requests, which will increase the Performance Monitor counter Physical Disk Object: Avg. Disk Queue Length to over two per physical drive, which indicates an I/O bottleneck. This option is not available for SQL Server 2000. [6.5, 7.0]

*****

If you have a very active OLTP server application with many INSERTS, UPDATES, and DELETES, it is possible that the default "recovery interval" of 0 (which means that SQL Server determines the appropriate recovery interval) may not be appropriate. If you are watching the performance of your server with the Performance Monitor and notice that you have regular periods of 100% disk write activity (occurring during the checkpoint process), you may want to set the "recovery interval" to a higher number, such as 5 or 10. This figure refers to the maximum number of minutes it will take SQL Server to perform a recovery after it is restarted. The default figure of 0, in effect, works out to a maximum recovery period of about 1 minute.
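For example, a sketch of raising the recovery interval to 5 minutes (test this before using it in production):

sp_configure 'recovery interval', 5    -- advanced option; value is in minutes
GO
RECONFIGURE WITH OVERRIDE
GO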

Another potential reason to use this "recovery interval" option is if the server is devoted to OLAP or a data warehouse. In these instances, these mostly read-only databases generally don't benefit from a short recovery interval.

If your server does not match any of the above suggestions, then leaving the default value is generally the best choice.

By extending the checkpoint time, you reduce the number of times SQL Server performs a checkpoint, and in effect, reduce some of SQL Server's overhead. You may need to experiment with this figure in order to find the ideal compromise between performance and the time it takes for SQL Server to perform a recovery when it is restarted.

Ideally, you want to keep the recovery interval as small as possible in order to reduce the amount of time it takes to restart the mssqlserver service. Each time the mssqlserver service starts, it goes through an automatic recovery process, and the larger the "recovery interval," the longer the recovery process will take. You must decide what compromise between performance and recovery time best fits your needs. [6.5, 7.0, 2000, 2005] Updated 8-21-2006




The default network packet size for clients communicating with SQL Server is 4096 bytes. In most cases, this is the overall best setting. But in some special cases, this SQL Server configuration setting should be changed. If the client regularly transfers large amounts of text or image data to SQL Server, or performs large BCP or DTS operations, then increasing the default size may boost performance because the number of network packets is reduced. On the other hand, if your application only sends and receives small amounts of data, then a smaller packet size can boost responsiveness.
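For example, a sketch of raising the server-wide default for an instance that regularly handles large BCP transfers (8192 bytes is only an illustration):

sp_configure 'network packet size', 8192    -- advanced option; value is in bytes
GO
RECONFIGURE WITH OVERRIDE
GO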

Keep in mind that client software can overrule this setting in SQL Server. Only try changing this setting if you are very familiar with network traffic analysis and have the ability to test the performance, before and after the change. [6.5, 7.0, 2000, 2005] Updated 3-6-2006

*****

If your server has plenty of physical RAM, and you run many memory-intensive queries that make extensive use of hash and sort operations, and there are many queries running concurrently on the server, consider boosting the "min memory per query" SQL Server configuration option from its default of 1024K, to a higher value. This setting tells SQL Server to allocate automatically, at a minimum, the amount of memory set in this configuration setting for running each query. Increasing this setting can boost the performance of memory-intensive queries.

On the other hand, if you increase this value too high, queries could get slower. This is because SQL Server will not run the query until it can allocate the amount of memory specified in this setting, or until the value specified in the "query wait" SQL Server configuration option is exceeded.

You will want to test this setting with several different values until you come up with the ideal setting for your particular SQL Server environment. [7.0, 2000, 2005] Updated 3-6-2006

*****

The "max degree of parallelism" option allows you to specify if parallelism is turned on, turned off, or only turned on for some CPUs, but not for all CPUs in your server. Parallelism refers to the ability of the Query Optimizer to use more than a single CPU to execute a single query. By default, parallelism is turned on and can use as many CPUs as there are in the server. (Unless this has been reduced due to the affinity mask option). If your server has only one CPU, the "max degree of parallelism" value is ignored.

The default for this option is "0", which means that parallelism is turned on for all available CPUs. If you change this setting to "1", then parallelism is turned off for all CPUs. Also, this option allows you to specify how many CPUs can be used for parallelism. For example, if your server has 8 CPUs and you only want parallelism to run on 4 of them, you can specify a value of 4 for this option. Although this option is available, it is doubtful if using it would really provide any performance benefits.
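For example, a sketch of limiting parallelism to four CPUs on the hypothetical 8-CPU server just described:

sp_configure 'max degree of parallelism', 4    -- advanced option
GO
RECONFIGURE WITH OVERRIDE
GO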

If parallelism is turned on, as it is by default if you have multiple CPUs, then the query optimizer will evaluate each query for the possibility of using parallelism, which takes a little overhead. On many OLTP servers, the nature of the queries being run often doesn't lend itself to using parallelism for running queries. Examples of this include the INSERT, UPDATE, and DELETE statements. Because of this, the query optimizer is wasting its time evaluating each query to see if it can take advantage of parallelism. If you know that your queries will probably rarely or never need the advantage of parallelism, you can save a little overhead by turning this feature off, so queries aren't evaluated for this.

Of course, if the nature of the queries that are run on your SQL Server can take advantage of parallelism, you will not want to turn parallelism off. For example, if your OLTP server runs many correlated subqueries, or other complex queries, then you will probably want to leave parallelism on. You will want to test this setting to see if making this particular change will help, or hurt, your SQL Server's performance in your unique operating environment. [7.0, 2000, 2005] Updated 8-21-2006




Using parallelism to execute a SQL Server query has its costs. This is because it takes more overhead to run a query in parallel than to run it serially. But if the benefits of running a query using parallelism are higher than the costs, then using parallelism is a good thing. In most cases this is true, but not in all cases.

As a rule of thumb, if a query can run serially very fast, there is no point in even considering parallelism for the query, as the extra time required to create a query plan using parallelism could be longer than the time it takes to run the query serially.

By default, if the Query Optimizer determines that a query will take less than 5 seconds to execute, parallelism is not considered by SQL Server. This 5-second figure can be modified using the "cost threshold for parallelism" SQL Server option. You can change this value anywhere from 0 to 32767 seconds. So if you set this value to 10, this means that the Query Optimizer won't consider parallelism for any query that it thinks will take less than 10 seconds to run.
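For example, a sketch of raising the threshold from the default of 5 seconds to 10 seconds:

sp_configure 'cost threshold for parallelism', 10    -- advanced option; value is in estimated seconds
GO
RECONFIGURE WITH OVERRIDE
GO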

One possibility where a smaller value than 5 seconds might be useful is in cases where SQL Server is acting as a data warehouse and many very complex queries are being run. A lower value will allow the Query Optimizer to use parallelism more often, which can help in some situations.

One situation where increasing this value may help performance is when you have identified that many queries are running with parallelism and, in addition, CPU utilization is very high, creating a CPU bottleneck. In this case, you may want to consider increasing the default value from 5 seconds to a larger number, thus reducing the number of parallel queries being run and also reducing the overall usage of your server's CPUs, which may help the overall performance of your server.

You will want to test changes to this default value thoroughly before implementing it on your production servers. And in most cases, other than the exceptions listed above, you should not change this setting. [7.0, 2000, 2005] Updated 8-21-2006




By default, the SQL Server processes run at the same priority as any other applications on a server. In other words, no single application process has a higher priority than any other one when it comes to getting and receiving CPU cycles.

The "priority boost" configuration option allows you to change this. The default value for this option is "0", which means that the priority of SQL Server processes is the same as all other application processes. If you change this setting to "1", then SQL Server now has a higher priority than other application processes. In essence, this means that SQL Server has first priority to CPU cycles over other application processes running on the same server. But does this really boost performance of SQL Server?

Let's look at a couple of scenarios. First, let's assume your server runs not only SQL Server, but other apps (not recommended for best performance, but a real-world possibility), and that there is plenty of CPU power available. If this is the case, and if you give SQL Server a priority boost, what happens? Not much. If there is plenty of CPU power available, a priority boost doesn't mean much. Sure, SQL Server might gain a few milliseconds here and there as compared to the other applications, but I doubt you would be able to notice the difference.

Now let's look at a similar scenario as above, but let's assume that CPU power is virtually all exhausted. If this is the case, and SQL Server is given a priority boost, sure, SQL Server will now get its work done faster, but only at the cost of slowing down the other applications. If this is what you want, OK. But a better solution would be to boost CPU power on the server, or reduce the server's load.

But what if SQL Server is running on a dedicated server with no other applications and if there is plenty of excess CPU power available? In this case, boosting the priority will not gain a thing, as there is nothing competing (other than the operating system) for CPU cycles, and besides, there are plenty of extra cycles to go around.

And last of all, if SQL Server is on a dedicated server, and the CPU is maxed out, giving it a priority boost is a zero sum game as parts of the operating system could potentially be negatively affected if you do this. And the gain, if any, will be very little for SQL Server.

As you can see, this option is not worth the effort. In fact, Microsoft has documented several problems related to using this option, which makes this option even less desirable to try. [7.0, 2000, 2005] Updated 8-21-2006

*****

If your server is dedicated to SQL Server 7.0, and the server is running more than one CPU, then consider modifying the "time slice" SQL Server configuration setting. This setting determines the amount of time (in milliseconds) that a user process can run without being terminated by SQL Server. Normally, a user process determines for itself how much CPU time it should use, and schedules itself appropriately. But if the amount of time the process runs exceeds the "time slice" setting, then SQL Server assumes that the process is stuck and then terminates it.

The default "time slice" setting is 100 milliseconds. If you set the "time slice" too low, it can slow down the system because of the extra overhead caused by processes having to reschedule themselves after being terminated for taking too long. If the "time slice" is too high, it can cause "stuck" user processes to waste CPU time.

One factor that affects how this setting should be set is the speed of the CPUs in the server. If your CPUs run at 400MHz or less, then you should probably not change this setting. If your CPUs run from 400-600MHz, then consider boosting the "time slice" to 200 milliseconds. If your CPUs run over 600MHz, then consider boosting the "time slice" to 300 milliseconds. Before using this setting in production, you will want to test your server's performance before and after the change to see if SQL Server benefited from the change. [7.0] Updated 6-7-2004




Starting with Service Pack 2 for SQL Server 7.0, and including SQL Server 2000, there is a SQL Server startup switch that affects how SQL Server allocates memory when it starts up. This switch, -g, is used to tell SQL Server how much address space to reserve for the non-buffer pool component of SQL Server. The non-buffer pool is used by SQL Server for executable code or for large memory allocations that need space in chunks larger than 8KB, such as large stored procedures and query plans.

In SQL Server 7.0, the default address space allocated to the non-buffer pool is 128MB. For SQL Server 2000, the default amount is 256MB.

Microsoft only recommends using this switch if your server has large amounts of RAM (2GB or greater) and if you regularly get this error message in the SQL Server Error Log:

Warning: Clearing procedure cache to free contiguous memory.
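As a sketch, the switch is added to the instance's startup parameters (that is, to the sqlservr.exe command line), with the amount of address space to reserve specified in MB; the value of 256 shown here is purely an illustration:

sqlservr.exe -g256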

[7.0, 2000] Updated 8-17-2003




For best SQL Server performance, you want to dedicate your SQL Servers to only running SQL Server, not other applications. And in most cases, the settings for the "maximum server memory" and the "minimum server memory" should be left to their default values. This is because the default values allow SQL Server to dynamically allocate memory in the server for the best overall optimum performance. If you "hard code" a minimum or maximum memory setting, you risk hurting SQL Server's performance.

On the other hand, if SQL Server cannot be dedicated to its own physical server (other applications run on the same physical server along with SQL Server) you might want to consider changing either the minimum or maximum memory values, although this is generally not required, and I don't generally recommend it.

Let's take a closer look at each of these two settings.

The "maximum server memory" setting, when set to the default, tells SQL Server to manage the use of memory dynamically, and if it needs it, to use as much RAM as is available (while leaving some memory for the operating system).

If you want SQL Server to not use all of the available RAM in the server, you can manually set the maximum amount of memory SQL Server can use by specifying a number between 4 (the lowest number you can enter) and the maximum amount of RAM in your server (but don't allocate all the RAM to SQL Server, as the operating system needs some RAM too).
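For example, a sketch of capping SQL Server at 1536 MB on a hypothetical 2GB server that also hosts other applications (the figure is an illustration only):

sp_configure 'max server memory', 1536    -- advanced option; value is in MB
GO
RECONFIGURE WITH OVERRIDE
GO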

When "maximum server memory" is set to the default value, as mentioned before, memory use is adjusted dynamically. What this also means is that if you are running applications other than SQL Server on the physical server, SQL Server "may play nice" and give up some of its memory if the other applications need it. (Note: I say "may play nice" because SQL Server will not give up RAM if it needs it. But if it has allocated RAM that it hasn't needed for a long period of time, it will give up some of the RAM it has grabbed.)

So in most cases, there is no reason to change this setting from its default value. Only on rare occasions when SQL Server doesn't appear to "play nice," or when you want to artificially keep SQL Server from using all of the RAM available to it, would you want to change the default value. For example, if your "other" application(s) are more important than SQL Server's performance, then you can restrain SQL Server this way if you want.

There are also two potential performance issues you can create if you attempt to set the "maximum server memory" setting manually. First, if you allocate too much memory to SQL Server and not enough for the operating system, then the operating system may have no choice but to begin excessive paging, which will slow the performance of your server. Also, if you are using the Full-Text Search service, you must also leave plenty of memory for its use. Its memory is not dynamically allocated like the rest of SQL Server's memory, and there must be enough available memory for it to run properly.

The "min server memory" setting, when set to the default value of 0 (in MB), tells SQL Server to manage the use of memory dynamically. This means that SQL Server will start allocating memory as is needed, and the minimum amount of RAM used can vary as SQL Server's needs vary.

If you change the "min server memory" setting to a value other than the default value of 0, this does not mean that SQL Server will immediately begin using that amount of memory, as many people assume. It means that once SQL Server's memory use has grown to the specified minimum (because it is needed), its memory use will never drop back below that minimum.

For example, if you specify a minimum value of 100 (MB), then restart SQL Server, SQL Server will not immediately reserve 100 MB of RAM for its minimal use. Instead, SQL Server will only take as much as it needs. If it never needs 100 MB, then that much will never be reserved. But if SQL Server's memory use does exceed the 100 MB specified, then even if it later no longer needs that much, 100 MB becomes the floor below which SQL Server's memory allocation will not drop. Because of this behavior, there is little reason to change the "min server memory" setting to any value other than its default value.

If your SQL Server is dedicated, there is no reason to use the "min server memory" setting at all. If you are running other applications on the same server as SQL Server, there might be a very small benefit of changing this setting to a minimum figure, but it would be hard to determine what this value should be, and the overall performance benefits would most likely be negligible. [7.0, 2000, 2005] Updated 6-7-2004

*****

When SQL Server is run under Windows NT 4.0 or Windows 2000/2003, a SQL Server thread can move from one CPU to another. This feature allows SQL Server to run multiple threads at the same time, generally resulting in better load balancing among the CPUs in the server. The only downside to this process is that each time a thread moves from one CPU to another, the processor cache has to be reloaded, which can hurt performance in some cases.

In cases of heavily-loaded servers with more than 4 CPUs, performance can be boosted by specifying (to a limited degree) which processors should run SQL Server's threads. This reduces the number of times that the processor cache has to be reloaded, helping to eke out a little more performance from the server. For example, you can specify that SQL Server will only use some of the CPUs, not all of them available to it in a server.

The default value for the affinity mask setting, which is "0," tells SQL Server to allow the Windows scheduling algorithm to set a thread's affinity. In other words, the operating system, not SQL Server, determines which threads run on which CPU, and when to move a thread from one CPU to another CPU. In any server with 4 or fewer CPUs, the default value is the best overall setting. And for servers with more than 4 CPUs that are not overly busy, the default value is also the best overall setting for optimum performance.

But for servers with more than 4 CPUs, and heavily loaded because of one or more non-SQL Server applications running on the same server as SQL Server, then you might want to consider changing the default value for the affinity mask option to a more appropriate value. Please note that if SQL Server is the only application running on the server, then using the affinity mask to limit CPU use could hurt performance, not help it.

For example, let's say you have a server that is running SQL Server, COM+ objects, and IIS. Let's also assume that the server has 8 CPUs and is very busy. By reducing the number of CPUs that can run SQL Server from 8 to 4, SQL Server threads will now only run on 4 CPUs, not 8. This will reduce the number of times that a SQL Server thread can jump CPUs, reducing how often the processor cache has to be reloaded, helping to reduce CPU overhead and boost performance somewhat. The remaining 4 CPUs will be used by the operating system to run the non-SQL Server applications, helping them also to reduce thread movement and boost performance.

For example, if you have an 8-CPU system, the values you would use in the sp_configure command to select which CPUs SQL Server should run on are listed below:

Decimal Value     Allow SQL Server Threads on These Processors
1                 0
3                 0 and 1
7                 0, 1, and 2
15                0, 1, 2, and 3
31                0, 1, 2, 3, and 4
63                0, 1, 2, 3, 4, and 5
127               0, 1, 2, 3, 4, 5, and 6
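For example, a sketch of restricting SQL Server to the first four CPUs (decimal value 15 from the table above):

sp_configure 'affinity mask', 15    -- advanced option
GO
RECONFIGURE WITH OVERRIDE
GO
-- depending on the version, the SQL Server service may need to be restarted before the new mask takes effect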


Specifying the appropriate affinity mask is not an easy job, and you should consult the SQL Server Books Online for additional information before doing so. Also, you should test what happens to your SQL Server's performance before and after you make any changes to see if the value you have selected hurts or helps performance. Other than trial and error, there is no easy way to determine the optimum affinity mask value for your particular server. [7.0, 2000, 2005] Updated 6-7-2004




If you are using SQL Server 2000 Standard Edition under Windows NT 4.0 or Windows 2000/2003 (any version), or are running SQL Server 2000 Enterprise Edition under Windows NT 4.0 or Windows 2000 Server, or if your server has less than 4GB of RAM, the "awe enabled" option should always be left at the default value of 0, which means that AWE memory is not being used.

The AWE (Address Windowing Extensions) API allows applications (that are written to use the AWE API) running under Windows 2000 Advanced Server or Windows 2000 Datacenter Server to access more than 4GB of RAM. SQL Server 2000 Enterprise Edition (not SQL Server 2000 Standard Edition) is AWE-enabled and can take advantage of RAM in a server over 4GB. If the operating system is Windows 2000 Advanced Server, SQL Server 2000 Enterprise Edition can use up to 8GB of RAM. If the operating system is Windows 2000 Datacenter Server, SQL Server 2000 Enterprise Edition can use up to 64GB of RAM.

By default, if a physical server has more than 4GB of RAM, Windows 2000 (Advanced and Datacenter), along with SQL Server 2000 Enterprise Edition, cannot access any RAM greater than 4GB. In order for the operating system and SQL Server 2000 Enterprise Edition to take advantage of the additional RAM, two steps have to be completed.

Exactly how you configure AWE memory support depends on how much RAM your server has. Essentially, to configure Windows 2000 (Advanced or Datacenter), you must enter one of the following switches in the boot line of the boot.ini file, and reboot the server:

4GB RAM: /3GB (AWE support is not used)
8GB RAM: /3GB /PAE
16GB RAM: /3GB /PAE
16GB + RAM: /PAE
The /3GB switch tells the operating system to let an application such as SQL Server use 3GB out of the base 4GB of address space that Windows 2000 supports natively. If you don't specify this option, then SQL Server will only take advantage of 2GB of the first 4GB of RAM in the server, essentially wasting 1GB of RAM.

AWE memory technology is used only for the RAM that exceeds the base 4GB of RAM; that's why the /3GB switch is needed to use as much of the RAM in your server as possible. If your server has 16GB or less of RAM, then using the /3GB switch is important. But if your server has more than 16GB of RAM, then you must not use the /3GB switch. The reason for this is that the 1GB of additional RAM provided by adding the /3GB switch is needed by the operating system in order to take advantage of all of the extra AWE memory. In other words, the operating system needs 2GB of RAM itself to manage the AWE memory if your server has more than 16GB of RAM. If 16GB or less of RAM is in a server, then the operating system only needs 1GB of RAM, leaving the other 1GB of RAM for use by SQL Server.

Once this step is done, the next step is to set the "awe enabled" option to 1, and then restart the SQL Server service. Only at this point will SQL Server be able to use the additional RAM in the server.
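For example, here is a sketch of the SQL Server side of the configuration (the memory cap shown is an illustration only; see the caution below about "max server memory"):

sp_configure 'show advanced options', 1
GO
RECONFIGURE
GO
sp_configure 'awe enabled', 1
GO
RECONFIGURE
GO
sp_configure 'max server memory', 6144    -- in MB; keeps SQL Server from grabbing nearly all of the RAM
GO
RECONFIGURE WITH OVERRIDE
GO
-- restart the SQL Server service for the "awe enabled" change to take effect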

One caution about using the "awe enabled" setting is that after turning it on, SQL Server no longer dynamically manages memory. Instead, it takes all of the available RAM (except about 128MB, which is left for the operating system). If you want to prevent SQL Server from taking all of the RAM, you must set the "max server memory" option (described in more detail elsewhere in this article) to a figure that limits SQL Server to the amount of RAM you specify. [7.0, 2000] Updated 6-7-2004




If your SQL Server does not use cursors, or uses them very little, then this setting should never be changed from its default value of "-1".

A "cursor threshold" of "-1" tells SQL Server to execute all cursors synchronously, which is the ideal setting if the result sets of cursors executed on your server are not large. But if many, or all of the cursors running on your SQL Server produce very large result sets, then executing cursors synchronously is not the most efficient way to execute a cursor.

The "cursor threshold" setting has two other options (besides the default) for running large cursors. A setting of "0" tells SQL Server to run all cursors asynchronously, which is more efficient if most or all of the cursor's result sets are large.

What if some of the cursor result sets are small and some are large? What do you do then? In this case, you can decide what large and small mean, and then use that number as the cutoff point for SQL Server. For example, let's say that we consider any cursor result set of under 1000 rows as small, and any cursor result set of over 1000 rows as large. If this is the case, we can set the "cursor threshold" to 1000.

When the "cursor threshold" is set to 1000, what happens is that if the Query Optimizer predicts that the result set will be less than 1000, then the cursor will be run synchronously. And if the Query Optimizer predicts that the result set will be more than 1000, then the cursor will be run asynchronously.
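For example, a sketch using the 1000-row cutoff described above:

sp_configure 'cursor threshold', 1000    -- advanced option
GO
RECONFIGURE WITH OVERRIDE
GO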

In many ways, this option provides the best of both worlds. The only problem is determining the ideal "cursor threshold," and to do that, you will need to test. But as you might expect, the default value is often the best, and you should only change this option if you know for sure that your application uses very large cursors and you have tested the change and know for sure that it helps, not hurts, performance. [7.0, 2000] Updated 1-28-2005




The fill factor (%) SQL Server configuration option allows you to change the default fill factor for indexes when they are built. By default, the fill factor setting is set to "0". A setting of "0" is somewhat confusing, as what it means is that leaf index pages are filled 100% (not 0%), and that intermediate index pages (non-leaf pages) have some space left in them (they are not filled up 100%). Legal settings for the fill factor range from 0 through 100.

The default fill factor only comes into play when you build new indexes without specifying a specific fill factor. If you do specify a fill factor when you create a new index, that value is used, not the default fill factor.
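For example, rather than changing the server-wide default, you can specify a fill factor on an individual index; the table and index names below are hypothetical:

CREATE INDEX IX_Orders_CustomerID
ON dbo.Orders (CustomerID)
WITH FILLFACTOR = 90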

In most cases, it is best to leave the default fill factor alone, and if you want a value other than the default fill factor, then specify it when you create an index. [7.0, 2000, 2005] Updated 1-28-2005

*****

The index create memory setting determines how much memory can be used by SQL Server for index creating sorts. The default value of "0" tells SQL Server to automatically determine the ideal value. In almost all cases, SQL Server will configure the amount of memory optimally.

But in unusual cases, especially with very large tables, it is possible for SQL Server to make a mistake, causing large indexes to be created very slowly, or not at all. If you run into this situation, you may want to consider setting the Index Create Memory setting yourself, although you will have to trial and error the setting until you find the optimum one for your situation. Legal settings for this option run from 704 to 2147483647. This number refers to the amount of RAM, in KB, that SQL Server can devote to creating the index.
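For example, a sketch of temporarily raising the setting before rebuilding a very large index, and then returning it to the self-configuring default afterwards (the 16384 KB figure is an illustration only):

sp_configure 'index create memory', 16384    -- advanced option; value is in KB
GO
RECONFIGURE WITH OVERRIDE
GO
-- ...create or rebuild the large index here...
sp_configure 'index create memory', 0    -- 0 restores the self-configuring default
GO
RECONFIGURE WITH OVERRIDE
GO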

Keep in mind that if you do change the setting, this memory will then be allocated for index creation and will not be available for other uses. If your server has more than enough RAM, then this will be no problem. But if your server is short on RAM, changing this setting could negatively affect the performance of other aspects of SQL Server. You might consider making this change only when you are creating or rebuilding large indexes, and leaving the setting at the default at all other times. [7.0, 2000] Updated 1-28-2005

*****

SQL Server 7.0 and 2000, by default, run in what is called "thread mode." What this means is that SQL Server uses what are called UMS (User Mode Schedulers) threads to run user processes. SQL Server will create one UMS thread per processor, with each one taking turns running the many user processes found on a busy SQL Server. For optimum efficiency, the UMS attempts to balance the number of user processes run by each thread, which in effect tries to evenly balance all of the user processes over all the CPUs in the server.

SQL Server also has an optional mode it can run in, called fiber mode. In this case, SQL Server uses one thread per processor (like thread mode), but the difference is that multiple fibers are run within each thread. Fibers are used to assume the identity of the thread they are executing and are non-preemptive to other SQL Server threads running on the server. Think of a fiber as a "lightweight thread," which, under certain circumstances, takes less overhead than standard UMS threads to manage. Fiber mode is turned on and off using the "lightweight pooling" SQL Server configuration option. The default value is "0", which means that fiber mode is turned off.

So what does all this mean? Like everything, there are pros and cons to running in one mode over another. Generally speaking, fiber mode may be beneficial when all of the following circumstances exist:

Eight or more CPUs are found on the server (the more CPUs, the larger the benefit).
All of the CPUs are running near maximum (95-100%) most of the time.
There is a lot of context switching occurring on the server (as reported by the Performance Monitor counter System: Context Switches/sec); generally speaking, more than 20,000 context switches per second is considered high.
The server is making little or no use of distributed queries or extended stored procedures.
If all of the above are true, then turning on the "lightweight pooling" option in SQL Server may produce a small boost in performance.

But if the four circumstances are not all true, then turning on "lightweight pooling" could actually degrade performance. For example, if your server makes use of many distributed queries or extended stored procedures, then turning on "lightweight pooling" will definitely cause a problem, because they cannot make use of fibers, which means that SQL Server will have to switch back and forth between fiber mode and thread mode as needed, which hurts SQL Server's performance.

You will also want to carefully test (before and after) the effect of this setting on your server's performance. While in test scenarios some DBAs have noticed a boost in performance, in the real world the benefit is much harder to quantify and identify.

Generally speaking, this option should only be tried in cases where you have tried every other option to boost performance, and there are no more to try.

This option is available under the SQL Server's "Properties", "Processor" tab, in addition to using sp_configure. [7.0, 2000] Updated 8-17-2003




Each time SQL Server locks a record, the lock must be stored in memory. By default, the value for the "locks" option is "0", which means that lock memory is dynamically managed by SQL Server. Internally, SQL Server can reserve from 2% to 40% of available memory for locks. In addition, if SQL Server determines that allocating additional memory for locking could cause paging at the operating system level, it will not allocate the memory to locks, instead giving it up to the operating system.

In almost all cases, you should allow SQL Server to dynamically manage locks, leaving the default value as is. If you enter your own value for lock memory (legal values are from 5000 to 2147483647 KB), then SQL Server cannot dynamically manage this portion of memory, which could cause some other areas of SQL Server to experience poor performance.

If you get an error message that says you have exceeded the maximum number of locks available, you have these options:

Closely examine your queries to see if they are causing excessive locking. If they are, it is possible that performance is also being hurt because of a lack of concurrency in your application. It is better to fix bad queries than it is to allocate too much memory to tracking locks.
Reduce the number of applications running on this server.
Add more RAM to your server.
Boost the number of locks to a high value (based on trial and error). This is the least desirable option as giving memory to locks prevents it from being used by SQL Server for more beneficial purposes.
[7.0, 2000] Updated 1-28-2005




The "max text repl size" setting is used to specify the maximum size of text or image data that can be inserted into a replicated column in a single physical INSERT, UPDATE, WRITETEXT, or UPDATETEXT transaction. If you don't use replication, or if you don't replicate text or image data, then this setting should not be changed.

The default value is 65536, the minimum value is 0, and the maximum value is 2147483647 (in bytes). If you do heavy replication of text or image data, you might want to consider increasing this value only if the size of this data exceeds 64K. But as with most of these settings, you will have to experiment with various values to see what works best for your particular circumstances. [7.0, 2000] Updated 1-28-2005

*****

The "max worker threads" SQL Server configuration setting is used to determine how many worker threads are made available to the sqlservr.exe process from the operating system. The default value for this setting is 255 worker threads. SQL Server itself uses some threads, but they will be ignored for this discussion. The focus here is on threads created for the benefit of users.

If there are more than 255 user connections, then SQL Server will use thread pooling, where more than one user connection shares a single worker thread. Although thread pooling reduces the amount of system resources used by SQL Server, it can also increase contention between the user connections for access to SQL Server, hurting performance.

To find out how many worker threads your SQL Server is using, check the number of connections that are currently made to your server using Enterprise Manager. For each SQL Server connection, there is one worker thread created, up to the total number of worker threads that are specified in the "max worker threads" settings. For example, if there are 100 connections, then 100 worker threads would be employed. But if there are 500 connections, but only 255 worker threads are available, then only 255 worker threads are being used, with connections sharing the limited worker threads.

Assuming there is enough RAM in your server, for best performance, you will want to set the "max worker threads" setting to a value equal to the maximum number of user connections your server ever experiences, plus 5. But there are some limitations to this general recommendation, as we will soon see.

As has already been mentioned, the default value for the "max worker threads" is 255. If your server will never experience over 255 connections, then this setting should not be changed from its default value. This is because worker threads are only created when needed. If there are only 50 connections to the server, there will only be that many worker threads, not 255 (the default value).

If you generally have over 255 connections to your server, and if "max worker threads" is set to the default value of 255, what will happen is that SQL Server will begin thread pooling. This means that a single thread will be responsible for more than one connection. Now comes the dilemma. If you increase the "max worker threads" so that there is one thread for each connection, SQL Server will take up additional resources (mostly memory). If you have plenty of RAM in your server that is not being used by SQL Server or any other application, then boosting the "max worker threads" can help boost the performance of SQL Server.

But if you don't have any extra RAM available, then adding more worker threads can hurt SQL Server's performance. In this case, allowing SQL Server to use thread pooling offers better performance, because thread pooling uses fewer resources. But, on the downside, thread pooling can introduce problems of resource contention between connections. For example, two connections sharing a thread can conflict when both connections want to perform some task at the exact same time (which can't be done, because a single thread can only service a single connection at a time).

So what do you do? In brief, if your server normally has less than 255 connections, leave this setting at its default value. If your server has more than 255 connections, and if you have extra RAM, then consider bumping up the "max worker threads" setting to the number of connections plus 5. But if you don't have any extra RAM, then leave the setting to its default value. For SQL Server with thousands of connections, you will have to experiment to find that fine line between extra resources used by additional worker threads and contention between connections all fighting for the same worker thread.
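For example, a sketch for a server that regularly sees about 300 connections and has RAM to spare (300 + 5 = 305; an illustration only):

sp_configure 'max worker threads', 305    -- advanced option
GO
RECONFIGURE WITH OVERRIDE
GO
-- depending on the version, the SQL Server service may need to be restarted before the new value takes effect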

As you might expect, before using this setting in production, you will want to test your server's performance before and after the change to see if SQL Server benefited, or was hurt, from the change. [7.0, 2000] Updated 1-28-2005

*****

When a query runs, SQL Server does its best to allocate the optimum amount of memory for it to run efficiently and quickly. By default, the "minimum memory per query" setting allocates 1024 KB, as a minimum, for each query to run. The "minimum memory per query" setting can be set from 0 to 2147483647 KB.

If a query needs more memory to run efficiently, and if it is available, then SQL Server automatically assigns more memory to the query. Because of this, changing the value of the "minimum memory per query" default setting is generally not advised.

In some cases, if your SQL Server has more RAM than it needs to run efficiently, the performance of some queries can be boosted if you increase the "minimum memory per query" setting to a higher value, such as 2048 KB, or perhaps a little higher. As long as there is "excess" memory available in the server (essentially, RAM that is not being used by SQL Server), then boosting this setting can help overall SQL Server performance. But if there is no excess memory available, increasing the amount of memory for this setting is more likely to hurt overall performance, not help it. [7.0, 2000] Updated 2-25-2005




This configuration option affects performance, but not in the conventional way. By default, the "nested triggers" option is set to "1". This means that nested triggers (a trigger fired by the action of another trigger; triggers can nest up to a maximum of 32 levels) are allowed to run. If you change this setting to "0", then nested triggers are not permitted. Obviously, by not allowing nested triggers, overall performance can be improved, but at the cost of application flexibility.

This setting should be left to its default value, unless you want to prevent developers from using nested triggers. [7.0, 2000] Updated 2-25-2005

*****

"Open objects" refers to the total number of objects (such as tables, views, rules, defaults, triggers, and stored procedures) that can be open at the same time in SQL Server. The default setting for this option, which is "0", tells SQL Server to dynamically increase or decrease this number in order to obtain the best overall performance of the server.

In rare cases, generally when physical server memory is fully used, it is possible to get a message telling you that you have exceeded the number of open objects available. The best solution to this is to increase the server's memory, or to reduce the load on the server, such as reducing the number of databases maintained on the server.

If neither of the above options are practical, you can manually configure the maximum number of available open objects by setting the "open objects" value to an appropriately high enough setting. The problem with this is twofold. First, determining the proper value will take trial and error. Second, any memory allocated to open objects will be taken away from other SQL Server needs, hurting the server's overall performance. Sure, now your application will run when you change this setting, but it could run slower if there are current memory pressures. Avoid changing this setting. [7.0, 2000] Updated 2-25-2005

*****

The "query governor cost limit" option allows you to limit the maximum length of time a query can run on a server, and is one of the few SQL Server configuration options that I endorse. For example, let's say that some of the users of your server like to run very long-running queries that really hurt the performance of your server. By setting this option, you could prevent them from running any query that is expected to exceed, say, 300 seconds (or whatever number you pick). The default value for this setting is "0", which means that there is no limit to how long a query can run.

The value you set for this option is approximate, and is based on how long the Query Optimizer estimates the query will run. If the estimate is more than the time you have specified, the query won't run at all, producing an error instead. This can save a lot of valuable server resources.
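For example, a sketch of blocking any query that the Query Optimizer estimates will run longer than 300 seconds:

sp_configure 'query governor cost limit', 300    -- advanced option
GO
RECONFIGURE WITH OVERRIDE
GO
-- the limit can also be set for a single connection with SET QUERY_GOVERNOR_COST_LIMIT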

On the other hand, users can get real unhappy with you if they can't run the queries they have to run in order to do their job. What you might consider doing is helping those users to write more efficient queries. That way, everyone will be happy.

If this setting is set to "0", consider adding a value here and see what happens. Just don't make it too small. You might consider starting with a value of about 600 seconds and see what happens. If that is OK, then try 500 seconds, and so on, until you find out when users start complaining. [7.0, 2000] Updated 2-25-2005

*****

If SQL Server is very busy and is hurting for memory resources, it will queue what it considers memory-intensive queries (those that use sorting or hashing) until there is enough memory available to run them. In some cases, there just isn't enough memory to run them and they eventually time out, producing an error message. By default, a query will time out after a time period equal to 25 times the estimated amount of time the Query Optimizer thinks it will take for the query to run.

The best solution for such a problem is to add more memory to the server, or to reduce its load. But if that can't be done, one option, although fraught with problems of its own, is to use the "query wait" configuration option. The default setting for this option is "-1", which waits the time period described above. If you want the time out period to be greater so that queries won't time out, you can set the "query wait" time to a large enough number. As you might guess, you will have to determine this time out number yourself through trial and error.

The problem with using this option is that a transaction containing the intensive query may be holding locks, which can cause deadlocks or other locking contention problems, which in the end may be a bigger problem than the query timing out. Because of this, changing this option is not recommended. [7.0, 2000] Updated 2-25-2005

*****

SQL Server has the ability, if properly configured, to look for stored procedures to run automatically when the mssqlserver service starts. This can be handy if you want a particular action to occur on startup, such as the loading of a specific stored procedure into cache so that it is already there when users begin accessing the server.

By default, the "scan for startup procs" is set to "0", which means that a scan for stored procedures is not done at startup. If you don't have any startup stored procedures, then this is the obvious setting. There is no point spending resources looking for stored procedures that don't exist.

But if you do have one or more stored procedures you want to execute on server startup, then this option has to be set to "1", which turns on the startup scan. [7.0, 2000] Updated 4-4-2005
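For example, a sketch of turning on the scan and marking a stored procedure (the procedure name is hypothetical) to run at startup:

sp_configure 'scan for startup procs', 1    -- advanced option
GO
RECONFIGURE WITH OVERRIDE
GO
-- mark the procedure to run each time the service starts
EXEC sp_procoption 'usp_WarmCache', 'startup', 'true'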

*****

The "set working set size" option is used when you want to fix the minimum and maximum sizes of the amount of memory that is to be used by SQL Server when it starts. This option also prevents any OS page swapping.

By default, this setting is set to "0", which means that this option is not used. To turn on this option, it must be set to "1", plus, the minimum server memory size and the maximum memory sizes must be set to the same value. This is the value used to reserve the working set size.
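For example, a sketch for a dedicated server where you want SQL Server to reserve a fixed 2GB working set (the figures are illustrations only):

sp_configure 'min server memory', 2048    -- advanced option; value is in MB
GO
RECONFIGURE
GO
sp_configure 'max server memory', 2048    -- must match the minimum
GO
RECONFIGURE
GO
sp_configure 'set working set size', 1    -- advanced option
GO
RECONFIGURE WITH OVERRIDE
GO
-- restart the SQL Server service for the change to take effect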

As with most options, this one should not generally be necessary. The only time you might want to consider it is if the server is dedicated to SQL Server, has a very heavy load, and has sufficient memory available. Even then, any performance boost gained will be minimal, and you risk the potential of not leaving enough memory to the operating system. Testing is key to the successful use of this option. [7.0, 2000] Updated 4-4-2005
