Configuration

As with security configuration, you should make server and database configuration changes with care. Most of the configurable settings in SQL Server work best when left at their default values. You should measure any changes from the defaults for performance and stability impacts in a controlled testing environment before introducing those changes into production.
Like any change, configuration changes should be accompanied by a detailed change log. It’s often tempting to flick the switch on a configuration setting on the assumption that it may improve performance; resist that urge until you’ve measured the effect. We’ll begin with memory configuration, and then look at other configuration categories: CPU, SQL Server settings, and operating system configuration.

Memory configuration:-
SQL Server 2008 can address the amount of memory supported by the underlying operating system. However, like previous versions, 32-bit editions of SQL Server 2008 are constrained to 2GB of RAM unless configured with special settings. Let’s begin memory configuration with a look at 32-bit memory management.

32-bit memory management:-
All 32-bit systems can natively address a maximum of 4GB of memory (2^32 = 4,294,967,296 bytes). Until recent times, this limitation wasn’t an issue; a quick scan of older documentation reveals terms such as “very large” when referring to memory beyond 4GB. In today’s terms, systems with 8 or 16GB of RAM are considered normal, making correct memory configuration in 32-bit systems very important in order to derive the maximum performance benefit.
Apart from installing 64-bit versions of Windows and SQL Server, there are two ways of providing SQL Server with more than 2GB of memory: using the /3GB option, or using Address Windowing Extensions (AWE) with the /PAE option.

/3GB:-
Of the 4GB of RAM that a 32-bit system can natively address, 2GB is reserved by Windows, leaving applications such as SQL Server with a maximum of 2GB. In Windows Server 2003, we used the /3GB option in the boot.ini file to limit Windows to 1GB of memory, enabling SQL Server to access up to 3GB. In Windows Server 2008, we use the BCDEdit command with the increaseuserva option and an optional parameter that determines the size of the available user space, such as 3072 for 3GB.
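As a sketch, the Windows Server 2008 approach looks like this (run from an elevated command prompt; a reboot is required for the change to take effect):

```
bcdedit /set increaseuserva 3072
```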

For 32-bit systems with 4GB of RAM, these options are a good way of squeezing more memory out of Windows for use by SQL Server, but limiting Windows to 1GB of RAM isn’t always trouble free, particularly on systems with a large number of drivers and/or drivers that use a large amount of memory. Depending on the server configuration, these options may actually reduce performance and reliability, so use them with care. For 32-bit systems with more than 4GB of RAM, we can use the /PAE option.

Intel first introduced 36-bit Physical Address Extensions (PAE), allowing 32-bit systems to address physical memory beyond 4GB (up to 64GB). Applications such as SQL Server access this additional memory through the Address Windowing Extensions (AWE) API, enabled in SQL Server via the awe enabled configuration option.
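As a sketch, enabling AWE on a 32-bit instance looks like this (awe enabled is an advanced option, and an instance restart is required):

```sql
-- Enable AWE so a 32-bit instance can use memory above 4GB;
-- the service account also needs the Lock Pages in Memory right
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'awe enabled', 1;
RECONFIGURE;
```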


Despite the increased memory that can be accessed with PAE/AWE, there are some limitations when used by SQL Server in 32-bit environments:
** Memory above 4GB accessed using PAE/AWE can only be used by the SQL Server data cache. The procedure cache, used for query compilation plans, isn’t able to take advantage of this memory. We’ll cover the procedure cache in more detail in chapter 17, along with a number of related settings, such as forced parameterization.

** Analysis Services and Integration Services components aren’t able to utilize memory accessed using PAE/AWE.

** Unlike a flat 64-bit environment, there’s some overhead in mapping into the AWE memory space on 32-bit systems. On 32-bit AWE-enabled systems, the service account running the SQL Server service must be granted the Lock Pages in Memory right, assigned using the Windows Group Policy Editor. As a consequence, AWE memory isn’t paged out to disk by the operating system.

So if the /PAE option allows us to address memory above 4GB, and /3GB allows us to get an extra 1GB from Windows below 4GB, then to obtain the maximum amount of memory for SQL Server we should use both, right? Well, maybe, maybe not.

/3GB and /PAE:-

When using PAE, Windows uses memory below 4GB to map to memory above 4GB. The more memory above 4GB there is to map to, the more memory below 4GB is required for the mapping. The magic number is 16GB: on systems with more than 16GB of RAM, Windows needs its full 2GB address space to manage the mapping, so you can’t use the /3GB option. If you do, only 16GB will be addressable, and any additional memory beyond that is wasted. Further, the /3GB option is known to cause stability issues in some circumstances, so even with systems containing between 5GB and 16GB of RAM, use this setting with caution. One of the nice things about 64-bit systems is that all of the configuration issues we’ve just covered are no longer of concern.

64-bit memory management:-
Unlike 32-bit systems, 64-bit systems don’t require the memory configuration just described. The full complement of system RAM can be accessed by all SQL Server components without any additional configuration. The one optional memory configuration for 64-bit systems is setting the Lock Pages in Memory right, as covered earlier. While this setting is optional for a 64-bit system, locking pages in memory is beneficial in order to prevent Windows from paging out SQL Server’s memory. If you don’t enable this setting, certain actions such as large file copies can lead to memory pressure with Windows paging, or trimming, SQL Server’s memory. This sometimes leads to a sudden and dramatic reduction in SQL Server performance, usually accompanied by the “A significant part of sql server process memory has been paged out…” message. Setting the Lock Pages in Memory option prevents such incidents from occurring, and is therefore a recommended setting. Note that Windows Server 2008 handles memory trimming a lot better than 2003.
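As a sketch, the right is assigned through the Windows Group Policy Editor (the exact path may vary slightly between Windows versions):

```
Local Group Policy Editor (gpedit.msc)
  Computer Configuration
    > Windows Settings
      > Security Settings
        > Local Policies
          > User Rights Assignment
            > Lock pages in memory  ->  add the SQL Server service account
```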

Regardless of the processor platform (32- or 64-bit), one of the important memory configuration tasks is to set the minimum and maximum server memory values.

Setting minimum and maximum memory values:-
When SQL Server starts, it acquires enough memory to initialize, beyond which it acquires and releases memory as required. The minimum and maximum memory values control the upper limit to which SQL Server will acquire memory (maximum), and the point at which it will stop releasing memory back to the operating system (minimum).

The minimum and maximum memory values for an instance can be set in SQL Server Management Studio or by using the sp_configure command. By default, SQL Server’s minimum and maximum memory values are 0 and 2,147,483,647, respectively. The default min/max settings essentially allow SQL Server to cooperate with Windows and other applications by acquiring and releasing memory in conjunction with the memory requirements of those applications. On small systems with a single SQL Server instance, the default memory values will probably work fine in most cases, but on larger systems we need to give this a bit more thought. Let’s consider a number of cases where setting min/max values is required, beginning with systems that lock pages in memory.
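As a sketch, both values can be set with sp_configure (values are in MB; the figures below are illustrative, not recommendations):

```sql
-- Set minimum and maximum server memory for this instance (MB)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'min server memory', 4096;   -- 4GB floor
EXEC sp_configure 'max server memory', 28672;  -- 28GB cap
RECONFIGURE;
```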

As you know, when the Lock Pages in Memory setting is enabled, SQL Server’s memory will not be paged out to disk. This is clearly good from a SQL Server performance point of view, but consider a case where the maximum memory value isn’t set, and SQL Server is placed under significant load. The default memory settings don’t enforce an upper limit to the memory usage, so SQL Server will continue to consume as much memory as it can get access to, all of which will be locked and therefore will potentially starve other applications (including Windows) of memory. In some cases, this memory starvation effect can lead to system stability issues. In SQL Server 2005 and above, even with the Lock Pages in Memory option enabled, SQL Server will respond to memory pressure by releasing some of its memory back to the operating system. However, depending on the state of the server, it may not be able to release memory quickly enough, again leading to possible stability issues. For the reasons just outlined, systems that use the Lock Pages in Memory option should also set a maximum memory value, leaving enough memory for Windows.


A server containing multiple SQL Server instances needs special consideration when setting min/max memory values. Consider a server with three instances, each of which is configured with the default memory settings. If one of the instances starts up and begins receiving a heavy workload, it will potentially consume all of the available memory. When one of the other instances starts, it will find itself with very little physical memory, and performance will obviously suffer. In such cases, I recommend setting the maximum memory value of each instance to an appropriate level (based on the load profile of each).

On servers in which SQL Server shares resources with other applications, setting the minimum memory value helps to prevent situations in which SQL Server struggles to receive adequate memory. Of course, the ideal configuration is one in which the server is dedicated to SQL Server, but this is not always the case, unfortunately. A commonly misunderstood aspect of the minimum memory value is whether or not SQL Server reserves that amount of memory when the instance starts. It doesn’t.
When started, an instance consumes memory dynamically up to the level specified in the maximum memory setting. Depending on the load, the consumed memory may never reach the minimum value; once it rises above the minimum, however, SQL Server won’t release memory back below that level.
(Figure: the relationship between a server’s memory capacity and SQL Server’s minimum and maximum memory values.)


Configuring memory maximums in a multi-instance cluster is important in ensuring stability during failover situations. You must ensure that the total of the maximum memory values across all instances in the cluster is less than the total available memory on any one cluster node that the instances may end up running on during a node outage. Setting the maximum memory values in such a manner is important to ensure adequate and consistent performance during failover scenarios.
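For example (hypothetical figures), on a two-node cluster with 64GB per node hosting three instances, you might cap each instance so the totals still fit on a single node during failover:

```sql
-- Run against each of the three instances (values in MB):
-- 3 x 20480MB = 60GB, leaving roughly 4GB for Windows even if
-- all instances end up on one 64GB node after a failover
EXEC sp_configure 'max server memory', 20480;
RECONFIGURE;
```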

One of the important memory configuration considerations, particularly for 32-bit AWE systems and 64-bit systems that lock pages in memory, is the amount of memory to leave Windows. For example, in a dedicated SQL Server system with 32GB of memory, we’ll obviously want to give SQL Server as much memory as possible, but how much can be safely allocated? Put another way, what should the maximum memory value be set to? Let’s consider what other possible components require RAM:

*** Windows
*** Drivers for host bus adapter (HBA) cards, tape drives, and so forth
*** Antivirus software
*** Backup software
*** Microsoft Operations Manager (MOM) agents, or other monitoring software

In addition to the above non–SQL Server components, there are a number of SQL Server objects that use memory from outside the buffer pool (the memory area governed by the maximum memory setting). Memory for objects such as linked servers, extended stored procedures, and object linking and embedding (OLE) automation objects is allocated from an area commonly called the MemToLeave area.


As you can see, even on a dedicated server, there’s a potentially large number of components vying for memory, all of which is allocated from outside the buffer pool, so leaving enough memory for them is crucial for a smooth-running server. The basic rule of thumb when allocating maximum memory is that the total of each instance’s maximum memory values should be at least 2GB less than the total physical memory installed in the server; however, for some systems, leaving 2GB of memory may not be enough. For systems with 32GB of memory, a commonly used value for Max Server Memory (totaled across all installed instances) is 28GB, leaving 4GB for the operating system and other components that we discussed earlier.

Given the wide variety of possible system configurations and usage, it’s not possible to come up with a single best figure for the Max Server Memory value. Determining the best maximum value for an environment is another example of the importance of a load-testing environment configured identically to production, and an accurate test plan that loads the system as per the expected production load. Load testing in such an environment will set realistic expectations of likely production performance, and offers the opportunity to test various settings and observe the resulting performance. One of the great things about SQL Server, particularly the recent releases, is its self-tuning nature. Its default settings, together with its ability to sense and adjust, make the job of a DBA somewhat easier. Next, you’ll see how these attributes apply to CPU configuration.

CPU configuration:-
When an instance of SQL Server starts, it’s created as an operating system process. Unlike a simple application that performs a series of serial tasks on a single CPU, SQL Server is a complex application that must support hundreds or even thousands of simultaneous requests. In order to do this, the SQL Server process creates threads. A multithreaded operating system such as Windows allows applications like SQL Server to create multiple threads in order to maximize CPU efficiency and application throughput. Threads are assigned and balanced across the available CPUs in a server. If a thread is waiting on the completion of a task such as a disk request, SQL Server can schedule the execution of other threads in the meantime. The combination of a multithreaded architecture and support for multi-CPU servers allows applications such as SQL Server to support a large number of simultaneous requests. With this background in mind, let’s take a look at some of the configuration options that can be set to control the manner in which SQL Server threads are executed.

Boost SQL Server Priority option:-

Threads created in Windows are assigned a priority from 1 to 31, with thread priority 0 reserved for operating system use. Waiting threads are assigned to CPUs in priority order—that is, higher-priority threads are assigned for execution ahead of lower-priority threads.

By default, SQL Server threads are created with a “normal” priority level of 7. This priority ensures SQL Server threads are assigned and executed in a timely manner without causing any stability issues for anything else running on the server. The Boost SQL Server Priority option runs SQL Server threads at a priority level of 13, higher than most other applications. Although this sounds like a setting that should always be enabled, much like the “turbo” button on old PCs, it should only be used in rare circumstances.
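The option is exposed through sp_configure as priority boost; as a sketch, verifying that it’s left at the default (recommended) value looks like this:

```sql
-- 'priority boost' defaults to 0 (off), the recommended value;
-- an instance restart is required after changing it
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'priority boost', 0;
RECONFIGURE;
```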

Enabling this option has been known to cause problems, such as servers failing to shut down, and various other stability issues. In almost all cases, on well-configured dedicated servers, the performance difference is likely to be negligible at best. Unless you have a very accurate load-testing environment in which you can prove this option delivers a measurable performance boost without causing any other stability or performance issues, use the default setting. Good luck with your configuration and settings.
This post is already very long, so let’s save the rest for future posts in 2014.
Welcome to 2014 and goodbye to 2013, to all readers.

Microsoft Certified Solutions Associate (MCSA)