Performance Audit Checklist

 Counter Name                                 Average   Minimum   Maximum
 Memory: Pages/sec                            _______   _______   _______
 Memory: Available Bytes                      _______   _______   _______
 Physical Disk: % Disk Time                   _______   _______   _______
 Physical Disk: Avg. Disk Queue Length        _______   _______   _______
 Processor: % Processor Time                  _______   _______   _______
 System: Processor Queue Length               _______   _______   _______
 SQL Server Buffer: Buffer Cache Hit Ratio    _______   _______   _______
 SQL Server General: User Connections         _______   _______   _______

Enter your results in the table above.
Use Performance Monitor to Help Identify SQL Server Hardware Bottlenecks
So far in this series, we have essentially taken an inventory of how your SQL Server's hardware, operating system, SQL Server configuration, and database configuration are set up. These have been easy tasks, as all we essentially had to do was look at the data and record it.

In this article, we will begin our first “investigative” work. We will be using System Monitor (also called Performance Monitor) to help us identify potential hardware bottlenecks. By monitoring a few key counters over a 24-hour period, you should get a good feel for any major hardware bottlenecks your SQL Server is experiencing. By identifying hardware bottlenecks, we will be in a better position to identify potentially obvious performance problems.

Ideally, you should use System Monitor to create a log of key counters for a period of 24 hours. You will want to select a “typical” 24 hour period when it comes to deciding when to create your System Monitor log. For example, pick a typical business day, not a weekend or holiday.

Once you have captured 24 hours of System Monitor data in a log, display the recommended counters in the Graph mode of System Monitor, and then record the average, minimum, and maximum values in the table above. Once you have done this, then compare your results with the analysis below. By comparing your results with the recommendations below, you should be able to quickly identify any potential hardware bottlenecks your SQL Server is experiencing.
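If you would rather not tally the averages, minimums, and maximums by hand, a log exported to CSV (for example, with relog.exe) can be summarized with a short script. The Python sketch below is a minimal, hypothetical illustration: the column names are invented, and a real Performance Monitor CSV prefixes each counter with the machine name.

```python
import csv
import io
from statistics import mean

# Hypothetical excerpt of a System Monitor log exported to CSV;
# real headers look like "\\SERVER\Memory\Pages/sec".
LOG = """time,Memory Pages/sec,Processor % Processor Time
00:00,12.0,45.0
00:05,35.0,82.0
00:10,8.0,60.0
"""

def summarize(csv_text):
    """Return {counter: (average, minimum, maximum)} for each counter column."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    summary = {}
    for name in rows[0]:
        if name == "time":  # skip the timestamp column
            continue
        values = [float(row[name]) for row in rows]
        summary[name] = (mean(values), min(values), max(values))
    return summary

for name, (avg, low, high) in summarize(LOG).items():
    print(f"{name}: avg={avg:.1f} min={low:.1f} max={high:.1f}")
```

The three numbers this produces per counter are exactly what goes into the table above.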

How to Interpret Key System Monitor Counters
Below is a discussion of the key System Monitor counters, their recommended values, and some options for identifying and resolving hardware bottlenecks. Note that I have deliberately limited the number of System Monitor counters to watch, because our goal in this article is to find the easy and obvious performance problems. Many other System Monitor counters are discussed elsewhere on this website.

Memory: Pages/sec
This counter measures the number of pages per second that are paged out of RAM to disk, or paged into RAM from disk. The more paging that occurs, the more I/O overhead your server experiences, which in turn can decrease the performance of SQL Server. Your goal is to try to keep paging to a minimum, not to eliminate it.

Assuming that SQL Server is the only major application running on your server, then this figure should ideally average between zero and 20. You will most likely see spikes much greater than 20, which is normal. The key here is keeping the average pages per second less than 20.

If your server is averaging more than 20 pages per second, one of the more likely causes of this is a memory bottleneck due to a lack of needed RAM. Generally speaking, the more RAM a server has, the less paging it has to perform.
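The rule above is about the average over the monitoring period, not the spikes. A minimal Python sketch of that check (the sample values are hypothetical):

```python
from statistics import mean

PAGING_THRESHOLD = 20  # recommended ceiling for the 24-hour average

def paging_bottleneck(pages_per_sec_samples):
    """Flag a likely memory bottleneck: spikes above 20 are normal,
    but the average over the monitoring period should stay below 20."""
    return mean(pages_per_sec_samples) > PAGING_THRESHOLD

# A single spike with a low average is fine...
print(paging_bottleneck([5, 8, 40, 4, 6]))    # False (average 12.6)
# ...but sustained paging over 20 is not.
print(paging_bottleneck([25, 30, 28, 22]))    # True (average 26.25)
```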

In most cases, on a physical server dedicated to SQL Server with an adequate amount of RAM, paging will average less than 20. An adequate amount of RAM for SQL Server means a Buffer Cache Hit Ratio (described in more detail later) of 99% or higher. If your SQL Server has a Buffer Cache Hit Ratio of 99% or higher over a 24-hour period, but its average paging level exceeds 20 during that same period, this may indicate that you are running other applications besides SQL Server on the physical server. If this is the case, you should ideally remove those applications, allowing SQL Server to be the only major application on the physical server.

If your SQL Server is not running any other applications, and paging exceeds 20 on average for a 24-hour period, this may mean that you have changed the SQL Server memory settings. SQL Server should generally be configured so that it is set to the “Dynamically configure SQL Server memory” option, and the “Maximum Memory” setting should be set to the highest level. For optimum performance, SQL Server should be allowed to take as much RAM as it wants for its own use without having to compete for RAM with other applications. The above recommendation assumes that you are not using AWE memory. If you use AWE memory, then you will have to manually set the amount of memory used by SQL Server, as described elsewhere on this website.
Memory: Available Bytes
Another way to check to see if your SQL Server has enough physical RAM is to check the Memory Object: Available Bytes counter. This value should be greater than 5MB. If not, then your SQL Server needs more physical RAM. On a server dedicated to SQL Server, SQL Server attempts to maintain from 5-10MB of free physical memory. The remaining physical RAM is used by the operating system and SQL Server. When the amount of available bytes is near 5MB, or lower, most likely SQL Server is experiencing a performance hit due to lack of memory. When this happens, you either need to increase the amount of physical RAM in the server, reduce the load on the server, or change your SQL Server’s memory configuration settings appropriately.
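As a simple illustration of that threshold (the 5 MB floor comes from the text above; the function name is my own):

```python
LOW_MEMORY_FLOOR = 5 * 1024 * 1024  # SQL Server tries to keep ~5-10 MB free

def needs_more_ram(available_bytes):
    """True when Memory: Available Bytes is at or below roughly 5 MB,
    the point where SQL Server likely takes a memory-related performance hit."""
    return available_bytes <= LOW_MEMORY_FLOOR

print(needs_more_ram(4 * 1024 * 1024))   # True: under the 5 MB floor
print(needs_more_ram(64 * 1024 * 1024))  # False: plenty of headroom
```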

Physical Disk: % Disk Time
This counter measures how busy a physical array is (not a logical partition or individual disks in an array). It provides a good relative measure of how busy your arrays are.
As a rule of thumb, the % Disk Time counter should run less than 55%. If this counter exceeds 55% for continuous periods (over 10 minutes or so during your 24 hour monitoring period), then your SQL Server may be experiencing an I/O bottleneck. If you see this behavior only occasionally in your 24 hour monitoring period, I wouldn’t worry too much, but if it happens often (say, several times an hour), then I would start looking into finding ways to increase the I/O performance on the server, or to reduce the load on the server. Some ways to boost disk I/O include adding drives to an array (if you can), getting faster drives, adding cache memory to the controller card (if you can), using a different version of RAID, or getting a faster controller.
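The "continuous period" test can be sketched as a run-length check over your logged samples. This is a hypothetical Python illustration, assuming one sample per minute, so a 10-minute sustained period is 10 consecutive samples:

```python
def sustained_over(samples, threshold, min_run):
    """Return True if `samples` exceed `threshold` for at least
    `min_run` consecutive samples (a continuous busy period)."""
    run = 0
    for value in samples:
        run = run + 1 if value > threshold else 0
        if run >= min_run:
            return True
    return False

# 12 consecutive minutes of % Disk Time above 55% trips the check.
disk_time = [40] * 30 + [70] * 12 + [35] * 30
print(sustained_over(disk_time, threshold=55, min_run=10))  # True
```

The same helper works for the other "over N% for 10 minutes" rules in this article; only the threshold changes.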

If you are running your SQL Server databases on a SAN, the values you get for disk counters may not be accurate. Use the software provided by your SAN vendor to collect this data.
Physical Disk: Avg. Disk Queue Length
Besides watching the Physical Disk: % Disk Time counter, you will also want to watch the Avg. Disk Queue Length counter. If it exceeds 2 for continuous periods (over 10 minutes or so during your 24 hour monitoring period) for each disk drive in an array, then you may have an I/O bottleneck for that array. Like the Physical Disk: % Disk Time counter, if this happens occasionally in your 24 hour monitoring period, I wouldn't worry too much, but if it happens often, then I would start looking into finding ways to increase the I/O performance on the server, as described previously.

You will need to calculate this figure because System Monitor does not know how many physical drives are in your array. For example, if you have an array of 6 physical disks, and the Avg. Disk Queue Length is 10 for a particular array, then the actual Avg. Disk Queue Length for each drive is 1.66 (10/6=1.66), which is well within the recommended 2 per physical disk.

Use both the % Disk Time and the Avg. Disk Queue Length counters together to help you decide if your server is experiencing an I/O bottleneck. For example, if you see many time periods where the % Disk Time is over 55% and the Avg. Disk Queue Length counter is over 2 per physical disk, you can be confident the server is experiencing an I/O bottleneck.
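Putting the two disk rules together (the 6-drive array and queue length of 10 echo the example above; the function names are hypothetical):

```python
def per_drive_queue(avg_queue_length, drives_in_array):
    """System Monitor reports the queue for the whole array;
    divide by the number of physical drives to get the per-drive figure."""
    return avg_queue_length / drives_in_array

def io_bottleneck(pct_disk_time, avg_queue_length, drives_in_array):
    """Both indicators must exceed their thresholds together:
    % Disk Time over 55% AND more than 2 queued requests per drive."""
    return (pct_disk_time > 55
            and per_drive_queue(avg_queue_length, drives_in_array) > 2)

# The article's example: queue length of 10 across a 6-drive array.
print(round(per_drive_queue(10, 6), 2))  # 1.67 -- within the limit of 2
print(io_bottleneck(70, 10, 6))          # False: busy, but queue is fine
print(io_bottleneck(70, 15, 6))          # True: 2.5 queued per drive at 70%
```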

Processor: % Processor Time
The Processor: % Processor Time counter is available for each individual CPU (instance) and for all of the CPUs combined (_Total), and it measures the utilization of each CPU. This is the key counter to watch for CPU utilization. If the % Processor Time (_Total) counter exceeds 80% for continuous periods (over 10 minutes or so during your 24 hour monitoring period), then you may have a CPU bottleneck. If these busy periods occur only occasionally, and you think you can live with them, that's OK. But if they occur often, you may want to consider reducing the load on the server, getting faster CPUs, getting more CPUs, or getting CPUs that have a larger on-board L2 cache.
System: Processor Queue Length
Along with the Processor: % Processor Time counter, you will also want to monitor the Processor Queue Length counter. If it exceeds 2 per CPU for continuous periods (over 10 minutes or so during your 24 hour monitoring period), then you probably have a CPU bottleneck. For example, if you have 4 CPUs in your server, the Processor Queue Length should not exceed a total of 8 for the entire server.

If the Processor Queue Length regularly exceeds the recommended maximum, but CPU utilization is not correspondingly high (which is typical), then consider reducing the SQL Server "max worker threads" configuration setting. The Processor Queue Length may be high because an excess number of worker threads are waiting to take their turn. By reducing "max worker threads," you force SQL Server to rely on thread pooling, or to take greater advantage of it.

Use both the Processor Queue Length and the % Processor Time counters together to determine if you have a CPU bottleneck. If both indicators exceed their recommended amounts during the same continuous time periods, you can be assured there is a CPU bottleneck.
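The combined CPU test can be sketched the same way (hypothetical Python; the 80% and 2-per-CPU thresholds are the figures from the text):

```python
def cpu_bottleneck(pct_processor_time, processor_queue_length, cpu_count):
    """Both indicators exceeding their thresholds during the same
    period suggests a genuine CPU bottleneck."""
    busy = pct_processor_time > 80                    # % Processor Time (_Total)
    queued = processor_queue_length > 2 * cpu_count   # 2 per CPU, e.g. 8 for 4 CPUs
    return busy and queued

print(cpu_bottleneck(90, 12, 4))  # True: 90% busy AND queue of 12 > 8
print(cpu_bottleneck(90, 6, 4))   # False: busy, but the queue is within limits
```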

SQL Server Buffer: Buffer Cache Hit Ratio
The SQL Server Buffer: Buffer Cache Hit Ratio counter indicates how often SQL Server goes to the buffer, not the hard disk, to get data. In OLTP applications, this ratio should exceed 90%, and ideally be over 99%. If your buffer cache hit ratio is lower than 90%, you need to go out and buy more RAM today. If the ratio is between 90% and 99%, then you should seriously consider purchasing more RAM, as the closer you get to 99%, the faster your SQL Server will perform. One thing to keep in mind about this counter is that it is an average measured since the last time the SQL Server instance was restarted. Because of this, you may want to use the Buffer Cache Hit Ratio value found in the Performance Dashboard, which is described in a later article. That value is based on current activity, not average activity.

In OLAP applications, the ratio can be much less because of the nature of how OLAP works. In any case, more RAM should increase the performance of SQL Server.
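The OLTP thresholds above can be condensed into a small decision helper (a sketch; the wording of the returned recommendations is my own):

```python
def ram_recommendation(buffer_cache_hit_ratio):
    """Map the article's OLTP Buffer Cache Hit Ratio thresholds to an action."""
    if buffer_cache_hit_ratio < 90:
        return "add RAM now"
    if buffer_cache_hit_ratio < 99:
        return "seriously consider more RAM"
    return "RAM is adequate"

print(ram_recommendation(85.0))  # add RAM now
print(ram_recommendation(95.5))  # seriously consider more RAM
print(ram_recommendation(99.3))  # RAM is adequate
```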

SQL Server General: User Connections
Since the number of users using SQL Server affects its performance, you may want to keep an eye on the SQL Server General Statistics: User Connections counter. This shows the number of user connections, not the number of users, currently connected to SQL Server.

If this counter exceeds 255, then you may want to boost the SQL Server configuration setting, “Maximum Worker Threads” to a figure higher than the default setting of 255. If the number of connections exceeds the number of available worker threads, then SQL Server will begin to share worker threads, which can hurt performance. The setting for “Maximum Worker Threads” should be higher than the maximum number of user connections your server ever reaches.
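A quick sketch of that comparison (255 is the default discussed above; the function name is hypothetical):

```python
DEFAULT_MAX_WORKER_THREADS = 255  # the default setting discussed in the article

def threads_need_raising(peak_user_connections,
                         max_worker_threads=DEFAULT_MAX_WORKER_THREADS):
    """True when connections can outnumber worker threads, forcing
    SQL Server to share threads among connections, which can hurt performance."""
    return peak_user_connections > max_worker_threads

print(threads_need_raising(300))       # True: 300 connections > 255 threads
print(threads_need_raising(300, 400))  # False after raising the setting
```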

Where to Go From Here
While there are a lot more counters than the ones you find on this page, these cover the key counters that you need to monitor during your Performance Audit. Once you have completed your System Monitor analysis, use the recommendations presented here, and later in this article series, to make the necessary changes to get your SQL Server performing as it should.

Do Not Add Indexes to Columns That Are Less Than 95% Unique
If a column in a table is not at least 95% unique, then most likely the query optimizer will not use a non-clustered index based on that column.
Because of this, don't add non-clustered indexes to columns that aren't at least 95% unique. Be sure to test whether you receive any performance gains after adding an index to such a column.
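The 95% rule is just a selectivity calculation: distinct values divided by total rows. A hypothetical Python sketch (the counts are invented for illustration):

```python
def selectivity(distinct_values, total_rows):
    """Selectivity = number of distinct values / total rows in the table."""
    return distinct_values / total_rows

def worth_nonclustered_index(distinct_values, total_rows, floor=0.95):
    """Per the 95% rule of thumb, only highly selective columns are
    likely to get their non-clustered index used by the query optimizer."""
    return selectivity(distinct_values, total_rows) >= floor

print(worth_nonclustered_index(98_000, 100_000))  # True: 98% unique
print(worth_nonclustered_index(500, 100_000))     # False: only 0.5% unique
```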

Non-clustered indexes are best for queries:

- That return few rows (including just one row) and where the index has good selectivity (generally above 95%).

- That retrieve small ranges of data (not large ranges). Clustered indexes perform better for large range queries.

- Where both the WHERE clause and the ORDER BY clause are specified for the same column in a query. This way, the non-clustered index pulls double duty: it helps to speed up access to the records, and it also speeds up sorting them (because the returned data is already sorted).

- That use JOINs (although clustered indexes are generally better for this).

- When the column or columns to be indexed are very wide. While wide indexes are never a good thing, if you have no choice, a non-clustered index will have less overall overhead than a clustered index on a wide key.