SQL Server is a very large subject in itself. Rather than attempting to cover all aspects of SQL Server database access, this article focuses on those areas where you are likely to gain the biggest payoffs.
Introduction
In this article by Matt Perdeck, author of ASP.NET Site Performance Secrets, we will take a look at the following:
- Locking
- Execution plan reuse
- Fragmentation
- Memory
- Disk
- CPU
Locking
In this section, you'll see how to determine which queries are
involved in excessive locking delays, and how to prevent those delays
from happening.
Gathering detailed locking information
You can find out which queries are involved in excessive locking
delays by tracing the event "Blocked process report" in SQL Server
Profiler.
This event fires when the lock wait time for a query exceeds the
"blocked process threshold". To set this threshold to, for example, 30
seconds, run the following lines in a query window in SSMS (locking.sql in the downloaded code bundle):
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'blocked process threshold', 30
RECONFIGURE
Then, start the trace in Profiler:
- Start SQL Profiler. Click on Start | Programs | Microsoft SQL Server 2008 | Performance Tools | SQL Server Profiler.
- In SQL Profiler, click on File | New Trace.
- Click on the Events Selection tab.
- Select the Show all events checkbox to see all events. Also select Show all columns to see all the data columns.
- In the main window, expand Errors and Warnings and select the Blocked process report event. Make sure the checkbox in the TextData column is checked—scroll horizontally if needed to find it.
- If you need to investigate deadlocks, also expand Locks and select the Deadlock graph
event. To get additional information about deadlocks, have SQL Server
write information about each deadlock event to its error log, by
executing the following from an SSMS query window:
DBCC TRACEON(1222,-1)
- Uncheck all the other events, unless you are interested in them.
- Click on Run to start the trace.
- Save the template, so that you don't have to recreate it the next time. Click on File | Save As | Trace Template. Fill in a descriptive name and click on OK. Next time you create a new trace by clicking on File | New Trace, you can retrieve the template from the Use the template drop-down.
- Once you have captured a representative sample, click File | Save to save the trace to a trace file for later analysis. You can load a trace file by clicking on File | Open.

When you click a Blocked process report event in Profiler,
you'll find information about the event in the lower pane, including the
blocking query and the blocked query. You can get details about
Deadlock graph events the same way.
To check the SQL Server error log for deadlock events:
- In SSMS, expand the database server, expand Management, and expand SQL Server Logs. Then double-click on a log.

- In the Log File Viewer, click on Search near the top of the
window and search for "deadlock-list". In the lines that chronologically
come after the deadlock-list event, you'll find much more information
about the queries involved in the deadlock.
Reducing blocking
Now that you have identified the queries involved in locking delays, it's
time to reduce those delays. The most effective way to do this is to
reduce the length of time locks are held, as follows:
- Optimize queries. The less time your queries take, the less time they hold locks.
- Use stored procedures rather than ad hoc queries. This reduces time
spent compiling execution plans and time spent sending individual
queries over the network.
- Avoid cursors where you can, because cursor processing is much slower than set-based processing. If you really have to use cursors, commit updates frequently so that locks are released.
- Do not process lengthy operations while locks are held, such as
sending e-mails. Do not wait for user input while keeping a transaction
open. Instead, use optimistic locking (sketched after this list), as described in:
- Optimistic Locking in SQL Server using the ROWVERSION Data Type http://www.mssqltips.com/tip.asp?tip=1501
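As a rough sketch of optimistic locking, suppose you add a rowversion column to the Book table (the RowVer column below is hypothetical, not part of the sample database). You read the row and its version without keeping a transaction open, let the user edit the data, and only write the change back if the row has not been modified in the meantime:
-- Assumes a hypothetical rowversion column, for example:
-- ALTER TABLE dbo.Book ADD RowVer rowversion
DECLARE @OriginalRowVer binary(8)
-- Read the row and remember its version; no locks are kept while the user edits
SELECT @OriginalRowVer = RowVer FROM dbo.Book WHERE BookId=5
-- Later, write back only if nobody else changed the row in the meantime
UPDATE dbo.Book SET Author='Charles Dickens'
WHERE BookId=5 AND RowVer=@OriginalRowVer
IF @@ROWCOUNT = 0
PRINT 'The row was changed by another user; reload it and try again'
If the update affects zero rows, the row changed since it was read, and you can ask the user to redo their change based on fresh data.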
A second way to reduce lock wait times is to reduce the number of resources being locked:
- Do not put a clustered index on frequently updated columns. Updating such a column
requires locks on both the clustered index and all non-clustered
indexes, because their row locators contain the value you are updating.
- Consider including a column in a non-clustered index. This would
prevent a query from having to read the table record, so it won't block
another query that needs to update an unrelated column in the same
record.
- Consider row versioning. This SQL Server feature prevents queries
that read a table row from blocking queries that update the same row and
vice versa. Queries that need to update the same row still block each
other.
Row versioning works by storing rows in a temporary area (in tempdb)
before they are updated, so that reading queries can access the stored
version while the update is taking place. This does create an overhead
in maintaining the row versions, so test this solution before taking it
live. Also, if you explicitly set the isolation level of your transactions,
keep in mind that row versioning only works with the Read Committed
isolation level, which is the default.
To implement row versioning, set the READ_COMMITTED_SNAPSHOT option as shown in the following code (locking.sql
in the downloaded code bundle). When doing this, you can have only one
connection open—the one used to set the option. You can make that happen
by switching the database to single user mode; warn your users first.
Be careful when applying this to a production database, because your
website won't be able to connect to the database while you are carrying
out this operation.
ALTER DATABASE mydatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE mydatabase SET READ_COMMITTED_SNAPSHOT ON;
ALTER DATABASE mydatabase SET MULTI_USER;
To check whether row versioning is in use for a database, run:
select is_read_committed_snapshot_on
from sys.databases
where name='mydatabase'
Finally, you can set a lock timeout.
For example, to abort statements that have been waiting for over five
seconds (or 5000 milliseconds), issue the following command:
SET LOCK_TIMEOUT 5000
Use -1 to wait indefinitely. Use 0 to not wait at all.
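When a statement gives up waiting because of this setting, SQL Server raises error 1222 ("Lock request time out period exceeded"). As a minimal sketch, assuming the five-second timeout above, you could catch that error and decide whether to retry:
SET LOCK_TIMEOUT 5000
BEGIN TRY
UPDATE dbo.Book SET Author='Charles Dickens' WHERE Title='Oliver Twist'
END TRY
BEGIN CATCH
-- Error 1222 means the statement timed out waiting for a lock
IF ERROR_NUMBER() = 1222
PRINT 'Gave up waiting for a lock; consider retrying the statement'
END CATCH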
Reducing deadlocks
Deadlock is a situation where two transactions are waiting for each
other to release a lock. In a typical case, transaction 1 has a lock on
resource A and is trying to get a lock on resource B, while transaction 2
has a lock on resource B and is trying to get a lock on resource A.
Neither transaction can now move forward.
One way to reduce deadlocks is to reduce lock delays in general, as
shown in the last section. That reduces the time window in which
deadlocks can occur.
A second way is to always lock resources in the same order. If you get
transaction 2 to lock the resources in the same order as transaction 1
(first A, then B), then transaction 2 won't lock resource B before it
starts waiting for resource A. Hence, it doesn't block transaction 1.
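For example, if every transaction that touches both the Book and Author tables always updates dbo.Book first and dbo.Author second, the circular wait described above cannot occur; one transaction simply waits until the other commits. A minimal sketch (the column and key values are only illustrative):
BEGIN TRAN
-- Every transaction locks dbo.Book first...
UPDATE dbo.Book SET Title='Oliver Twist' WHERE BookId=5
-- ...and dbo.Author second, so no two transactions hold each other's resources
UPDATE dbo.Author SET AuthorName='Charles Dickens' WHERE AuthorId=1
COMMIT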
Finally, watch out for deadlocks caused by the use of HOLDLOCK, or the
Repeatable Read or Serializable isolation levels. Take for example
the following code:
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
BEGIN TRAN
SELECT Title FROM dbo.Book
UPDATE dbo.Book SET Author='Charles Dickens'
WHERE Title='Oliver Twist'
COMMIT
Imagine two transactions running this code at the same time. Both acquire a Shared lock on the rows in the Book table when they execute the SELECT.
They hold onto that lock because of the Repeatable Read isolation level.
Now, both try to acquire an Update lock on a row in the Book table to execute the UPDATE. Each transaction is now blocked by the Shared lock the other transaction is still holding.
To prevent this from happening, use the UPDLOCK hint on the SELECT statement. This causes the SELECT to acquire an Update lock, so that only one transaction can execute the SELECT. The transaction that did get the lock can then execute its UPDATE and free the locks, after which the other transaction comes through. The code is as follows:
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
BEGIN TRAN
SELECT Title FROM dbo.Book WITH(UPDLOCK)
UPDATE dbo.Book SET Author='Charles Dickens'
WHERE Title='Oliver Twist'
COMMIT
Execution plan reuse
You can boost execution plan reuse in your site by making it easier
for SQL Server to work out which bits of a query's execution plan can be
reused by a similar query.
Ad hoc queries
Take this simple ad hoc query:
SELECT b.Title, a.AuthorName
FROM dbo.Book b JOIN dbo.Author a ON b.LeadAuthorId=a.Authorid
WHERE BookId=5
When
SQL Server receives this query for the very first time, it will compile
an execution plan, store the plan in the plan cache, and execute the
plan.
If SQL Server then receives this query again, it will reuse the execution plan if it is still in the plan cache, provided that:
- All object references in the query are qualified with at least the schema name. Use dbo.Book instead of Book. Adding the database name would be even better.
- There is an exact match between the texts of the queries. This is
case-sensitive, and any difference in white space also prevents an exact
match.
As a result of the second rule, if you use the same query as above but with a different BookId, there will be no match:
SELECT b.Title, a.AuthorName
FROM dbo.Book b JOIN dbo.Author a ON b.LeadAuthorId=a.Authorid
WHERE BookId=9 -- Doesn't match query above, uses 9 instead of 5
Obviously, this is not a recipe for great execution plan reuse.
Simple parameterization
To make it easier for ad hoc queries to reuse a cached plan, SQL
Server supports simple parameterization. This automatically figures out
the variable bit of a query. Because this is difficult to get right and
easy to get wrong, SQL Server attempts this only with very simple
queries with one table, for example:
SELECT Title, Author FROM dbo.Book WHERE BookId=9
It can reuse the execution plan generated for:
SELECT Title, Author FROM dbo.Book WHERE BookId=5
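If you want to check whether a plan is actually being reused, you can look in the plan cache. A minimal sketch using the plan cache DMVs (the LIKE filter is just a convenient way to find the example queries):
SELECT cp.usecounts, cp.objtype, st.text
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
WHERE st.text LIKE '%dbo.Book%'
ORDER BY cp.usecounts DESC
A plan created by simple parameterization shows up with objtype Prepared, and its usecounts value increases each time the plan is reused.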
sp_executesql
Instead of getting SQL Server to guess which bits of a query can be
turned into parameters, you can use the system stored procedure sp_executesql to tell it yourself. Calling sp_executesql takes the following form:
sp_executesql @query, @parameter_definitions, @parameter1, @parameter2, ...
For example:
EXEC sp_executesql
N'SELECT b.Title, a.AuthorName
FROM dbo.Book b JOIN dbo.Author a ON b.LeadAuthorId=a.Authorid
WHERE BookId=@BookId',
N'@BookId int',
@BookId=5
Note that sp_executesql expects nvarchar values for its first two parameters, so you need to prefix the strings with N.
Stored procedures
Instead of sending individual queries to the database, you can
package them in a stored procedure that is permanently stored in the
database. That gives you the following advantages:
- Just as with sp_executesql, stored procedures allow you to define parameters explicitly to make it easier for SQL Server to reuse execution plans.
- Stored procedures can contain a series of queries and T-SQL control-of-flow statements such as IF ... ELSE.
This allows you to simply send the stored procedure name and parameters
to the database server, instead of sending individual queries, saving
networking overhead.
- Stored procedures make it easier to isolate database details from
your website code. When a table definition changes, you may only need to
update one or more stored procedures, without touching the website.
- You can implement better security, by only allowing access to the
database via stored procedures. That way, you can allow the users to
access the information they need through stored procedures, while
preventing them from taking unplanned actions.
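As a sketch of that last point, you could let the website's database user run a stored procedure without granting any direct table access. The WebAppUser name is just an example, and the procedure referenced is the GetBook procedure created in the next section:
-- Allow the web application to execute the stored procedure...
GRANT EXECUTE ON dbo.GetBook TO WebAppUser
-- ...while denying direct access to the underlying table
DENY SELECT, INSERT, UPDATE, DELETE ON dbo.Book TO WebAppUser
Provided the procedure and the table have the same owner (typically dbo), ownership chaining lets the procedure read dbo.Book even though WebAppUser cannot query the table directly.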
To create a stored procedure in SQL Server Management Studio, expand
your database, expand Programmability, and then expand Stored
Procedures. Right-click on Stored Procedures and choose New Stored
Procedure. A new query window opens where you can define your new stored
procedure.

A stored procedure to execute the query you saw in the previous section would look like this (storedprocedure.sql in the downloaded code bundle):
CREATE PROCEDURE GetBook
@BookId int
AS
BEGIN
SET NOCOUNT ON;
SELECT Title, Author FROM dbo.Book WHERE BookId=@BookId
END
GO
This creates a stored procedure with name GetBook, and a parameter list with one parameter @BookId
of type int. When SQL Server executes the stored procedure, occurrences
of that parameter in the body of the stored procedure get replaced by
the parameter value that you pass in.
Setting NOCOUNT to ON improves performance by preventing SQL Server from sending a message with the number of rows affected by the stored procedure.
To add the stored procedure to the database, press F5 to execute the CREATE PROCEDURE statement.
To verify that the stored procedure has been created, right-click on Stored Procedures and choose Refresh. Your new stored procedure should now appear in the list of stored procedures. To modify the stored procedure, right-click on Stored Procedures and choose Modify.
To execute the stored procedure in a query window, use the following:
EXEC dbo.GetBook @BookId=5
You could also simply use the following:
EXEC dbo.GetBook 5
Using a stored procedure from your C# code is similar to using an ad hoc query, as shown below (working sample in folder AccessStoredProcedure in the downloaded code bundle):
string connectionString = "...";
using (SqlConnection connection = new SqlConnection(connectionString))
{
    string sql = "dbo.GetBook";
    using (SqlCommand cmd = new SqlCommand(sql, connection))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.Add(new SqlParameter("@BookId", bookId));
        connection.Open();
        // Execute database command ...
    }
}
Make sure that the command text has the name of the stored procedure, instead of the text of a query. Set the CommandType property of the SqlCommand object to CommandType.StoredProcedure,
so that SQL Server knows you're calling a stored procedure. Finally,
add parameters to the command that match the parameters you used when
you created the stored procedure.
To find out more about stored procedures, visit the site at http://msdn.microsoft.com/en-us/library/ms190669.aspx.
Now that you've seen how to improve reuse of execution plans, let's
see how to prevent plan reuse, and why you would want to do that.
Preventing reuse
You may not always want to reuse an execution plan. When the
execution plan of a stored procedure is compiled, that plan is based on
the parameter values used at the time. When the plan is later reused with
different parameter values, the plan generated for the first set of values
is applied to the second set, which is not always desirable.
Take for example the following query:
SELECT SupplierName FROM dbo.Supplier WHERE City=@City
Assume that the Supplier table has an index on City, and that half the
records in Supplier have City "New York". The optimal execution plan for
"New York" will then be to use a table scan, rather than incurring the
overhead of going through the index. If however "San Diego" appears in only a
few records, the optimal plan for "San Diego" would be to use the index.
A good plan for one parameter value may be a bad plan for another
parameter value. If the cost of using a sub-optimal query plan is high
compared to the cost of recompiling the query, you would be better off
telling SQL Server to generate a new plan for each execution.
When creating a stored procedure, you can tell SQL Server not to cache its execution plan with the WITH RECOMPILE option:
CREATE PROCEDURE dbo.GetSupplierByCity
@City nvarchar(100)
WITH RECOMPILE
AS
BEGIN
...
END
Also, you can have a new plan generated for a specific execution:
EXEC dbo.GetSupplierByCity 'New York' WITH RECOMPILE
Finally, you can mark a stored procedure for recompilation, so that a new plan is generated the next time it is called, with the system stored procedure sp_recompile. You can also run sp_recompile against a table or view, in which case all stored procedures that use that object will be recompiled the next time they run:
EXEC sp_recompile 'dbo.Book'
Fragmentation
SQL Server provides two options to defragment tables and indexes,
rebuild and reorganize. In this section, we'll examine their advantages
and disadvantages.
Index rebuild
Rebuilding an index is the most effective way to defragment an index or table. To do a rebuild, use the following command:
ALTER INDEX myindex ON mytable REBUILD
This rebuilds the index physically using fresh pages to reduce fragmentation to a minimum.
If you rebuild a clustered index, that has the effect of rebuilding
the underlying table because the table is effectively part of the
clustered index.
To rebuild all indexes on a table, use the following command:
ALTER INDEX ALL ON mytable REBUILD
Index
rebuilding has the disadvantage that it blocks all queries trying to
access the table and its indexes. It can also be blocked by queries that
already have access. You can reduce this with the ONLINE option:
ALTER INDEX myindex ON mytable REBUILD WITH (ONLINE=ON)
However, this will cause the rebuild to take longer.
Another issue is that rebuilding is an atomic operation. If it is
stopped before completion, all defragmentation work done so far is lost.
Index reorganize
Unlike index rebuilding, index reorganizing doesn't block the table
and its indexes and if it is stopped before completion, the work done so
far isn't lost. However, this comes at the price of reduced
effectiveness. If an index is between 20 percent and 40 percent
fragmented, reorganizing the index should suffice.
To reorganize an index, use the command:
ALTER INDEX myindex ON mytable REORGANIZE
Use the LOB_COMPACTION option to consolidate columns with Large Object (LOB) data, such as image, text, ntext, varchar(max), nvarchar(max), varbinary(max) and xml:
ALTER INDEX myindex ON mytable REORGANIZE WITH (LOB_COMPACTION=ON)
Index
reorganizing is much more geared towards being performed in a busy
system than index rebuilding. It is non-atomic, and so if it fails, not
all defragmentation work is lost. It requests a small number of locks
for short periods while it executes, rather than blocking entire tables
and their indexes. If it finds that a page is being used, it simply
skips that page without trying again.
The disadvantage of index reorganization is that it is less
effective, because of the skipped pages, and because it won't create new
pages to arrive at a better physical organization of the table or
index.
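To decide between rebuilding and reorganizing, it helps to measure how fragmented an index actually is. A minimal sketch using sys.dm_db_index_physical_stats for the current database:
SELECT OBJECT_NAME(ps.object_id) AS table_name, i.name AS index_name,
ps.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ps
JOIN sys.indexes i ON i.object_id = ps.object_id AND i.index_id = ps.index_id
WHERE ps.avg_fragmentation_in_percent > 20
ORDER BY ps.avg_fragmentation_in_percent DESC
Following the guideline above, indexes in the 20 to 40 percent range are candidates for REORGANIZE, while more heavily fragmented indexes are usually worth a REBUILD.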
Heap table defragmentation
A heap table is a table without a clustered index. Because it doesn't have a clustered index, it cannot be defragmented with ALTER INDEX REBUILD or ALTER INDEX REORGANIZE.
Fragmentation in heap tables tends to be less of a problem, because
records in the table are not ordered. When inserting a record, SQL
Server checks whether there is space within the table, and if so,
inserts the record there. If you always insert records, and not update
or delete records, all records are written at the end of the table. If
you update or delete records, you may still wind up with gaps in the
heap table.
Since heap table defragmentation is not normally an issue, it is not discussed in this article. Here are a few options though:
- Create a clustered index and then drop it (see the sketch after this list)
- Insert data from the heap table into a new table
- Export the data, truncate the table, and import the data back into the table
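The first option might look like this; the table, index, and column names are purely illustrative:
-- Building a clustered index rewrites the heap in key order, removing the gaps
CREATE CLUSTERED INDEX IX_Defrag ON dbo.MyHeapTable (SomeColumn)
-- Dropping it turns the table back into a heap, but the pages stay compacted
DROP INDEX IX_Defrag ON dbo.MyHeapTable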
Memory
These are the most common ways to relieve memory stress:
- Add more physical memory.
- Increase the amount of memory allocated to SQL Server. To see how much is currently allocated, run:
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max server memory (MB)'
If
more physical memory is available on the server, increase the
allocation. For example, to increase the allocation to 3000 MB, run:
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max server memory (MB)', 3000
RECONFIGURE WITH OVERRIDE
Do not allocate all physical memory. Leave a few hundred MB free for the operating system and other software.
- Reduce the amount of data read from disk. Each page read from disk
needs to be stored and processed in memory. Table scans, aggregate
queries, and joins can read large amounts of data.
- Promote reuse of execution plans, to reduce memory needed for the plan cache.
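Before and after making changes like these, it is worth checking how much memory pressure SQL Server is actually under. One rough indicator is the Page life expectancy counter, which you can read from T-SQL; a minimal sketch:
-- Page life expectancy: roughly how long (in seconds) a data page stays in the buffer pool.
-- Values that stay low under load suggest SQL Server is short of memory for its workload.
SELECT object_name, counter_name, cntr_value AS seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy'
AND object_name LIKE '%Buffer Manager%'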
Disk usage
Here are the most common methods to reduce stress on the disk system:
- Optimizing query processing
- Moving the logfile to a dedicated physical disk
- Reducing fragmentation of the NTFS file system
- Moving the tempdb database to its own disk
- Splitting the data over two or more disks to spread the load
- Alternatively, moving heavily used database objects to another disk
- Using the optimal RAID configuration
Let's go through these options one-by-one.
Optimizing query processing
Make sure you have the correct indexes in place and optimize the most expensive queries.
Moving the logfile to a dedicated physical disk
Moving the read/write head of a disk is a relatively slow process.
The logfile is written sequentially, which by itself requires little
head movement. This doesn't help you though if the logfile and data file
are on the same disk, because then the head has to move between logfile
and data file.
However, if you put the logfile on its own disk, head movement on
that disk is minimized, leading to faster access to the log file. That
in turn leads to quicker modification operations, such as UPDATEs, INSERTs, and DELETEs.
To move the logfile to another disk for an existing database, first
detach the database. Move the logfile to the dedicated disk. Then
reattach the database, specifying the new location of the logfile.
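A sketch of that sequence, assuming a database called mydatabase and hypothetical file paths (adjust the names and paths to your own setup, and make sure no connections are open during the detach):
-- 1. Detach the database
EXEC sp_detach_db 'mydatabase'
-- 2. Move the logfile (for example mydatabase_log.ldf) to the dedicated disk using the operating system
-- 3. Re-attach the database, telling SQL Server where the files now live
CREATE DATABASE mydatabase
ON (FILENAME = 'C:\Data\mydatabase.mdf'),
(FILENAME = 'G:\Logs\mydatabase_log.ldf')
FOR ATTACH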
Reduce fragmentation of the NTFS filesystem
When the actual NTFS database files become fragmented, the disk head
has to hunt around the disk for the fragments when reading a file. To
reduce fragmentation, set a large initial file size (for your database
and logfiles) and a large increment size. Better still, set it large
enough so that neither file ever has to grow; the objective is to
prevent growing and shrinking the files.
If you do need to grow and shrink the database or log files, consider
using a 64KB NTFS cluster size to match SQL Server reading patterns.
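As a sketch, assuming the TuneUp database used later in this article, and assuming its logical file names are TuneUp and TuneUp_log (check yours with EXEC sp_helpfile), you could set generous sizes and growth increments like this:
-- Logical file names are assumptions; run EXEC sp_helpfile in the database to see yours
ALTER DATABASE TuneUp MODIFY FILE (NAME = TuneUp, SIZE = 20GB, FILEGROWTH = 1GB)
ALTER DATABASE TuneUp MODIFY FILE (NAME = TuneUp_log, SIZE = 5GB, FILEGROWTH = 1GB)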
Consider moving the tempdb database to its own disk
tempdb is used for sorting, sub-queries, temporary tables,
aggregation, cursors, and so on. It can be very busy. That means that it
may be a good idea to move the tempdb database to its own disk, or to a disk that is less busy.
To check the level of activity of the database and logfiles of tempdb and the other databases on the server, use the dm_io_virtual_file_stats DMV (diskusage.sql in the downloaded code bundle):
SELECT d.name, mf.physical_name, mf.type_desc, vfs.*
FROM sys.dm_io_virtual_file_stats(NULL,NULL) vfs
JOIN sys.databases d ON vfs.database_id = d.database_id
JOIN sys.master_files mf ON mf.database_id=vfs.database_id AND
mf.file_id=vfs.file_id
To move the tempdb data and logfiles to, for example, the G: disk, setting their sizes to 10 MB and 1 MB, run this code (diskusage.sql in the downloaded code bundle) and then restart the server:
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'G:\tempdb.mdf', SIZE = 10MB)
GO
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'G:\templog.ldf', SIZE = 1MB)
GO
To reduce fragmentation, prevent growing and shrinking of the tempdb data and logfiles by giving them as much space as they are likely to ever need.
Split the database data over two or more disks
By splitting the database's data file over two or more disks, you
spread the load. And because you wind up with more but smaller files,
this also makes backup and moving the database easier.
To make this happen, add a file to the PRIMARY filegroup of
the database. SQL Server then spreads the data over the existing file(s)
and the new file. Put the new file on a new disk or a disk that isn't
heavily used. If you can, make its initial size big enough so it doesn't
have to grow further, thereby reducing fragmentation.
For example, to add a file to database TuneUp on the G: disk with an initial size of 20 GB, run the following command (diskusage.sql in the downloaded code bundle):
ALTER DATABASE TuneUp
ADD FILE (NAME = TuneUp_2, FILENAME = N'G:\TuneUp_2.ndf', SIZE = 20GB)
Note that the file has extension .ndf, the recommended extension for secondary files.
Move heavily used database objects to another disk
You could move heavily used database objects, such as indexes, to a
new disk or to less busy disks. The sys.dm_db_index_usage_stats DMV, which
you can use to find unused indexes, also shows you the busiest indexes.
To move an index to another disk, first create a new user-defined file group. For example, this statement creates a file group FG2 (diskusage.sql in downloaded code bundle):
ALTER DATABASE TuneUp ADD FILEGROUP FG2
Then add a file to the file group:
ALTER DATABASE TuneUp
ADD FILE (NAME = TuneUp_Fg2, FILENAME = N'G:\TuneUp_Fg2.ndf', SIZE = 200MB)
TO FILEGROUP FG2
Finally move the object to the file group. For example, here is how to move a nonclustered index IX_Title on column Title in table Book to file group FG2:
CREATE NONCLUSTERED INDEX [IX_Title] ON [dbo].[Book]([Title] ASC)
WITH DROP_EXISTING ON FG2
You can assign multiple objects to a file group. And you can add multiple
files to a file group, allowing you to spread, for example, a very busy
table or index over multiple disks.
Have tables and their non-clustered indexes on separate disks, so one
task can read the index itself and another task can do key lookups in
the table.
Use the optimal RAID configuration
To improve performance and/or fault tolerance, many database servers
use Redundant Array of Inexpensive Disks (RAID) subsystems instead of
individual drives. RAID subsystems come in different configurations.
Choosing the right configuration for your data files, logfiles, and tempdb files can greatly affect performance.
The most commonly used RAID configurations are:
RAID Configuration | Description
RAID 0 | Each file is spread ("striped") over each disk in the array. When reading or writing a file, all disks are accessed in parallel, leading to high transfer rates.
RAID 5 | Each file is striped over all disks. Parity information for each disk is stored on the other disks, providing fault tolerance. File writes are slow: a single file write requires 1 data read + 1 parity read + 1 data write + 1 parity write = 4 accesses.
RAID 10 | Each file is striped over half the disks. Those disks are mirrored by the other half, providing excellent fault tolerance. A file write requires 1 data write to a main disk + 1 data write to a mirror disk.
RAID 1 | This is RAID 10 but with just 2 disks, a main disk and a mirror disk. That gives you fault tolerance but no striping.
This translates to the following performance characteristics compared
with an individual disk. N is the number of disks in the array.
Configuration | Read Speed | Write Speed | Fault Tolerant
Individual Disk | 1 | 1 | no
RAID 0 | N | N | no
RAID 5 | N | N/4 | yes
RAID 10 | N | N/2 | yes
RAID 1 | 2 | 1 | yes
So, if you have a RAID 10 with 4 disks (2 main + 2 mirror) and N = 4,
read performance will be four times better than an individual disk,
while write performance will be 4/2 = 2 times better. This is assuming
that the individual disk has the same speed as the disks in the RAID 10.
From this follows the optimal RAID configuration to use for your tempdb, data, and logfiles:
Files | Performance-related attributes | Recommended RAID configuration
tempdb | Requires good read and write performance for random access. Relatively small. Losing temporary data may be acceptable. | RAID 0, RAID 1, RAID 10
log | Requires very good write performance, and fault tolerance. Uses sequential access, so striping is no benefit. | RAID 1, RAID 10
data (writes make up less than 10 percent of accesses) | Requires fault tolerance. Random access means striping is beneficial. Large data volume. | RAID 5, RAID 10
data (writes make up over 10 percent of accesses) | Same as above, plus good write performance. | RAID 10
Having a battery-backed caching RAID controller greatly improves
write performance, because this allows SQL Server to hand over write
requests to the cache without having to wait for the physical disk
access to complete. The controller then executes the cached write
requests in the background.
CPU
Common ways to resolve processor bottlenecks include:
- Optimize CPU-intensive queries. DMVs such as sys.dm_exec_query_stats give you the CPU usage of each query (see the query after this list).
- Promote execution plan reuse, because building execution plans is highly CPU-intensive.
- Install more or faster processors, L2/L3 cache, or more efficient drivers.
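As a sketch of the first point, the following query lists the statements that have used the most CPU since their plans were cached, based on sys.dm_exec_query_stats:
SELECT TOP 20
qs.total_worker_time / 1000 AS total_cpu_ms, -- total_worker_time is in microseconds
qs.execution_count,
SUBSTRING(qt.text, (qs.statement_start_offset / 2) + 1,
((CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(qt.text)
ELSE qs.statement_end_offset END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt
ORDER BY qs.total_worker_time DESC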
Conclusion
In this article, we looked at improving the performance of your SQL
Server database. We learned how to gather detailed information about
locking-related bottlenecks, and how to reduce blocking and deadlocks.
We also discussed how to improve execution plan reuse dramatically, through the use of stored procedures and sp_executesql.
Also, we saw the two types of index defragmentation: index rebuilding
and index reorganizing. Finally, ways to make more optimal use of the
available disks were detailed, including moving the logfile to its own
separate disk. This was followed by significant ways to reduce memory
usage and CPU usage on the database server. The book is out now and is available from Packt. For more details, please visit www.packtpub.com/asp-net-site-performance-secrets/book