How do you improve the performance of a stored procedure? [Resolved]

Posted by Chamu under Sql Server on 11/27/2015 | Points: 10 | Views : 892 | Status : [Member] | Replies : 5
How do you improve the performance of a stored procedure?


Posted by: Sheonarayan on: 11/27/2015 [Administrator] HonoraryPlatinum | Points: 50


There is no sure-shot answer to this, as a stored procedure is not an entity on its own; it contains SQL statements that run when you call the stored procedure. So if you improve the performance of those SQL statements, the stored procedure's performance increases automatically.

Apart from that, below are a few points that also help in increasing stored procedure performance.

1. Use SET NOCOUNT ON to suppress the "rows affected" messages

2. Use the fully qualified name to call the stored procedure; it should be like servername.databasename.schemaname.storedprocedurename

3. If you need to return a value, return an integer (via RETURN) rather than a single-integer recordset

4. Do not prefix stored procedure names with "sp_", as SQL Server looks for such names in the master database first

5. Do not create one very big stored procedure; instead create many small stored procedures

6. Avoid using temporary tables inside stored procedures

7. Use SQL Server Profiler (SQL Server Management Studio > Tools > SQL Server Profiler) to optimize stored procedures.
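Several of the points above can be sketched together in one small procedure. The database, table, and procedure names below are illustrative, not from the original post:

```sql
-- Illustrative skeleton combining tips 1-4; dbo.Orders and usp_GetOrderCount are made-up names.
CREATE PROCEDURE dbo.usp_GetOrderCount   -- "usp_" prefix, not "sp_" (tip 4)
    @CustomerId INT
AS
BEGIN
    SET NOCOUNT ON;                      -- tip 1: stop the "rows affected" messages

    DECLARE @OrderCount INT;

    SELECT @OrderCount = COUNT(*)
    FROM dbo.Orders                      -- schema-qualified object name
    WHERE CustomerId = @CustomerId;

    RETURN @OrderCount;                  -- tip 3: return an integer, not a recordset
END
GO

-- Call it with a qualified name (tip 2):
-- EXEC MyDatabase.dbo.usp_GetOrderCount @CustomerId = 42;
```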

In case you are looking for how to create different types of stored procedures, read this

Hope this helps.


Sheo Narayan

Chamu, if this helps please login to Mark As Answer. | Alert Moderator

Posted by: Rajnilari2015 on: 11/27/2015 [Member] [Microsoft_MVP] [MVP] Platinum | Points: 50


@Chamu, as Mr. Sheonarayan said, "There is not sure shot answer of this as stored procedure is not an entity on its own, it contains SQL statement that runs when you call the stored procedure", which is completely true. However, you can follow the below steps to improve the performance of your query, in addition to what Mr. Sheonarayan has already mentioned:

a) Never name user-defined stored procedures with the "sp_" prefix. Such names are system-defined and mostly reside in the master db. So if we write a user-defined stored procedure named with "sp_", the query engine will first search for it inside the master db and, if not found, will then search the current session db. This causes an unnecessary round trip. Better to use some other naming convention, such as "usp_".

b) Avoid * in SELECT statements, since SQL Server has to expand the * to column names before query execution. Instead of querying all columns with *, name only the columns you require.

c) Use EXISTS instead of the IN clause, since EXISTS is generally faster than IN
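As a sketch of point (c), with hypothetical Customers and Orders tables:

```sql
-- Hypothetical tables, for illustration only.
-- IN compares against the full set of subquery values:
SELECT c.CustomerId, c.Name
FROM dbo.Customers AS c
WHERE c.CustomerId IN (SELECT o.CustomerId FROM dbo.Orders AS o);

-- EXISTS can stop at the first matching row per customer:
SELECT c.CustomerId, c.Name
FROM dbo.Customers AS c
WHERE EXISTS (SELECT 1 FROM dbo.Orders AS o
              WHERE o.CustomerId = c.CustomerId);
```

Note that on recent SQL Server versions the optimizer often produces the same plan for both forms; EXISTS mainly wins when the subquery can return many duplicates or NULLs.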

d) Avoid using scalar user-defined functions inside queries. Instead, use APPLY (OUTER/CROSS) clauses

e) Try to keep the clustered index key as small as possible, since the clustered index key is also stored in every nonclustered index. Data in the table is also stored in the order of the clustered index, so maintaining a huge clustered index key on a table with a large number of rows increases the size drastically. You may be interested in reading the article "Effective Clustered Indexes" ( )

f) Avoid cursors, since they are very slow. Instead, write set-based T-SQL queries: rather than a row-by-row approach, the set-based approach submits the whole batch to the query engine at once, thereby boosting performance.
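A minimal sketch of point (f), assuming a hypothetical dbo.Products table:

```sql
-- Row-by-row (avoid): a WHILE/cursor loop issuing one UPDATE per row.
-- Set-based (prefer): one statement updates every qualifying row in a single batch.
UPDATE p
SET    p.Price = p.Price * 1.10        -- 10% increase for one category
FROM   dbo.Products AS p
WHERE  p.CategoryId = 3;
```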

g) Try to use fewer temp tables; instead use common table expressions, though it depends on the case.

h) Never write a recursive stored procedure. Instead use a recursive common table expression.

i) Try to avoid writing correlated subqueries, since the inner query is evaluated for every outer row.

j) Use TRY-CATCH for handling errors in T-SQL statements. If you are using SQL Server 2012 onwards, you can use the THROW statement, which is even better.
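Point (j) sketched, using a made-up Accounts table:

```sql
-- TRY-CATCH around a transaction; THROW (SQL Server 2012+) re-raises the
-- original error with its severity and state. Table name is illustrative.
BEGIN TRY
    BEGIN TRANSACTION;
    UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountId = 1;
    UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE AccountId = 2;
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
    THROW;   -- on pre-2012 versions, use RAISERROR instead
END CATCH
```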

k) If possible, filter with a WHERE clause first and then perform the JOIN, not the reverse.

m) Instead of writing SELECT col1, col2 FROM TableName, write SELECT col1, col2 FROM TableName WITH (NOLOCK), as the latter avoids shared-lock waits (at the cost of possible dirty reads). However, please don't use it with DML statements (INSERT/UPDATE/DELETE/MERGE).

n) Use UNION ALL in place of UNION where duplicates are acceptable. UNION sorts the result set to remove duplicate values, but UNION ALL does not, hence it is faster.
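Point (n) as a short sketch (hypothetical tables):

```sql
-- UNION must sort/hash to remove duplicates; UNION ALL simply concatenates.
SELECT City FROM dbo.Customers
UNION ALL            -- use when duplicates are acceptable or impossible
SELECT City FROM dbo.Suppliers;
```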

o) Qualify objects with their schema name, as it helps SQL Server find the object in a specific schema without searching.

p) Choose appropriate FILLFACTOR values

q) If you need to perform a huge join repeatedly, avoid writing it inline each time; create a view and apply the projection on that.

r) Use a clustered index when designing your tables; that way you won't end up with heap tables.

s) Where Unicode storage is not needed, avoid NCHAR/NVARCHAR and use CHAR/VARCHAR, as the former unnecessarily consumes two bytes per character (being meant for Unicode) compared to one byte for the latter.

t) Avoid VARCHAR(MAX) where a sized VARCHAR will do, because a VARCHAR(MAX) column cannot be used as an index key column.

u) Keep transactions small, since a transaction locks the tables it processes and long transactions may sometimes cause deadlocks.

v) Avoid using loops (WHILE). The reason is mentioned in point (f).

w) Instead of creating several non-clustered indexes, try using a covering index, because it can satisfy all the columns requested by a query without a further lookup into the clustered index. If all the columns in the query's select list are available in the index, the query engine doesn't have to look up the table again, which can significantly increase performance. Since all the requested columns are available within the index, the index "covers" the query, enhancing performance by avoiding unnecessary lookups. (Read More: )
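A covering index for point (w) might look like this (index, table, and column names are hypothetical):

```sql
-- Covers: SELECT OrderDate, Amount FROM dbo.Orders WHERE CustomerId = @id;
-- The INCLUDE columns live only at the leaf level of the index, so the
-- query is answered entirely from the index with no clustered-index lookup.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_Covering
ON dbo.Orders (CustomerId)
INCLUDE (OrderDate, Amount);
```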

x) Avoid the DISTINCT clause for limiting rows. Better to use ranking functions like ROW_NUMBER().

y) For pagination, you can use OFFSET/FETCH ( ) instead of ROW_NUMBER. OFFSET/FETCH is the better choice, as mentioned in the article link provided.
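Point (y) sketched (SQL Server 2012+ syntax, hypothetical table):

```sql
-- Page 3 with 20 rows per page: skip 40 rows, return the next 20.
SELECT OrderId, OrderDate
FROM dbo.Orders
ORDER BY OrderId                 -- OFFSET/FETCH requires an ORDER BY
OFFSET 40 ROWS FETCH NEXT 20 ROWS ONLY;
```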

z) Use force-index query hints judiciously ( )

These are only small tips for writing better queries, not a sure-shot guide, but yes, writing queries/SPs this way will definitely improve query performance. I will make another post to highlight some more effective points.

Thanks & Regards,
RNA Team


Posted by: Rajnilari2015 on: 11/28/2015 [Member] [Microsoft_MVP] [MVP] Platinum | Points: 50


This is in continuation of my previous post. I am adding some more tips on writing effective queries and how to optimize them.

a) Instead of writing

Select Col1, Col2, Col3, COUNT(Col4), SUM(Col5)
From Table
Group By Col1, Col2, Col3

write

Select Col1, Col2, Col3,
COUNT(Col4) OVER (PARTITION BY <Your partition by Columns> ORDER BY (Select 1)),
SUM(Col5) OVER (PARTITION BY <Your partition by Columns> ORDER BY (Select 1))
From Table

Here we are treating COUNT/SUM as analytic (window) functions instead of aggregate functions.

The disadvantage of the first approach is that we need to group by all the non-aggregate columns, which is sometimes not desirable and can yield results we do not want. The second approach is better, as we can choose the partitioning columns ourselves on a case-by-case basis.

b)Avoid using UNPIVOT


select id, max(col) as maximum_value from
(
select id, col from @t
unpivot (col for cols in (col1, col2, col3)) as unpivott
) as p
group by id

and use the VALUES clause instead


select id,
(select max(n) from
(values (col1), (col2), (col3)
) as t(n)
) as maximum_value
from @t

Read more about the why at:

c) If possible, avoid using triggers for business logic.

d) Instead of creating very complicated logic that may not be optimizable at the T-SQL level, we can always take advantage of CLR functions. You may be interested to read my article ( ) on how we can create our own CLR functions.

A nice example is parsing JSON objects. You can look into this article ( ) to see how difficult it is in T-SQL. Instead, we can take the help of the Newtonsoft.Json library and create our own CLR function, which is also easy to debug.

e) For running dynamic queries, avoid EXEC and use sp_executesql instead, since it supports parameterization and plan reuse.
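A sketch of point (e); the query, table, and parameter are illustrative:

```sql
-- sp_executesql takes a parameter list, so the plan is cached once and
-- reused for any @CustomerId value, and injection via string
-- concatenation is avoided.
DECLARE @sql NVARCHAR(MAX) =
    N'SELECT OrderId, OrderDate FROM dbo.Orders WHERE CustomerId = @CustomerId;';

EXEC sys.sp_executesql
     @sql,
     N'@CustomerId INT',
     @CustomerId = 42;
```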

f) For slow-running queries, you may also check index fragmentation statistics. Over a period of time indexes may become fragmented. You can figure this out by running a query ( ). Once found, the fragmented indexes need to be rebuilt. Look into this article ( ) to get a nice idea about the same.

Having said all of the above, it's now time to look at query behaviour and find the expensive ones.

To start measuring your queries, run the following statements in SQL Server Management Studio:

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

Turning on STATISTICS IO prints information about how SQL Server interacts with storage when we run queries.

The most useful numbers you will get are:

Logical Reads: the number of 8 KB pages read from the buffer cache.
Physical Reads: the number of pages read from disk.
Read-Ahead Reads: the number of pages read from disk in advance, in anticipation of their use.


Turning on the STATISTICS TIME setting prints messages explaining how much CPU time was used when executing the query.

Read more at:

Also observe the query execution plan ( ), which will reveal the query's behaviour and also suggest hints upon which we can work.

At times we need to find out which queries are taking more time to execute. For that, we can take the help of DMVs.

We can find the most expensive queries by using the DMVs:
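One typical query of this kind (an illustration, not necessarily the one from the stripped link) ranks statements by total CPU time using sys.dm_exec_query_stats:

```sql
-- Top 10 statements by CPU since their plans were cached.
SELECT TOP (10)
       qs.total_worker_time / qs.execution_count AS avg_cpu_time,
       qs.execution_count,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
           ((CASE qs.statement_end_offset
                 WHEN -1 THEN DATALENGTH(st.text)
                 ELSE qs.statement_end_offset
             END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```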



More about DMVs:

Performance Tuning with SQL Server Dynamic Management Views :

Some tools worth mentioning at this juncture are

a) SQL Server Profiler ( ) is a rich interface for creating and managing traces and for analyzing and replaying trace results. Events are saved in a trace file that can later be analyzed, or used to replay a specific series of steps when diagnosing a problem.

Read More:

b) Database Engine Tuning Advisor: a tool that helps you figure out whether additional indexes, or partitioning, would be helpful.

Read More:

c) SQLQueryStress ( ) is a free tool useful for comparing a before-and-after scenario of stored procedures. With SQLQueryStress we can execute each stored procedure as many times as we like and get the total average stats for all executions.

Having said all that, nothing beats real experience, which will come over a period of time with passion, practice, an eye for detail, and patience.

Hope this will be helpful.

Thanks & Regards,
RNA Team


Posted by: Chamu on: 11/27/2015 [Member] Starter | Points: 25

Thank you


Posted by: Rajnilari2015 on: 11/29/2015 [Member] [Microsoft_MVP] [MVP] Platinum | Points: 25

Glad that the answers have helped you :)

Thanks & Regards,
RNA Team

