This question has already been answered in DNF earlier ( http://www.dotnetfunda.com/forums/show/20833/how-do-you-improve-the-performance-of-a-stored-procedure )
An excerpt from that post is listed below -
1) Never name user-defined stored procedures with the "sp_" prefix. System procedures use that prefix and mostly reside in the master database, so if we name a user-defined stored procedure with "sp_", the query engine first searches for it inside master and, only if it is not found there, searches the current session's database. This causes an unnecessary round trip. Better to use some other naming convention such as "usp_".
2) Avoid * in SELECT statements, since SQL Server must expand the * into column names before query execution. Also, instead of querying all columns, name only the columns you actually require.
3) Use EXISTS instead of the IN clause, since EXISTS can stop as soon as a match is found and is generally faster.
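For illustration, a minimal sketch using hypothetical Customers/Orders tables — the IN form on the left can be rewritten with EXISTS:

```sql
-- Hypothetical tables: Customers(CustomerId), Orders(CustomerId)
-- Instead of:
SELECT c.CustomerId
FROM Customers AS c
WHERE c.CustomerId IN (SELECT o.CustomerId FROM Orders AS o);

-- Prefer:
SELECT c.CustomerId
FROM Customers AS c
WHERE EXISTS (SELECT 1 FROM Orders AS o
              WHERE o.CustomerId = c.CustomerId);
```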
4) Avoid calling user-defined scalar functions per row. Instead use the APPLY (OUTER/CROSS) operator, ideally with inline table-valued logic.
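A sketch of the APPLY approach, again assuming hypothetical Customers/Orders tables — the per-row logic that might otherwise live in a scalar UDF is expressed inline:

```sql
-- Fetch each customer's latest order via CROSS APPLY
-- instead of calling a scalar UDF once per customer row
SELECT c.CustomerId, lo.OrderId, lo.OrderDate
FROM Customers AS c
CROSS APPLY (
    SELECT TOP (1) o.OrderId, o.OrderDate
    FROM Orders AS o
    WHERE o.CustomerId = c.CustomerId
    ORDER BY o.OrderDate DESC
) AS lo;   -- use OUTER APPLY to keep customers with no orders
```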
5) Try to keep the clustered index as small as possible, since the clustered index key is also stored in every nonclustered index on the table. Data in the table is stored in clustered index order, so maintaining a huge clustered key on a table with a large number of rows increases the size drastically. You may be interested in reading this article "Effective Clustered Indexes" ( https://www.simple-talk.com/sql/learn-sql-server/effective-clustered-indexes/ )
6) Avoid cursors since they are very slow. Instead, write set-based T-SQL queries: rather than a row-by-row approach, the set-based approach submits the whole batch to the query engine at once, which boosts performance.
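A minimal sketch of the difference, using a hypothetical Orders table — the cursor version (shown commented) touches one row per iteration, while the set-based version does the same work in one statement:

```sql
-- Row-by-row (avoid): a cursor looping over pending orders
-- DECLARE cur CURSOR FOR SELECT OrderId FROM Orders WHERE Status = 'Pending';
-- ...FETCH NEXT / UPDATE one row at a time...

-- Set-based (prefer): one statement submits the whole batch at once
UPDATE Orders
SET Status = 'Processed'
WHERE Status = 'Pending';
```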
7) Try to use fewer temp tables; use common table expressions instead, though it depends on the scenario.
8) Never write a recursive stored procedure. Instead use a recursive common table expression.
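A sketch of a recursive CTE, assuming a hypothetical Employees(EmployeeId, ManagerId) self-referencing table:

```sql
-- Walk a management hierarchy with a recursive CTE
-- instead of a procedure that calls itself
WITH OrgChart AS (
    SELECT EmployeeId, ManagerId, 0 AS Depth
    FROM Employees
    WHERE ManagerId IS NULL                              -- anchor: top of hierarchy
    UNION ALL
    SELECT e.EmployeeId, e.ManagerId, oc.Depth + 1
    FROM Employees AS e
    JOIN OrgChart AS oc ON e.ManagerId = oc.EmployeeId   -- recursive step
)
SELECT EmployeeId, Depth
FROM OrgChart
OPTION (MAXRECURSION 100);   -- guard against runaway recursion
```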
9) Try to avoid writing correlated subqueries, since the inner query gets evaluated for every outer row.
10) Use TRY-CATCH for handling errors in T-SQL statements. From SQL Server 2012 onwards, you can also use the THROW statement, which is even better.
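A minimal sketch of the TRY-CATCH plus THROW pattern, with a hypothetical Accounts update standing in for real work:

```sql
BEGIN TRY
    BEGIN TRANSACTION;
    -- hypothetical DML that might fail
    UPDATE Accounts SET Balance = Balance - 100 WHERE AccountId = 1;
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
    THROW;   -- SQL Server 2012+: re-raises the original error with its severity
END CATCH;
```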
11) If possible, filter with a WHERE clause first and then perform the JOIN, not the reverse.
12) Instead of writing Select col1, col2 from TableName, write Select col1, col2 from TableName WITH (NOLOCK), as the latter avoids waiting on shared locks (at the cost of dirty reads). However, please don't use it for DML statements (INSERT/UPDATE/DELETE/MERGE).
13) Use UNION ALL in place of UNION. UNION sorts the result set to remove duplicate rows but UNION ALL does not, hence it is faster.
14) Use schema-qualified object names (e.g. dbo.TableName), as this helps SQL Server find the object in a specific schema directly.
15) Choose appropriate FILL FACTOR values.
16) If you need to perform a huge join repeatedly, avoid rewriting it each time. Create views and apply the projection on those.
17) Design tables with a clustered index so that they are not left as heap tables.
18) Try to avoid NCHAR/NVARCHAR and use CHAR/VARCHAR where Unicode storage is not needed, since the former unnecessarily consumes 2 bytes per character compared to the latter, being meant for Unicode character storage.
19) Avoid using VARCHAR(MAX) where the column needs indexing, because a VARCHAR(MAX) column cannot be an index key column.
20) Try to keep transactions small, since long transactions lock the tables being processed and may sometimes also cause deadlocks.
21) Avoid using loops (WHILE). The reason is the same as for cursors in point 6.
22) Instead of creating several non-clustered indexes, try using a covering index, because it can satisfy all the columns requested by a query without a further lookup into the clustered index. If all the columns in the query's select list are available in the index, the query engine doesn't have to look up the table again, which can significantly improve performance: the index is said to "cover" the query. (Read more: https://www.simple-talk.com/sql/learn-sql-server/using-covering-indexes-to-improve-query-performance/ )
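A sketch of a covering index, assuming a hypothetical Orders table queried by CustomerId:

```sql
-- Hypothetical query to cover:
--   SELECT OrderDate, TotalAmount FROM Orders WHERE CustomerId = @id;
-- CustomerId is the key; the other requested columns are INCLUDEd at the
-- leaf level, so the query is answered from the index alone, no lookup.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_Covering
ON Orders (CustomerId)
INCLUDE (OrderDate, TotalAmount);
```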
23) Avoid the DISTINCT clause for limiting rows. Better to use ranking functions like ROW_NUMBER ( http://www.dotnetfunda.com/articles/show/1925/10-reasons-for-which-we-should-learn-and-use-rownumber-function-of-sql ).
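A sketch of the ROW_NUMBER approach for de-duplication, using the same hypothetical Orders table — one row is kept per customer rather than relying on DISTINCT:

```sql
-- Keep the latest order per customer via ROW_NUMBER instead of DISTINCT
WITH Ranked AS (
    SELECT CustomerId, OrderId,
           ROW_NUMBER() OVER (PARTITION BY CustomerId
                              ORDER BY OrderId DESC) AS rn
    FROM Orders
)
SELECT CustomerId, OrderId
FROM Ranked
WHERE rn = 1;   -- exactly one row per CustomerId
```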
24) For pagination, you can use OFFSET/FETCH ( http://beyondrelational.com/modules/2/blogs/80/posts/10632/offset-and-fetch-firstnext-clause-extension-to-order-by-clause-in-sql-11code-name-denali-sql-server-.aspx ) instead of ROW_NUMBER. OFFSET/FETCH is the better choice, as explained in the linked article.
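A minimal OFFSET/FETCH sketch (SQL Server 2012+), again against a hypothetical Orders table:

```sql
-- Page 3 of an Orders listing at 20 rows per page:
-- skip (page - 1) * pagesize = 40 rows, then take 20
SELECT OrderId, OrderDate
FROM Orders
ORDER BY OrderDate DESC       -- ORDER BY is required for OFFSET/FETCH
OFFSET 40 ROWS
FETCH NEXT 20 ROWS ONLY;
```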
25) Use Force Index query hints judiciously ( http://blog.sqlauthority.com/2009/02/07/sql-server-introduction-to-force-index-query-hints-index-hint/ )
26) Instead of writing

Select Col1, Col2, Col3, COUNT(Col4), SUM(Col5)
From SomeTable
Group By Col1, Col2, Col3

write

Select Col1, Col2, Col3,
COUNT(Col4) OVER (PARTITION BY <Your partition by Columns>),
SUM(Col5) OVER (PARTITION BY <Your partition by Columns>)
From SomeTable

Here we are treating Count/Sum as analytic (window) functions instead of aggregate functions. The disadvantage of the first approach is that we must group by all the other non-aggregate columns, which is sometimes not desirable and yields wrong or undesired results. The second approach is better, as we can choose the partitioning columns ourselves on a case-by-case basis.
27) Avoid using UNPIVOT

select id, max(col) as maximum_value
from (
    select id, col from @t
    unpivot (col for cols in (col1, col2, col3)) as unpivott
) as p
group by id

and use the VALUES clause instead

select id,
       (select max(n) from (values (col1), (col2), (col3)) as t(n)) as maximum_value
from @t

Read more about the why at : http://bradsruminations.blogspot.in/2010/02/spotlight-on-unpivot-part-1.html
28) If possible, avoid using triggers for writing business logic.
29) Instead of writing very complicated logic that may not be optimizable at the T-SQL level, we can always take advantage of CLR functions. You may be interested in reading my article ( http://www.codeproject.com/Articles/243475/An-Introduction-to-Sql-Code-Name-Denali-Part-VI#3.4.2 ) on how we can create our own CLR functions. A nice example is parsing JSON objects. You can look into this article ( https://www.simple-talk.com/sql/t-sql-programming/consuming-json-strings-in-sql-server/ ) to see how difficult that is in pure T-SQL. Instead, we can take the help of the Newtonsoft.Json library and create our own CLR functions, which are also easier to debug.
30) For running dynamic queries, avoid EXEC and adopt sp_executesql, since the latter supports parameterization and promotes plan reuse.
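A sketch of parameterized dynamic SQL with sp_executesql, using a hypothetical Orders query and parameter value:

```sql
-- Parameterized dynamic SQL: the plan can be reused across values,
-- unlike EXEC('...' + @id) string concatenation
DECLARE @sql nvarchar(200) =
    N'SELECT OrderId FROM Orders WHERE CustomerId = @CustomerId';
EXEC sp_executesql @sql,
     N'@CustomerId int',
     @CustomerId = 42;   -- hypothetical parameter value
```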
31) For slow-running queries, you may check the fragmentation status of their indexes.
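Fragmentation can be checked with the sys.dm_db_index_physical_stats DMV; a sketch against a hypothetical dbo.Orders table:

```sql
-- Report fragmentation per index on dbo.Orders (hypothetical table)
SELECT i.name AS IndexName,
       ps.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Orders'),
                                    NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id AND i.index_id = ps.index_id;
-- Common rule of thumb: REORGANIZE at ~5-30% fragmentation, REBUILD above ~30%
```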