SQL SERVER 2017 HOSTING – Easy Ways To Speed Up Your SQL Server
Use SET NOCOUNT ON inside ALL your Stored Procedures
By default, every time a stored procedure is executed, a message is sent from the server to the client indicating the number of rows that were affected by the stored procedure. Rarely is this information useful to the client. By turning off this default behavior, you can reduce network traffic between the server and the client, helping to boost overall performance of your server and applications.
To turn this feature off at the stored procedure level, include the statement:
SET NOCOUNT ON
at the beginning of every stored procedure you write.
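For illustration, here is a minimal sketch of a stored procedure that sets NOCOUNT at the top; the procedure, table, and column names are hypothetical:

CREATE PROCEDURE dbo.usp_GetCustomerOrders
    @CustomerID INT
AS
BEGIN
    -- Suppress the "N rows affected" message for every statement in this procedure
    SET NOCOUNT ON;

    SELECT OrderID, OrderDate, TotalDue
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID;
END
GO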
Don’t Name any of Your Stored Procedures with ‘sp_’ at the start
If you are creating a stored procedure to run in a database other than the Master database, don’t use the prefix “sp_” in its name. This prefix is reserved for system stored procedures. Using it will not prevent a user-defined stored procedure from working, but it can slow down its execution ever so slightly.
The reason is that, by default, SQL Server first tries to resolve any stored procedure whose name begins with “sp_” in the Master database. Since your procedure is not there, time is wasted looking for it.
If SQL Server cannot find the stored procedure in the Master database, it next tries to resolve the name as if the owner of the object were “dbo”. Assuming the stored procedure is in the current database, it will then execute. To avoid this unnecessary delay, don’t name any of your stored procedures with the prefix “sp_”.
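As an illustration, a user-defined procedure can simply use a different prefix, such as “usp_”; the names below are hypothetical:

-- Avoid: SQL Server checks the Master database first for this name
CREATE PROCEDURE dbo.sp_GetOrders AS SELECT OrderID FROM dbo.Orders;
GO

-- Better: a prefix such as usp_ skips the extra lookup in Master
CREATE PROCEDURE dbo.usp_GetOrders AS SELECT OrderID FROM dbo.Orders;
GO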
Adjust the Index Fillfactor setting
Closely related to index rebuilding is the fillfactor. When you create a new index, or rebuild an existing one, you can specify a fillfactor, which is how full the data pages in the index are when they are created. A fillfactor of 100 means that each index page is 100% full; a fillfactor of 50 means each index page is only 50% full.
If you create a clustered index (on a non-monotonically ascending column) that has a fillfactor of 100, that means that each time a record is inserted (or perhaps updated), page splits will occur because there is no room for the data in the existing pages. Numerous page splits can slow down SQL Server’s performance.
Here’s an example: Assume that you have just created a new index on a table with the default fillfactor. When SQL Server creates the index, it places the index on contiguous physical pages, which allows optimal I/O access because the data can be read sequentially. But as the table grows and changes with INSERTs, UPDATEs, and DELETEs, page splitting occurs. When pages split, SQL Server must allocate new pages elsewhere on the disk, and these new pages are not contiguous with the original physical pages. Because of this, random I/O, which is much slower than sequential I/O, must be used to access the index pages.
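If you want to see how much page splitting has fragmented an index, you can query the sys.dm_db_index_physical_stats DMV. A minimal sketch, where the table name is a placeholder you would substitute with your own:

-- Check logical fragmentation for every index on a given table
SELECT i.name AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Orders'), NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id;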
So what is the ideal fillfactor? It depends on the ratio of reads to writes that your application makes to your SQL Server tables. As a rule of thumb, follow these guidelines:
- Low Update Tables (100-1 read to write ratio): 100% fillfactor
- High Update Tables (where writes exceed reads): 50%-70% fillfactor
- Everything In-Between: 80%-90% fillfactor.
You may have to experiment to find the optimum fillfactor for your particular application. Don’t assume that a low fillfactor is always better than a high fillfactor. While page splits will be reduced with a low fillfactor, it also increases the number of pages that SQL Server has to read during queries, which reduces performance. And not only is I/O overhead increased with too low a fillfactor, it also affects your buffer cache. As data pages are moved from disk into the buffer, the entire page (including empty space) is moved. So the lower the fillfactor, the more pages have to be moved into SQL Server’s buffer, which means there is less room for other important data pages to reside at the same time, which can reduce performance.
If you don’t specify a fillfactor, the default fillfactor is 0, which means the same as a 100% fillfactor (the leaf pages of the index are filled 100%, but there is some room left on intermediate index pages).
As part of your audit process, you need to determine what fillfactor is being used to create new indexes and rebuild current indexes. In virtually all cases, except for read-only databases, the default value of 0 is not appropriate. Instead, you will want a fillfactor that leaves an appropriate amount of free space, as discussed above.
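The steps below change the server-wide default, but you can also set the fillfactor for an individual index when you create or rebuild it. A minimal sketch, using hypothetical index and table names and an illustrative fillfactor of 90:

-- Rebuild an existing index, leaving 10% free space on each leaf page
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders
REBUILD WITH (FILLFACTOR = 90);
GO

-- Or specify the fillfactor when creating a new index
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate ON dbo.Orders (OrderDate)
WITH (FILLFACTOR = 90);
GO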
How To:
Step 1 – To view what the fill factor is currently set to on your SQL Server, run the following query in SQL Server Management Studio (because this is an advanced option, it will only appear in the output after Step 2 has been run):
SP_CONFIGURE
Step 2 – Some of the configuration settings are considered “advanced” settings. Before you can change these options using the SP_CONFIGURE command, you must first enable the “show advanced options” setting. Run the following query in SQL Server Management Studio:
SP_CONFIGURE 'show advanced options', 1
GO
RECONFIGURE
GO
Step 3 – Most, but not all, of the SQL Server configuration settings can also be changed through the SQL Server Management Studio user interface, but one of the easiest ways to change any of these settings is the SP_CONFIGURE command. Run the following query:
SP_CONFIGURE 'fill factor (%)', [configuration setting value]
GO
RECONFIGURE WITH OVERRIDE
GO
Note: Do not include the square brackets ‘[’, ‘]’ when running the query; replace the placeholder with the fill factor percentage you want. The new default fill factor does not take effect until the SQL Server service is restarted.
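For example, to set the server-wide default fill factor to 90% (90 is only an illustrative value; pick one based on the guidelines above):

SP_CONFIGURE 'fill factor (%)', 90
GO
RECONFIGURE WITH OVERRIDE
GO
-- Restart the SQL Server service for the new default to take effect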
Adjust the Min Memory Per Query setting
When a query runs, SQL Server does its best to allocate the optimum amount of memory for it to run efficiently and quickly. By default, the “minimum memory per query” setting allocates a minimum of 1024 KB for each query to run. The “minimum memory per query” setting can be set from 512 KB to 2147483647 KB.
If a query needs more memory to run efficiently, and if it is available, then SQL Server automatically assigns more memory to the query. Because of this, changing the value of the “minimum memory per query” default setting is generally not advised.
In some cases, if your SQL Server has more RAM than it needs to run efficiently, the performance of some queries can be boosted if you increase the “minimum memory per query” setting to a higher value, such as 2048 KB, or perhaps a little higher. As long as there is “excess” memory available in the server (essentially, RAM that is not being used by SQL Server), then boosting this setting can help overall SQL Server performance. But if there is no excess memory available, increasing the amount of memory for this setting is more likely to hurt overall performance, not help it.
How To:
Step 1 – To view what Min Memory Per Query is currently set to on your SQL Server, run the following query in SQL Server Management Studio (because this is an advanced option, it will only appear in the output after Step 2 has been run):
SP_CONFIGURE
Step 2 – Some of the configuration settings are considered “advanced” settings. Before you can change these options using the SP_CONFIGURE command, you must first enable the “show advanced options” setting. Run the following query in SQL Server Management Studio:
SP_CONFIGURE 'show advanced options', 1
GO
RECONFIGURE
GO
Step 3 – Most, but not all, of the SQL Server configuration settings can also be changed through the SQL Server Management Studio user interface, but one of the easiest ways to change any of these settings is the SP_CONFIGURE command. Run the following query:
SP_CONFIGURE 'min memory per query (KB)', [configuration setting value]
GO
RECONFIGURE WITH OVERRIDE
GO
Note: Do not include the square brackets ‘[’, ‘]’ when running the query; replace the placeholder with the memory value in KB. This setting takes effect immediately after RECONFIGURE, without a restart.
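For example, to raise the minimum to 2048 KB (again, only an illustrative value, and only advisable when the server has memory to spare):

SP_CONFIGURE 'min memory per query (KB)', 2048
GO
RECONFIGURE WITH OVERRIDE
GO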