Windows policy Lock Pages in Memory option and SQL Server instance in VM


The Lock Pages in Memory Windows policy determines which accounts can use a process to keep data in physical memory, preventing the Windows operating system from paging a significant amount of data out of physical memory to virtual memory on disk. This policy is disabled by default, and it should be enabled for the SQL Server service account, because doing so can improve the performance of a SQL Server instance running on a virtual machine (VM), where paging memory to disk is expected. When it is not enabled, there is a risk that SQL Server buffer pool pages will be paged out from physical memory to virtual memory on disk.
*Note: This only applies to SQL Server instances running on a virtual machine (VM).

You can use the following Transact-SQL script to check whether this Windows policy is enabled for the SQL Server service startup account:

SET NOCOUNT ON;

DECLARE @CMDShellFlag [bit] ,
        @CheckCommand [nvarchar](256);

DECLARE @xp_cmdshell_output TABLE
    (
      [output] [varchar](8000)
    );

-- Temporarily enable xp_cmdshell if it is not already enabled
IF NOT EXISTS ( SELECT  *
                FROM    [sys].[configurations]
                WHERE   [name] = N'xp_cmdshell'
                        AND [value_in_use] = 1 )
    BEGIN
		
        SET @CMDShellFlag = 1;

        EXEC [sp_configure] 'show advanced options', 1;

        RECONFIGURE;

        EXEC [sp_configure] 'xp_cmdshell', 1;

        RECONFIGURE;

        EXEC [sp_configure] 'show advanced options', 0;

        RECONFIGURE;
    END

-- Run whoami /priv to list the privileges held by the SQL Server service account
SELECT  @CheckCommand = 'EXEC [master]..[xp_cmdshell]' + SPACE(1) + QUOTENAME('whoami /priv', '''');

INSERT INTO @xp_cmdshell_output
        ( [output] )
EXEC [sys].[sp_executesql] @CheckCommand;

-- SeLockMemoryPrivilege is the user right behind the Lock Pages in Memory policy
IF EXISTS ( SELECT  *
            FROM    @xp_cmdshell_output
            WHERE   [output] LIKE '%SeLockMemoryPrivilege%enabled%' )
    SELECT  'Windows policy Lock Pages in Memory option is enabled' AS [Finding];
ELSE
    SELECT  'Windows policy Lock Pages in Memory option is disabled' AS [Finding]; 

-- Disable xp_cmdshell again if this script enabled it
IF @CMDShellFlag = 1
    BEGIN

        EXEC [sp_configure] 'show advanced options', 1;

        RECONFIGURE;

        EXEC [sp_configure] 'xp_cmdshell', 0;

        RECONFIGURE;

        EXEC [sp_configure] 'show advanced options', 0;

        RECONFIGURE;
    END

SET NOCOUNT OFF;

Here are the instructions to enable the Lock Pages in Memory Windows policy:

  1. On the Start menu, click Run. In the Open box, type gpedit.msc.
  2. On the Local Group Policy Editor console, expand Computer Configuration, and then expand Windows Settings.
  3. Expand Security Settings, and then expand Local Policies.
  4. Select the User Rights Assignment folder. The policies will be displayed in the details pane.
  5. In the pane, double-click Lock pages in memory.
  6. In the Local Security Setting – Lock pages in memory dialog box, click Add User or Group.
  7. In the Select Users, Service Accounts, or Groups dialog box, add the SQL Server service startup account.
  8. Restart SQL Server for this change to take effect.

For more information, refer to the article “Enable the Lock Pages in Memory Option (Windows)” on the Microsoft Developer Network (MSDN) website.

Very Important Note: If you enable the Lock Pages in Memory Windows policy, you must also configure the min server memory and max server memory options, because these options help you control the amount of memory that SQL Server can consume. Ignoring this recommendation can severely reduce SQL Server performance and even prevent SQL Server from starting. For more information about how to configure these two memory options, refer to the article “Server Memory Server Configuration Options” on the Microsoft Developer Network (MSDN) website.
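
For illustration, here is a minimal sketch of setting these two options with sp_configure. The memory values below are examples only, not recommendations; size them for your own server.

-- Example only: set min and max server memory (values in MB) for the instance.
EXEC [sp_configure] 'show advanced options', 1;
RECONFIGURE;

EXEC [sp_configure] 'min server memory (MB)', 4096;   -- example value
EXEC [sp_configure] 'max server memory (MB)', 12288;  -- example value
RECONFIGURE;

EXEC [sp_configure] 'show advanced options', 0;
RECONFIGURE;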

I hope you will find this post useful. 🙂

Have a nice week….

Basit

Tables without Clustered Indexes?

Today, we experienced performance issues with some of the SSRS reports that were deployed as part of the latest application/database release. While investigating this performance problem, I realized that the underlying tables for these report queries do not have a clustered index. I was even more surprised when I realized that some of these tables are huge and have a few million rows in them. A good practice is to have a clustered index on all tables in SQL Server, as it helps to improve query performance. This is due to the fact that clustered indexes determine a table’s physical sort order, and a table that doesn’t have a clustered index is stored in a set of data pages called a heap, where:

  • Data is stored in the order in which it is entered.
  • Rows are not stored in any particular order.
  • Pages aren’t sequenced in any particular order.
  • There is not a linked list linking the data pages.

When a table has a clustered index, SQL Server physically sorts the table rows in clustered index order, based on the clustered index key column values. In short, the leaf level of the clustered index contains the data pages, so scanning them returns the actual data rows. This is also why a table can have only one clustered index.
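
As a quick illustration (the table and column names below are hypothetical, not the actual report tables), turning a heap into a clustered table only requires creating a clustered index on it, for example:

-- Example only: create a clustered index on a heap table (hypothetical names).
CREATE CLUSTERED INDEX [CIX_ReportSales_SaleDate]
    ON [dbo].[ReportSales] ([SaleDate] ASC);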

When to have a clustered index on a table?

Although it is not mandatory to have a clustered index on every table, according to the MSDN article (Clustered Index Design Guidelines), with few exceptions, every table should have a clustered index defined on the column or columns that are used as follows:

  • The table is large and does not have a nonclustered index. Having a clustered index improves performance, because without it, all rows of the table must be read to find any row.
  • The column or columns are frequently queried and data is returned in sorted order. Having a clustered index on the sorting column or columns avoids the sort operation and returns the data in sorted order.
  • The column or columns are frequently queried and data is grouped together. Because data must be sorted before it is grouped, having a clustered index on the sorting column or columns avoids the sort operation.
  • The column or columns are frequently used in queries to search for ranges of data in the table. Having a clustered index on the range column avoids sorting the entire table.

So, in order to resolve these performance issues, I rewrote these queries and created clustered indexes on tables where appropriate. I then analysed further and used the following two queries to find out which tables in other databases do not have a clustered index defined.

The first query returns the names of all tables that have a row count greater than the specified threshold and do not have a clustered index defined. This query inner joins the sys.tables system catalog view to the sys.dm_db_partition_stats dynamic management view to obtain this information (see below):

DECLARE @MinTableRowsThreshold [int];

SET @MinTableRowsThreshold = 5000;

;WITH    [TablesWithoutClusteredIndexes] ( [db_name], [table_name], [table_schema], [row_count] )
          AS ( SELECT   DB_NAME() ,
                        t.[name] ,
                        SCHEMA_NAME(t.[schema_id]) ,
                        SUM(ps.[row_count])
               FROM     [sys].[tables] t
                        INNER JOIN [sys].[dm_db_partition_stats] ps
                            ON ps.[object_id] = t.[object_id]
               WHERE    OBJECTPROPERTY(t.[object_id], N'TableHasClustIndex') = 0
                        AND ps.[index_id] < 2
               GROUP BY t.[name] ,
                        t.[schema_id] )
    SELECT  *
    FROM    [TablesWithoutClusteredIndexes]
    WHERE   [row_count] > @MinTableRowsThreshold;

The second query is a slightly modified version of the first query and returns the names of actively queried tables that have a row count greater than the specified threshold and do not have a clustered index defined. This query inner joins sys.dm_db_index_usage_stats to the first query to identify actively queried tables (see below):

DECLARE @MinTableRowsThreshold [int];

SET @MinTableRowsThreshold = 5000;

;WITH    [TablesWithoutClusteredIndexes] ( [db_name], [table_name], [table_schema], [row_count] )
          AS ( SELECT   DB_NAME() ,
                        t.[name] ,
                        SCHEMA_NAME(t.[schema_id]) ,
                        SUM(ps.[row_count])
               FROM     [sys].[tables] t
                        INNER JOIN [sys].[dm_db_partition_stats] ps
                            ON ps.[object_id] = t.[object_id]
                        INNER JOIN [sys].[dm_db_index_usage_stats] us
                            ON ps.[object_id] = us.[object_id]
                               -- usage stats are instance-wide, so limit them to the current database
                               AND us.[database_id] = DB_ID()
               WHERE    OBJECTPROPERTY(t.[object_id], N'TableHasClustIndex') = 0
                        AND ps.[index_id] < 2
                        AND COALESCE(us.[user_seeks] ,
                                     us.[user_scans] ,
                                     us.[user_lookups] ,
                                     us.[user_updates]) IS NOT NULL
               GROUP BY t.[name] ,
                        t.[schema_id] )
    SELECT  *
    FROM    [TablesWithoutClusteredIndexes]
    WHERE   [row_count] > @MinTableRowsThreshold;

I hope you will find this post informative. For further information about clustered index design guidelines, see the MSDN resource here.


Get performance statistics for queries and query plans that are cached by SQL Server for faster query execution

You can use the following two DMVs to give you that information:

  • sys.dm_exec_cached_plans – You can use this dynamic management view to see information about the plans that are cached by SQL Server, along with other information, such as the amount of memory taken by the cached plans and the reuse count of the cached plans. This DMV supersedes the sys.syscacheobjects system catalog view.
  • sys.dm_exec_query_stats – You can use this dynamic management view to see one row of aggregate performance statistics per query statement within the cached plan. An initial run of the query may return inaccurate results if a workload is currently executing on the server; more accurate results can be determined by rerunning the query.

With these DMVs we can troubleshoot complex performance problems.

Example 1:

The following example uses the sys.dm_exec_cached_plans, sys.dm_exec_sql_text and sys.dm_exec_query_plan dynamic management views to return all queries and query plans that are cached by SQL Server for faster query execution:

SELECT cp.[usecounts]
	,cp.[refcounts]
	,cp.[cacheobjtype]
	,CASE cp.[objtype]
		WHEN 'Proc'
			THEN 'Stored procedure'
		WHEN 'Prepared'
			THEN 'Prepared statement'
		WHEN 'Adhoc'
			THEN 'Ad hoc query'
		WHEN 'ReplProc'
			THEN 'Replication-filter-procedure'
		WHEN 'UsrTab'
			THEN 'User table'
		WHEN 'SysTab'
			THEN 'System table'
		WHEN 'Check'
			THEN 'Check constraint'
		ELSE cp.[objtype]
		END AS [object_type]
	,cp.[size_in_bytes]
	,ISNULL(DB_NAME(qt.[dbid]), 'resourcedb') AS [db_name]
	,qp.[query_plan]
	,qt.[text]
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.[plan_handle]) qt
CROSS APPLY sys.dm_exec_query_plan(cp.[plan_handle]) qp
ORDER BY cp.[usecounts] DESC;
GO

Example 2:

The following example uses sys.dm_exec_query_stats, sys.dm_exec_sql_text and sys.dm_exec_query_plan to return execution statistics about the top ten queries ranked by average CPU time.

SELECT TOP 10 SUBSTRING(qt.[text], qs.[statement_start_offset] / 2 + 1, (
			CASE 
				WHEN qs.[statement_end_offset] = - 1
					THEN LEN(CONVERT([nvarchar](max), qt.[text])) * 2
				ELSE qs.[statement_end_offset]
				END - qs.[statement_start_offset]
			) / 2) AS [query_text]
	,qp.[query_plan]
	,qs.[last_execution_time]
	,qs.[execution_count]
	,qs.[last_logical_reads]
	,qs.[last_logical_writes]
	,qs.[last_worker_time]
	,qs.[last_elapsed_time]
	,qs.[total_logical_reads]
	,qs.[total_logical_writes]
	,qs.[total_worker_time]
	,qs.[total_elapsed_time]
	,qs.[total_worker_time] / qs.[execution_count] AS [avg_cpu_time]
	,qs.[total_elapsed_time] / qs.[execution_count] AS [avg_running_time]
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
CROSS APPLY sys.dm_exec_query_plan(qs.[plan_handle]) AS qp
WHERE qt.[text] LIKE '%SELECT%'
ORDER BY [avg_cpu_time] DESC
	,qs.[execution_count] DESC;

For more information on these dynamic management views, see “sys.dm_exec_cached_plans (Transact-SQL)” and “sys.dm_exec_query_stats (Transact-SQL)” on the MSDN website.

Monitoring Blocked Processes with Event Notifications

SQL Server is able to service requests from a large number of concurrent users. When SQL Server is servicing requests from many clients, there is a strong possibility that conflicts will occur because different processes request access to the same resources at the same time. A conflict in which one process is waiting for another to release a resource is called a block. In SQL Server, a blocked process usually resolves itself when the first process releases the resource but there are times when a process holds a transaction lock and doesn’t release it. Blocking typically occurs more frequently with increasing transaction volumes.
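
As a quick illustration (a minimal sketch, not from the original article), the following query lists the sessions that are currently blocked and the sessions blocking them:

-- Show currently blocked requests and the sessions blocking them.
SELECT r.[session_id] AS [BlockedSessionID]
	,r.[blocking_session_id] AS [BlockingSessionID]
	,r.[wait_type] AS [WaitType]
	,r.[wait_time] AS [WaitTimeInMilliseconds]
	,t.[text] AS [BlockedQueryText]
FROM [sys].[dm_exec_requests] r
CROSS APPLY [sys].[dm_exec_sql_text](r.[sql_handle]) t
WHERE r.[blocking_session_id] <> 0;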

Fortunately, SQL Server provides a variety of tools that help database administrators (DBAs) and developers identify blocked and blocking processes on a SQL Server instance.

Of all these tools, Extended Events and Event Notifications are the most efficient and modern ways to capture blocking information.

Event Notifications were introduced in SQL Server 2005 and offer the ability to collect a very specific subset of SQL Trace and DDL events through a Service Broker service and queue. SQL Server Extended Events (Extended Events) is a general event-handling system for server systems. The key difference between the two features is that Event Notifications allow automated, asynchronous processing of the events through Service Broker. There is no similar mechanism for Extended Events, even in SQL Server 2012, which is why Event Notifications and SQL Trace events still exist for event generation.

Being a fan of Event Notifications and Service Broker, I usually use these two features to proactively monitor blocked process information. Check out my article here, in which I demonstrate the steps you can follow to set up an Event Notification to proactively capture blocked process information.
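
For reference, the general approach looks roughly like the following minimal sketch (the queue, service and notification names are examples, not the ones used in my article):

-- 1. Raise a blocked process report when a process is blocked for more than 10 seconds (example threshold).
EXEC [sp_configure] 'show advanced options', 1;
RECONFIGURE;
EXEC [sp_configure] 'blocked process threshold (s)', 10;
RECONFIGURE;
GO

-- 2. Create a Service Broker queue and service to receive the event notifications
--    (run in a database with Service Broker enabled).
CREATE QUEUE [BlockedProcessQueue];
CREATE SERVICE [BlockedProcessService]
    ON QUEUE [BlockedProcessQueue]
    ([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]);
GO

-- 3. Create the event notification for the BLOCKED_PROCESS_REPORT trace event.
CREATE EVENT NOTIFICATION [CaptureBlockedProcessReports]
ON SERVER
FOR BLOCKED_PROCESS_REPORT
TO SERVICE 'BlockedProcessService', 'current database';
GO

The blocked process reports then arrive as XML messages in the queue, where they can be processed automatically by an activation procedure or read with RECEIVE.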

This article is published on SSWUG.org.

Using sys.dm_db_index_operational_stats to analyse how indexes are utilised

This dynamic management function (DMF) returns detailed information about low-level activity on indexes, such as input/output (I/O) operations, locking and latching contention, and access methods.

You can also use the sys.dm_db_index_operational_stats function to find information such as how long users must wait to read or write to a table, index, or partition, and to identify the tables or indexes that are encountering significant I/O activity.

The following is the general syntax of this dynamic management function (DMF):

sys.dm_db_index_operational_stats (
     { database_id | NULL | 0 | DEFAULT }
   , { object_id | NULL | 0 | DEFAULT }
   , { index_id | 0 | NULL | -1 | DEFAULT }
   , { partition_number | NULL | 0 | DEFAULT })

Example:

Query 1 – Execute to find out blocking per database object:

SELECT DB_NAME([database_id]) AS [Database]
	,iops.[object_id] AS [ObjectID]
	,QUOTENAME(OBJECT_SCHEMA_NAME(iops.[object_id], [database_id])) + N'.' + QUOTENAME(OBJECT_NAME(iops.[object_id], [database_id])) AS [ObjectName]
	,i.[index_id] AS [IndexID]
	,i.[name] AS [IndexName]
	,i.[fill_factor] AS [IndexFillFactor]
	,iops.[partition_number] AS [PartitionNumber]
	,CASE 
		WHEN i.[is_unique] = 1
			THEN 'UNIQUE '
		ELSE ''
		END + i.[type_desc] AS [IndexType]
	,iops.[row_lock_count] AS [RowLockCount]
	,iops.[row_lock_wait_count] AS [RowLockWaitCount]
	,CAST(100.0 * iops.[row_lock_wait_count] / (iops.[row_lock_count] + 1) AS NUMERIC(15, 2)) AS [BlockedPercent]
	,iops.[row_lock_wait_in_ms] AS [RowLockWaitInMilliseconds]
	,CAST(1.0 * iops.[row_lock_wait_in_ms] / (1 + iops.[row_lock_wait_count]) AS NUMERIC(15, 2)) AS [AverageRowLockWaitInMilliseconds]
FROM [sys].[dm_db_index_operational_stats](DB_ID(), NULL, NULL, NULL) iops
INNER JOIN [sys].[indexes] i
	ON i.[object_id] = iops.[object_id]
		AND i.[index_id] = iops.[index_id]
		AND iops.[row_lock_wait_count] > 0
WHERE OBJECTPROPERTY(iops.[object_id], 'IsUserTable') = 1
ORDER BY iops.[row_lock_wait_count] DESC;

Query 2 – Execute to analyse statistics of physical I/Os on an index or heap partition:

SELECT DB_NAME([database_id]) AS [Database]
	,iops.[object_id] AS [ObjectID]
	,QUOTENAME(OBJECT_SCHEMA_NAME(iops.[object_id], [database_id])) + N'.' + QUOTENAME(OBJECT_NAME(iops.[object_id], [database_id])) AS [ObjectName]
	,i.[name] AS [IndexName]
	,CASE 
		WHEN i.[is_unique] = 1
			THEN 'UNIQUE '
		ELSE ''
		END + i.[type_desc] AS [IndexType]
	,iops.[page_latch_wait_count] AS [PageLatchWaitCount]
	,iops.[page_io_latch_wait_count] AS [PageIOLatchWaitCount]
	,iops.[page_io_latch_wait_in_ms] AS [PageIOLatchWaitInMilliseconds]
FROM [sys].[dm_db_index_operational_stats](DB_ID(), NULL, NULL, NULL) iops
INNER JOIN [sys].[indexes] i
	ON i.[object_id] = iops.[object_id]
		AND i.[index_id] = iops.[index_id]
ORDER BY iops.[page_latch_wait_count] + iops.[page_io_latch_wait_count] DESC

Query 3 – Execute to find how many times the Database Engine used row or page locks:

SELECT DB_NAME([database_id]) AS [Database]
	,iops.[object_id] AS [ObjectID]
	,QUOTENAME(OBJECT_SCHEMA_NAME(iops.[object_id], [database_id])) + N'.' + QUOTENAME(OBJECT_NAME(iops.[object_id], [database_id])) AS [ObjectName]
	,iops.[row_lock_count] AS [RowLockCount]
	,iops.[page_lock_count] AS [PageLockCount]
FROM [sys].[dm_db_index_operational_stats](DB_ID(), NULL, NULL, NULL) iops
INNER JOIN [sys].[indexes] i
	ON i.[object_id] = iops.[object_id]
		AND i.[index_id] = iops.[index_id]
ORDER BY iops.[row_lock_count] + iops.[page_lock_count] DESC;

Query 4 – Execute to determine the number of update, insert, and delete operations against each index in the database:

SELECT DB_NAME([database_id]) AS [Database]
	,iops.[object_id] AS [ObjectID]
	,QUOTENAME(OBJECT_SCHEMA_NAME(iops.[object_id], [database_id])) + N'.' + QUOTENAME(OBJECT_NAME(iops.[object_id], [database_id])) AS [ObjectName]
	,i.[index_id] AS [IndexID]
	,i.[name] AS [IndexName]
	,i.[fill_factor] AS [IndexFillFactor]
	,iops.[partition_number] AS [PartitionNumber]
	,CASE 
		WHEN i.[is_unique] = 1
			THEN 'UNIQUE '
		ELSE ''
		END + i.[type_desc] AS [IndexType]
	,iops.[leaf_insert_count] AS [LeafInsertCount]
	,iops.[leaf_delete_count] AS [LeafDeleteCount]
	,iops.[leaf_update_count] AS [LeafUpdateCount]
FROM [sys].[dm_db_index_operational_stats](DB_ID(), NULL, NULL, NULL) iops
INNER JOIN [sys].[indexes] i
	ON i.[object_id] = iops.[object_id]
		AND i.[index_id] = iops.[index_id]
ORDER BY iops.[leaf_insert_count] + iops.[leaf_delete_count] + iops.[leaf_update_count] DESC;

As we have seen from the above queries, the sys.dm_db_index_operational_stats dynamic management function provides the current low-level I/O, locking, latching, and access method activity for each partition of a table. This information is really useful when troubleshooting SQL Server performance issues.

For more information about this dynamic management function (DMF), see sys.dm_db_index_operational_stats (Transact-SQL).

Find missing indexes using SQL Server's index-related DMVs

Today, we experienced performance issues with a few databases that are hosted on one of our most critical production SQL Servers. Upon reviewing the query execution plans and querying the index-related dynamic management views (DMVs), I noticed that the problem is related to potential missing indexes on some columns. The index-related dynamic management views (DMVs) I queried are as follows:

  • sys.dm_db_missing_index_details — Returns detailed information about missing indexes, including the table, columns used in equality operations, columns used in inequality operations, and columns used in include operations.
  • sys.dm_db_missing_index_group_stats — Returns information about groups of missing indexes, which SQL Server updates with each query execution (not based on query compilation or recompilation).
  • sys.dm_db_missing_index_groups — Returns information about missing indexes contained in a missing index group.

Using these dynamic management views (DMVs), I wrote the following query, which returns the list of possible missing indexes for all SQL Server user databases. The results are ordered by index advantage, which helps you identify how beneficial each index would be if you created it on the table.

SELECT CAST(SERVERPROPERTY('ServerName') AS [nvarchar](256)) AS [SQLServer]
	,db.[database_id] AS [DatabaseID]
	,db.[name] AS [DatabaseName]
	,id.[object_id] AS [ObjectID]
	,id.[statement] AS [FullyQualifiedObjectName]
	,id.[equality_columns] AS [EqualityColumns]
	,id.[inequality_columns] AS [InEqualityColumns]
	,id.[included_columns] AS [IncludedColumns]
	,gs.[unique_compiles] AS [UniqueCompiles]
	,gs.[user_seeks] AS [UserSeeks]
	,gs.[user_scans] AS [UserScans]
	,gs.[last_user_seek] AS [LastUserSeekTime]
	,gs.[last_user_scan] AS [LastUserScanTime]
	,gs.[avg_total_user_cost] AS [AvgTotalUserCost]
	,gs.[avg_user_impact] AS [AvgUserImpact]
	,gs.[system_seeks] AS [SystemSeeks]
	,gs.[system_scans] AS [SystemScans]
	,gs.[last_system_seek] AS [LastSystemSeekTime]
	,gs.[last_system_scan] AS [LastSystemScanTime]
	,gs.[avg_total_system_cost] AS [AvgTotalSystemCost]
	,gs.[avg_system_impact] AS [AvgSystemImpact]
	,gs.[user_seeks] * gs.[avg_total_user_cost] * (gs.[avg_user_impact] * 0.01) AS [IndexAdvantage]
	,'CREATE INDEX [Missing_IXNC_' + OBJECT_NAME(id.[object_id], db.[database_id]) + '_' + REPLACE(REPLACE(REPLACE(ISNULL(id.[equality_columns], ''), ', ', '_'), '[', ''), ']', '') + CASE 
		WHEN id.[equality_columns] IS NOT NULL
			AND id.[inequality_columns] IS NOT NULL
			THEN '_'
		ELSE ''
		END + REPLACE(REPLACE(REPLACE(ISNULL(id.[inequality_columns], ''), ', ', '_'), '[', ''), ']', '') + '_' + LEFT(CAST(NEWID() AS [nvarchar](64)), 5) + ']' + ' ON ' + id.[statement] + ' (' + ISNULL(id.[equality_columns], '') + CASE 
		WHEN id.[equality_columns] IS NOT NULL
			AND id.[inequality_columns] IS NOT NULL
			THEN ','
		ELSE ''
		END + ISNULL(id.[inequality_columns], '') + ')' + ISNULL(' INCLUDE (' + id.[included_columns] + ')', '') AS [ProposedIndex]
	,CAST(CURRENT_TIMESTAMP AS [smalldatetime]) AS [CollectionDate]
FROM [sys].[dm_db_missing_index_group_stats] gs WITH (NOLOCK)
INNER JOIN [sys].[dm_db_missing_index_groups] ig WITH (NOLOCK)
	ON gs.[group_handle] = ig.[index_group_handle]
INNER JOIN [sys].[dm_db_missing_index_details] id WITH (NOLOCK)
	ON ig.[index_handle] = id.[index_handle]
INNER JOIN [sys].[databases] db WITH (NOLOCK)
	ON db.[database_id] = id.[database_id]
WHERE id.[database_id] > 4 -- Remove this to see for entire instance
ORDER BY [IndexAdvantage] DESC
OPTION (RECOMPILE);

Obviously, these missing indexes are the ones that the SQL Server optimizer identified during query compilation, and these missing index recommendations are specific recommendations targeting specific queries. Consider submitting your workload and the proposed indexes to the Database Engine Tuning Advisor for further evaluation that includes partitioning, the choice of clustered versus nonclustered indexes, and so on.

For more information about the Database Engine Tuning Advisor, see Start and Use the Database Engine Tuning Advisor and Tutorial: Database Engine Tuning Advisor.

SQL Server Locking Overview

Locking is a necessary part of transaction processing when working in a multi-user Online Transaction Processing (OLTP) environment. You use locks to prevent update conflicts. While one user is updating data, locks prevent other users from accessing the data being updated. Locks help to prevent:

  • Lost updates: Occurs when two transactions update the same data at the same time. Changes are saved for the last transaction that writes to the database, overwriting the other transaction’s changes.
  • Dirty reads: Occurs when a transaction reads another transaction’s uncommitted data. This can lead to making inaccurate changes to the data. This is also known as an uncommitted dependency.
  • Nonrepeatable reads: Row data changes between reads. This is also referred to as inconsistent analysis.
  • Phantoms: A row appears in a repeated read that was not present in the original read, because another transaction inserted it in the meantime.

You can serialize transactions with locks, meaning that only one person can change a data element, such as a specific row, at one time. SQL Server can issue a lock for:

  • A row identifier (RID), locking a single row in a table.
  • A key, which is a row lock within an index.
  • A table, which locks all data rows and indexes.
  • A database, which is used when restoring a database.
  • A page, locking an 8-KB data or index page.
  • An extent, locking a contiguous group of pages during space allocation.

SQL Server selects a locking level appropriate to the current data manipulation or definition action. For example, SQL Server uses a row lock when updating a single data row in a table. SQL Server uses dynamic lock management, which means that the locking level can be adjusted automatically as needed. You can use the sys.dm_tran_locks dynamic management view to obtain information about active locks.
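
As a minimal sketch of querying this view (the filter on the current database is just an example), the following returns the active lock requests together with the requesting session and the lock mode:

-- List active lock requests in the current database.
SELECT tl.[request_session_id] AS [SessionID]
	,tl.[resource_type] AS [ResourceType]
	,tl.[resource_database_id] AS [DatabaseID]
	,tl.[request_mode] AS [LockMode]
	,tl.[request_status] AS [RequestStatus]
FROM [sys].[dm_tran_locks] tl
WHERE tl.[resource_database_id] = DB_ID()
ORDER BY tl.[request_session_id];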

Basic locks

SQL Server supports the following types of locks; shared and exclusive locks are the basic locks supported by SQL Server. A short example of requesting some of these lock modes follows the list.

  • Shared locks (S): Used when performing read-only operations against the database. Resources locked with a shared lock are available for SELECT, but not for modification.
  • Exclusive locks (X): Used for operations that modify data. INSERT, UPDATE, and DELETE require exclusive locks. No more than one transaction can have an exclusive lock on a resource. If an exclusive lock exists on a resource, no other transaction can access that resource.
  • Intent lock: Establishes a locking hierarchy. For example, if a transaction has an exclusive lock on a row, SQL Server places an intent lock on the table. When another transaction requests a lock on the table, SQL Server can check the table's intent lock instead of examining every row; if the table has no intent lock, the requested lock can be granted without checking each row for a lock.
  • Update lock (U): Placed on a page before performing an update. When SQL Server is ready to update the page, the lock will be promoted to an exclusive page lock.
  • Schema lock: Used to prevent a table or index that is being used by another session from being dropped or its schema being modified. When a resource is locked with a schema lock, the object cannot be accessed.
  • Bulk update locks (BU): Used to prevent other processes from accessing a table while a bulk load procedure is being processed. It will, however, allow processing of concurrent bulk load processes, which allows you to run parallel loads. A bulk load procedure is one performed by using bulk copy program (bcp) or BULK INSERT.
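
As a minimal sketch (the table name is hypothetical), the following transaction shows some of these lock modes in action; the table hint and statements are standard Transact-SQL:

BEGIN TRANSACTION;

	-- Shared (S) locks are taken for the read; HOLDLOCK keeps them until the end of the transaction.
	SELECT * FROM [dbo].[Orders] WITH (HOLDLOCK) WHERE [OrderID] = 1;

	-- Update (U) and then exclusive (X) locks are taken for the modification.
	UPDATE [dbo].[Orders] SET [Status] = 'Shipped' WHERE [OrderID] = 1;

COMMIT TRANSACTION;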

Optimistic and pessimistic locking

Two terms are commonly used to describe locking methods:

  • Pessimistic locking locks resources as they are acquired, holding the locks for the duration of the transaction. Pessimistic locking is more likely to cause deadlocks. A deadlock occurs with two transactions when each blocks access to resources needed by the other.
  • Optimistic locking assumes that conflicts between transactions are unlikely but might occur. Transactions are allowed to execute without locking resources. The only time resources are checked for a conflict is when data changes are made. If a conflict occurs, the transaction is rolled back.

Row versioning

Microsoft introduced row versioning as an alternative to shared locks in SQL Server 2005. With row versioning, copies (versions) of modified rows are kept in tempdb, and a transaction reads the version of each row that was current when the transaction began, for the duration of the transaction. Row versioning protects the transaction from:

  • Dirty reads
  • Nonrepeatable reads
  • Phantoms

Even when row versioning is used, SQL Server still takes an exclusive lock on a row before updating it.

Row versioning allows for optimum concurrency (multiple users accessing data at the same time), while providing good protection. However, you must ensure that tempdb has sufficient disk space available. Using row versioning might also degrade performance because of the resources required to move data in and out of tempdb.
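
As a minimal sketch (the database and table names are examples), row versioning-based isolation is enabled per database and then used by a session as follows:

-- Example only: enable row versioning-based isolation levels for a database.
ALTER DATABASE [SalesDB] SET READ_COMMITTED_SNAPSHOT ON;   -- statement-level row versioning
ALTER DATABASE [SalesDB] SET ALLOW_SNAPSHOT_ISOLATION ON;  -- transaction-level row versioning

-- A session can then opt in to snapshot isolation:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
	SELECT COUNT(*) FROM [dbo].[Orders];  -- reads row versions from tempdb, takes no shared locks
COMMIT TRANSACTION;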

For more information, see Lock Modes.