Stop the Lock Madness with Optimized Locking

What even is Optimized Locking?

If you've spent any time tuning busy OLTP workloads on SQL Server, you've made peace with locks. LCK_M_X waits in your DMVs, blocking chains three levels deep, that one rogue UPDATE holding an exclusive row lock while twelve other sessions twiddle their thumbs. Classic.

SQL Server 2025 ships with Optimized Locking, a fundamental rethink of how the engine acquires and holds row-level locks. The pitch: dramatically fewer locks in memory, less blocking, and better throughput on concurrent workloads, all without changing a single line of application code.

Optimized Locking reduces the number of row locks held at any point in time, sometimes by orders of magnitude, by combining two complementary techniques: Transaction ID (TID) locking and Lock After Qualification (LAQ).

It first appeared as a preview feature in Azure SQL Database, and it's now production-ready and on by default for new databases in SQL Server 2025 (when prerequisites are met). Let's dig in.


Required settings: the full checklist

Optimized Locking isn't a standalone toggle. It depends on the database being in a specific state. All four of the following must be true simultaneously:

01 / Required
Compatibility Level 160+
The database must be at compat level 160 (SQL Server 2022) or 170 (SQL Server 2025). Older compat levels opt out entirely.

ALTER DATABASE StackOverflow SET COMPATIBILITY_LEVEL = 170;
02 / Required
RCSI Enabled
Read Committed Snapshot Isolation must be ON. This gives readers a row version instead of a shared lock, freeing up the TID machinery to work.

ALTER DATABASE StackOverflow SET READ_COMMITTED_SNAPSHOT ON;
03 / Required
Accelerated Database Recovery
ADR must be enabled. It provides the per-transaction versioning store that TID locking depends on for rollback support.

ALTER DATABASE StackOverflow SET ACCELERATED_DATABASE_RECOVERY = ON;
04 / Required
User database only
Optimized Locking applies to user databases only. master, msdb, tempdb, and model are excluded regardless of settings.
Watch out: turning on RCSI requires brief exclusive database access (WITH ROLLBACK AFTER 5 is your friend), and ADR uses the Persistent Version Store (PVS) inside the user database, so size your data files accordingly on busy systems.

Here's the full setup script to run against a fresh Stack Overflow database copy:

T-SQL · setup.sql
-- Step 1: Compatibility level (170 = SQL Server 2025)
ALTER DATABASE StackOverflow
  SET COMPATIBILITY_LEVEL = 170;

-- Step 2: Enable RCSI (readers won't block writers)
ALTER DATABASE StackOverflow
  SET READ_COMMITTED_SNAPSHOT ON
  WITH ROLLBACK AFTER 5;  -- give active connections 5 seconds

-- Step 3: Enable Accelerated Database Recovery
ALTER DATABASE StackOverflow
  SET ACCELERATED_DATABASE_RECOVERY = ON;

-- Step 4: Verify all three prerequisites at once
SELECT
    name,
    DATABASEPROPERTYEX(name, 'IsOptimizedLockingOn')          AS optimized_locking_on,
    DATABASEPROPERTYEX(name, 'IsReadCommittedSnapshotOn')       AS rcsi_on,
    DATABASEPROPERTYEX(name, 'IsAcceleratedDatabaseRecoveryOn') AS adr_on
FROM  sys.databases
WHERE name = 'StackOverflow';

You should see 1 for all three columns once you're set up. If optimized_locking_on comes back 0, one of the prerequisites isn't met. Go back and check each one individually.


How it actually works

Part 1: Transaction ID (TID) Locking

Traditionally, when you update a row, SQL Server takes an exclusive (X) lock on that row's RID or key and holds it until the transaction commits or rolls back. Update 500,000 rows and you've got 500,000 row locks sitting in the lock manager. That's a lot of memory and a lot of contention surface.

With TID locking, the engine assigns a lightweight Transaction ID to each transaction and stamps it directly on modified rows in the row header. A reader who encounters a TID-stamped row takes a short-lived shared lock on the TID itself (not the row) and releases it immediately after checking the transaction's status. The "is this row safe to read?" question gets answered in microseconds instead of blocking until commit.

The mental model

Old way: put a "DO NOT TOUCH" sticky note on every single row you're modifying. Thousands of sticky notes, everyone has to check each one.

New way: put one badge on yourself that says "I'm in transaction #8472." Anyone who bumps into a row you touched just glances at your badge, checks if #8472 is still active, and moves on. One badge instead of thousands of sticky notes.

Part 2: Lock After Qualification (LAQ)

LAQ attacks a different problem: unnecessary locking during row scans. In the old model, SQL Server would acquire a lock on a row before evaluating the WHERE clause. So if you're running:

T-SQL
UPDATE dbo.Posts
SET    Score = Score + 1
WHERE  OwnerUserId = 22656  -- Jon Skeet, obviously
  AND  PostTypeId  = 1;

...the old engine would lock every row it touched during the scan, even rows that didn't match OwnerUserId = 22656. Lock, evaluate, discard. For every non-qualifying row. On a table scan of Posts looking for one user's posts, that's the difference between millions of transient lock acquisitions and a few thousand.

With LAQ, the engine evaluates the predicate first using the row version (courtesy of RCSI) and only acquires an exclusive lock if the row actually qualifies. Non-qualifying rows are never locked.

Without Optimized Locking

X lock acquired before WHERE evaluation
Locks held on non-qualifying rows during scan
Hundreds of thousands of row locks on large updates
Lock memory spikes under concurrency
Long-running transactions create long blocking chains

With Optimized Locking

Rows evaluated via row version before locking
X lock only taken on rows matching the predicate
TID stamp replaces thousands of individual row locks
Lock memory reduced by 10x to 100x on wide updates
Readers block for microseconds on TID check, not seconds
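
You can put a number on the lock-memory claim yourself by watching the lock manager's memory clerk before and after a big update. This is a generic DMV query, not specific to Optimized Locking; the clerk name is standard:

T-SQL · lock memory clerk
-- How much memory is the lock manager using right now?
SELECT
    type,
    SUM(pages_kb) / 1024.0 AS lock_memory_mb
FROM  sys.dm_os_memory_clerks
WHERE type = 'OBJECTSTORE_LOCK_MANAGER'
GROUP BY type;

Run it while a large update is in flight with and without Optimized Locking enabled and compare the two numbers.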

Demos with the Stack Overflow database

Enough theory. Let's get our hands dirty. These demos assume you've run the setup script above and you're on the Brent Ozar Stack Overflow restore.

Demo 1: Observe lock counts before and after

Open three query windows. Window 1 starts a transaction that updates Posts and leaves it open. Window 2 counts the locks. Window 3 proves readers don't block.

T-SQL · Window 1: open transaction
USE StackOverflow;
BEGIN TRANSACTION;

-- No-op update just to generate locks
UPDATE dbo.Posts
SET    Score = Score + 0
WHERE  OwnerUserId = 22656;

-- DON'T COMMIT YET, leave this window open
-- ROLLBACK TRANSACTION;  <-- run this when done
T-SQL · Window 2: count the locks
USE StackOverflow;

SELECT
    tl.request_mode,
    tl.resource_type,
    COUNT(*) AS lock_count
FROM   sys.dm_tran_locks   tl
JOIN   sys.dm_exec_sessions es
    ON tl.request_session_id = es.session_id
WHERE  es.is_user_process = 1
  AND  tl.request_status  = 'GRANT'
GROUP BY tl.request_mode, tl.resource_type
ORDER BY lock_count DESC;

-- Optimized Locking: TAB/PAG/DB + a handful of XACT (TID) locks
-- Without it: thousands of KEY locks. Big difference.
T-SQL · Window 3: readers don't block
USE StackOverflow;

-- Runs freely even while Window 1's transaction is open.
-- RCSI + Optimized Locking = readers see last committed version,
-- no waiting for the writer to finish.
SELECT TOP 10
    p.Id, p.Title, p.Score, p.CreationDate
FROM  dbo.Posts p
WHERE p.OwnerUserId = 22656
ORDER BY p.Score DESC;
-- Returns immediately. No blocking. Beautiful.

Demo 2: Viewing TID locks directly

SQL Server 2025 exposes TID locks as a new XACT resource type in sys.dm_tran_locks. Here's how to see them while an open transaction is running:

T-SQL · Viewing TID / XACT locks
USE StackOverflow;

-- Window 1 must still have its open transaction from Demo 1

SELECT
    tl.resource_type,       -- look for 'XACT' rows
    tl.resource_description,
    tl.request_mode,
    tl.request_status,
    es.session_id
FROM   sys.dm_tran_locks    tl
JOIN   sys.dm_exec_sessions es
    ON tl.request_session_id = es.session_id
WHERE  tl.resource_type   = 'XACT'
  AND  es.is_user_process = 1;

-- One XACT row per open transaction, not one per row modified.
-- That's the whole point.

Demo 3: LAQ lock counts on a selective update

Use Extended Events to count KEY lock acquisitions during a selective update. The difference vs. the old model shows up clearly here because the Votes table is large and the predicate is narrow.

T-SQL · LAQ demo with XE
USE StackOverflow;

-- Create an XE session to count KEY lock acquisitions
CREATE EVENT SESSION [LockCountDemo] ON SERVER
ADD EVENT sqlserver.lock_acquired(
    WHERE sqlserver.database_name = N'StackOverflow'
      AND resource_type = 7  -- 7 = KEY in the 'lock_resource_type' map (sys.dm_xe_map_values)
)
ADD TARGET package0.ring_buffer(SET max_memory = 51200);

ALTER EVENT SESSION [LockCountDemo] ON SERVER STATE = START;
GO

BEGIN TRANSACTION;
UPDATE dbo.Votes
SET    BountyAmount = 50
WHERE  UserId      = 22656
  AND  VoteTypeId  = 9;
ROLLBACK;
GO

-- Read the ring buffer: how many KEY locks were acquired?
SELECT
    target_data.value('(RingBufferTarget/@eventCount)[1]', 'INT') AS key_locks_acquired
FROM (
    SELECT CAST(t.target_data AS XML) AS target_data
    FROM  sys.dm_xe_sessions         s
    JOIN  sys.dm_xe_session_targets   t
        ON s.address = t.event_session_address
    WHERE s.name = 'LockCountDemo'
) x;

DROP EVENT SESSION [LockCountDemo] ON SERVER;

-- With LAQ: only qualifying rows get KEY locks.
-- Without: every scanned row gets a transient lock before predicate eval.

Demo 4: Two concurrent writers, overlapping rows

The classic OLTP nightmare: two sessions updating overlapping sets of data at the same time. See how Optimized Locking handles it.

T-SQL · Window A: first writer (leave open)
USE StackOverflow;
BEGIN TRANSACTION;

UPDATE dbo.Posts
SET    Score        = Score + 1,
       LastEditDate = GETDATE()
WHERE  OwnerUserId IN (22656, 135201, 29407)
  AND  PostTypeId  = 1;
-- Leave open, mimics a slow transaction
T-SQL · Window B: second writer (overlapping user)
USE StackOverflow;

-- Touches OwnerUserId 22656, overlaps with Window A.
-- Without Optimized Locking: blocks on Window A's entire KEY lock set.
-- With Optimized Locking + LAQ: rows checked individually.
-- Only true conflicts block. Non-overlapping rows proceed immediately.

BEGIN TRANSACTION;
UPDATE dbo.Posts
SET    ViewCount = ViewCount + 100
WHERE  OwnerUserId = 22656
  AND  Score       > 1000;
ROLLBACK;
T-SQL · Window C: live blocking monitor
USE StackOverflow;

SELECT
    s.session_id    AS blocked_spid,
    s.blocking_session_id,
    SUBSTRING(st.text, (r.statement_start_offset/2)+1,
        ((CASE WHEN r.statement_end_offset = -1
               THEN DATALENGTH(st.text)
               ELSE r.statement_end_offset
          END - r.statement_start_offset)/2)+1)  AS current_statement,
    r.wait_type,
    r.wait_time AS wait_ms
FROM   sys.dm_exec_requests  r
JOIN   sys.dm_exec_sessions  s  ON r.session_id = s.session_id
CROSS APPLY
       sys.dm_exec_sql_text(r.sql_handle) st
WHERE  s.blocking_session_id > 0;

-- With Optimized Locking: likely returns 0 rows.
-- Without: LCK_M_X waits with non-trivial wait_ms.

Verifying it's active and monitoring

Trust but verify. Here's your all-in-one health check:

T-SQL · health check
SELECT
    d.name                                                     AS database_name,
    d.compatibility_level,
    d.is_read_committed_snapshot_on                           AS rcsi_enabled,
    d.is_accelerated_database_recovery_on                     AS adr_enabled,
    DATABASEPROPERTYEX(d.name, 'IsOptimizedLockingOn')        AS optimized_locking,
    CASE
        WHEN d.compatibility_level              >= 160
         AND d.is_read_committed_snapshot_on       = 1
         AND d.is_accelerated_database_recovery_on = 1
        THEN 'All prerequisites met'
        ELSE 'Check individual columns above'
    END                                                        AS summary
FROM  sys.databases d
WHERE d.database_id > 4;  -- skip system databases

Keep an eye on the Persistent Version Store too. Since ADR is a prerequisite, long-running transactions can cause PVS to grow:

T-SQL · PVS monitoring
SELECT
    pvss.persistent_version_store_size_kb  / 1024.0 AS pvs_size_mb,
    pvss.online_index_version_store_size_kb / 1024.0 AS online_ix_ver_mb,
    pvss.current_aborted_transaction_count,
    pvss.aborted_version_cleaner_start_time,
    pvss.aborted_version_cleaner_end_time,
    dt.database_transaction_begin_time      AS oldest_active_txn
FROM   sys.dm_tran_persistent_version_store_stats pvss
LEFT JOIN sys.dm_tran_database_transactions dt
    ON  pvss.oldest_active_transaction_id = dt.transaction_id
    AND pvss.database_id = dt.database_id
WHERE pvss.database_id = DB_ID('StackOverflow');

Gotchas and things to know

Isolation level interactions

LAQ only fires under READ COMMITTED (the default) and READ COMMITTED SNAPSHOT. Under SERIALIZABLE or REPEATABLE READ, SQL Server still needs to hold locks to prevent phantom reads, so the LAQ optimization is skipped for those queries. Your serializable hints aren't broken; they just don't get the LAQ benefit.
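
You can watch the opt-out happen. This sketch reuses Demo 1: rerun the same no-op update under REPEATABLE READ and recount locks with the Window 2 query; KEY locks come back because LAQ is skipped:

T-SQL · isolation level opt-out (sketch)
USE StackOverflow;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;

BEGIN TRANSACTION;

UPDATE dbo.Posts
SET    Score = Score + 0
WHERE  OwnerUserId = 22656;

-- Rerun the Demo 1 Window 2 lock-count query from another session:
-- expect individual KEY locks again instead of a single XACT lock.

ROLLBACK;
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;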

Snapshot Isolation note: if your application uses explicit SET TRANSACTION ISOLATION LEVEL SNAPSHOT (not RCSI, actual SI), those transactions are stricter than read committed, so LAQ is skipped for them too. Writers still take X locks on qualifying rows, but TID locking still reduces lock memory significantly.

Hints that disable Optimized Locking

These table hints force the old locking behavior back on because they make explicit promises about lock acquisition order or scope:

  • UPDLOCK: explicitly requests update locks, incompatible with TID locking
  • HOLDLOCK: semantically equivalent to SERIALIZABLE, disables LAQ
  • ROWLOCK: explicit row lock request that bypasses the TID mechanism
  • PAGLOCK and TABLOCK: coarser granularity hints that also opt out

Legacy code that defensively scatters WITH (UPDLOCK, ROWLOCK) everywhere will continue to work correctly. It just won't benefit from Optimized Locking. Worth auditing to remove unnecessary hints.
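
A rough first pass at that audit is to search module definitions for the hints. This only catches stored procedures, triggers, views, and functions; ad-hoc SQL from the application won't show up here:

T-SQL · find lock hints in modules
SELECT
    OBJECT_SCHEMA_NAME(m.object_id) AS schema_name,
    OBJECT_NAME(m.object_id)        AS module_name
FROM  sys.sql_modules m
WHERE m.definition LIKE '%UPDLOCK%'
   OR m.definition LIKE '%HOLDLOCK%'
   OR m.definition LIKE '%ROWLOCK%'
   OR m.definition LIKE '%PAGLOCK%'
   OR m.definition LIKE '%TABLOCK%'
ORDER BY schema_name, module_name;

Treat the results as candidates for review, not automatic removal; some hints are there for a reason.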

Lock escalation still happens

Optimized Locking doesn't disable lock escalation. If your transaction modifies enough rows to hit the escalation threshold (default: 5,000 locks, or when lock memory exceeds roughly 40% of the buffer pool), the engine still escalates to a table lock. You'll hit that threshold much less often, but it can still happen on truly enormous batch operations.
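
Escalation is still controllable per table the same way it always was. A sketch of checking and changing the setting (AUTO enables partition-level escalation on partitioned tables; DISABLE turns it off except where the engine requires it):

T-SQL · lock escalation check
USE StackOverflow;

-- Current setting per table: TABLE (default), AUTO, or DISABLE
SELECT name, lock_escalation_desc
FROM   sys.tables
WHERE  name = 'Posts';

-- Example: escalate at the partition level instead of the table level
ALTER TABLE dbo.Posts SET (LOCK_ESCALATION = AUTO);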

ADR cleanup and PVS growth

Because ADR stores row versions in the PVS, long-running transactions prevent version cleanup and cause PVS to grow. Monitor sys.dm_tran_persistent_version_store_stats and keep an eye on your longest open transactions. The aborted_version_cleaner fields tell you if cleanup is falling behind.
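
If cleanup is falling behind and you've already dealt with the long-running transaction, you can nudge PVS cleanup manually with the documented ADR procedure:

T-SQL · manual PVS cleanup
-- Kill or commit the oldest open transaction first, then:
EXEC sys.sp_persistent_version_cleanup N'StackOverflow';

This is a remediation tool, not routine maintenance; if you're running it often, find the transactions holding versions alive.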

Deadlock detection changes subtly

With TID locking, some deadlock patterns that existed before will disappear. However, TID-on-TID deadlocks can still occur when writer A is waiting for writer B's TID to commit while writer B is waiting for writer A. The deadlock graph in SQL Server 2025 has been updated to show tidlock resource nodes, so your existing deadlock XE sessions will capture these correctly.
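
If you don't already capture deadlocks, note that the built-in system_health session records xml_deadlock_report out of the box. A minimal dedicated session, if you want deadlock graphs in their own file target:

T-SQL · deadlock capture session
-- system_health already captures xml_deadlock_report;
-- this dedicated session just keeps deadlocks in their own files.
CREATE EVENT SESSION [DeadlockCapture] ON SERVER
ADD EVENT sqlserver.xml_deadlock_report
ADD TARGET package0.event_file(SET filename = N'DeadlockCapture');

ALTER EVENT SESSION [DeadlockCapture] ON SERVER STATE = START;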

"Optimized Locking is the concurrency upgrade SQL Server has needed for a decade. Enable RCSI, turn on ADR, bump your compat level, and let the engine work for you."

Prerequisites: compat level 160+, RCSI = ON, ADR = ON, user database only