Identity Value Exhaustion in SQL Server (and how not to get paged at 2am)
If you've worked with identity columns in Microsoft SQL Server, you've probably treated them like oxygen: always there, never think about it.
…until one day inserts start failing with:
Arithmetic overflow error converting IDENTITY to data type int.
Cannot insert explicit value for identity column...
And suddenly your "infinite" IDs are very, very finite.
This is identity value exhaustion, and it's 100% preventable if you know what to watch for.
Let's walk through:
What it is
How it happens
How to detect it early
How to fix it safely
How to design so it never happens again
Demo scripts along the way
Casual, practical, copy-pasteable. Let's go.
Quick refresher: what's an IDENTITY?
Typical table:
CREATE TABLE dbo.Orders
(
OrderId INT IDENTITY(1,1) PRIMARY KEY,
OrderDate DATETIME2 NOT NULL DEFAULT sysdatetime()
);
IDENTITY(1,1) means:
start at 1
increment by 1
SQL Server auto-generates values
But here's the catch:
The data type limits the range, not SQL Server magic.
| Type | Max value |
|---|---|
| TINYINT | 255 |
| SMALLINT | 32,767 |
| INT | 2,147,483,647 |
| BIGINT | 9,223,372,036,854,775,807 |
If you hit the max… inserts fail. Hard stop. No wrap-around.
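You can see the hard stop for yourself with a throwaway table; a minimal sketch using TINYINT so the ceiling arrives quickly (the demo table name is made up):

```sql
-- Demo: exhaust a TINYINT identity on purpose
CREATE TABLE dbo.TinyDemo
(
    Id TINYINT IDENTITY(250, 1) PRIMARY KEY  -- seed close to the 255 ceiling
);

-- These succeed, generating 250 through 255
INSERT dbo.TinyDemo DEFAULT VALUES;
INSERT dbo.TinyDemo DEFAULT VALUES;
INSERT dbo.TinyDemo DEFAULT VALUES;
INSERT dbo.TinyDemo DEFAULT VALUES;
INSERT dbo.TinyDemo DEFAULT VALUES;
INSERT dbo.TinyDemo DEFAULT VALUES;

-- This one fails:
-- "Arithmetic overflow error converting IDENTITY to data type tinyint."
INSERT dbo.TinyDemo DEFAULT VALUES;

DROP TABLE dbo.TinyDemo;
```

Same failure mode as an exhausted INT, just a few billion rows sooner.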
How identity exhaustion happens in real life
It's rarely "we actually inserted 2 billion rows."
More common causes:
1. Frequent deletes
Burns numbers faster than you think.
2. Rollbacks
Identity values are not transactional. They're consumed even if the transaction rolls back.
BEGIN TRAN;
INSERT dbo.Orders DEFAULT VALUES;
ROLLBACK;
In this scenario, the identity value would still be consumed.
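You can prove it to yourself with IDENT_CURRENT against the Orders table from earlier:

```sql
-- Watch the identity value burn even though the insert never commits
SELECT IDENT_CURRENT('dbo.Orders') AS BeforeRollback;

BEGIN TRAN;
INSERT dbo.Orders DEFAULT VALUES;
ROLLBACK;

-- The current identity value has still moved on by one
SELECT IDENT_CURRENT('dbo.Orders') AS AfterRollback;
```

Retry loops that insert-then-rollback can chew through the range far faster than the committed row count suggests.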
3. High-volume logging/event tables
Millions per day → exhaustion sneaks up fast.
4. Bad initial design
Using INT when BIGINT should've been obvious.
Classic "we didn't think this app would still be alive in 10 years" problem.
How to check if you're in danger
Here's a handy script to see identity usage:
Check identity headroom
CREATE TABLE #IdentityExhaustion
(
DatabaseName sysname
,SchemaName sysname
,TableName sysname
,IdentityColumn sysname
,seed_value sql_variant
,increment_value sql_variant
,last_value sql_variant
,max_value sql_variant
,PercentUsed DECIMAL(10,7)
)
EXEC sp_MSforeachdb N'
USE [?]
DECLARE @PercentOfMax BIGINT = ''50''
INSERT INTO #IdentityExhaustion (DatabaseName,SchemaName,TableName,IdentityColumn,seed_value,increment_value,last_value,max_value,PercentUsed)
SELECT
DB_NAME() AS DatabaseName,
s.name AS SchemaName,
t.name AS TableName,
c.name AS IdentityColumn,
ic.seed_value,
ic.increment_value,
ic.last_value,
CASE ty.name
WHEN ''tinyint'' THEN 255
WHEN ''smallint'' THEN 32767
WHEN ''int'' THEN 2147483647
WHEN ''bigint'' THEN 9223372036854775807
END AS MaxValue,
100.0 * CONVERT(bigint,ic.last_value) /
CASE ty.name
WHEN ''tinyint'' THEN 255
WHEN ''smallint'' THEN 32767
WHEN ''int'' THEN 2147483647
WHEN ''bigint'' THEN 9223372036854775807
END AS PercentUsed
FROM sys.identity_columns ic
JOIN sys.tables t ON ic.object_id = t.object_id
JOIN sys.schemas s ON t.schema_id = s.schema_id
JOIN sys.columns c ON ic.object_id = c.object_id AND ic.column_id = c.column_id
JOIN sys.types ty ON c.user_type_id = ty.user_type_id
WHERE (ty.name = ''tinyint'' AND (CONVERT(bigint,ic.last_value)) >= (255 * (@PercentOfMax/100.0)))
OR (ty.name = ''smallint'' AND (CONVERT(bigint,ic.last_value)) >= (32767 * (@PercentOfMax/100.0)))
OR (ty.name = ''int'' AND (CONVERT(bigint,ic.last_value)) >= (2147483647 * (@PercentOfMax/100.0)))
OR (ty.name = ''bigint'' AND (CONVERT(bigint,ic.last_value)) >= (9223372036854775807 * (@PercentOfMax/100.0)))'
SELECT *
FROM #IdentityExhaustion
ORDER BY PercentUsed DESC
DROP TABLE #IdentityExhaustion
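If you just want to spot-check a single table rather than sweep every database, a lighter sketch of the same idea (this version assumes an INT identity; swap the divisor for your type's max):

```sql
-- Headroom for one table's identity column, current database only
SELECT  t.name                                               AS TableName,
        ic.name                                              AS IdentityColumn,
        CONVERT(bigint, ic.last_value)                       AS LastValue,
        100.0 * CONVERT(bigint, ic.last_value) / 2147483647  AS PercentUsedOfInt
FROM    sys.identity_columns ic
JOIN    sys.tables t ON ic.object_id = t.object_id
WHERE   t.name = 'Orders';   -- swap in your table name
```

Note that last_value is NULL for a table that has never had an insert, so no rows there means no danger yet.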
Fix options (from "quick patch" to "grown-up solution")
Option 1 – Change to BIGINT (best long-term fix)
This is the adult answer.
For larger tables you don't want to modify this data type in place: it'll take a while and it's a great way to blow out your transaction log. The better approach is to create a new copy of the table with the data types you're after and swap it with your live table.
Steps:
1. Create a copy of your table
You'll want an exact copy of your table structure, including any indexes and constraints, with the priority being the new data type on your identity column. Name this table something simple, like TableName_New.
It's worth noting that you'll have to add a suffix to all of your indexes and constraints too; we'll rename these afterwards.
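As a sketch, using the Orders table from earlier (the _New suffix and constraint name are just conventions for this walkthrough):

```sql
-- New copy of the table, identical apart from the widened identity column
CREATE TABLE dbo.Orders_New
(
    OrderId   BIGINT IDENTITY(1,1) NOT NULL,              -- was INT
    OrderDate DATETIME2 NOT NULL DEFAULT sysdatetime(),
    CONSTRAINT PK_Orders_New PRIMARY KEY CLUSTERED (OrderId)
);
```

Everything else about the table should match the original exactly, down to the fill factors and filtered indexes if you have them.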
2. Copy Data
Next, create a script that copies the data from the original table to the new one.
We're going to be inserting explicit values into an identity column, so you'll need to enable identity insert on the new table:
SET IDENTITY_INSERT dbo.TableName_New ON;
How large your batches are is completely up to you; it will depend on the hardware you're working with and your specific workloads.
For reference, I recently moved a table with 2 billion rows by doing batches of 1 million, that took about 19 hours to complete.
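A batched copy might look something like this; the batch size, range-based keyset logic, and table names are all assumptions you'd tune for your own workload:

```sql
SET IDENTITY_INSERT dbo.Orders_New ON;

DECLARE @BatchSize INT    = 1000000,
        @LastId    BIGINT = 0,
        @MaxId     BIGINT = (SELECT MAX(OrderId) FROM dbo.Orders);

WHILE @LastId < @MaxId
BEGIN
    -- Copy one contiguous range of IDs per iteration
    INSERT dbo.Orders_New (OrderId, OrderDate)
    SELECT OrderId, OrderDate
    FROM   dbo.Orders
    WHERE  OrderId >  @LastId
      AND  OrderId <= @LastId + @BatchSize;

    SET @LastId += @BatchSize;   -- gaps just mean smaller batches
END

SET IDENTITY_INSERT dbo.Orders_New OFF;
```

Keeping each batch in its own implicit transaction is what protects the log; one giant INSERT...SELECT would undo the whole point of the exercise.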
3. Insert, Update, Delete Trigger
As you're copying your data, you're going to want to keep it up to date too.
Write yourself a trigger that sends any data changes to the new table; this will save you a lot of pain and time at cutover.
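A minimal sketch on the Orders example (the trigger name is made up, and a real table needs its full column list; you'd also want to make sure it doesn't double up rows the batch copy already moved):

```sql
CREATE TRIGGER trg_Orders_Sync
ON dbo.Orders
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- Remove deleted rows, and old versions of updated rows
    DELETE n
    FROM   dbo.Orders_New n
    JOIN   deleted d ON d.OrderId = n.OrderId;

    -- Replay inserts, and the new versions of updated rows
    SET IDENTITY_INSERT dbo.Orders_New ON;
    INSERT dbo.Orders_New (OrderId, OrderDate)
    SELECT OrderId, OrderDate
    FROM   inserted;
    SET IDENTITY_INSERT dbo.Orders_New OFF;
END
```

Remember to drop the trigger once the cutover is done.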
4. Consider Foreign Keys
Run sp_fkeys against your existing table; if there are any foreign keys pointing at it, you're going to want to drop those and recreate them after the cutover.
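sp_fkeys takes the referenced table as its @pktable_name parameter, so for the Orders example:

```sql
-- List foreign keys that reference dbo.Orders
-- (these need dropping before the rename, and recreating after)
EXEC sp_fkeys @pktable_name = N'Orders', @pktable_owner = N'dbo';
```

Script out the DROP and ADD statements for each one before you touch anything.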
5. sp_rename
Once all of your data is transferred over and the trigger is keeping it maintained, we're good to switch our tables.
Use sp_rename to add a suffix to your live table's name, something like "_OLD".
Use sp_rename again to remove the suffix from your "_New" table.
You should now have the BIGINT version of your table in place of the original, with the same data.
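The swap itself is two quick renames; wrapping them in a transaction keeps the window where neither name exists as small as possible (names follow the Orders example):

```sql
BEGIN TRAN;
    EXEC sp_rename 'dbo.Orders',     'Orders_OLD';
    EXEC sp_rename 'dbo.Orders_New', 'Orders';
COMMIT;
```

Note that sp_rename wants the new name unqualified; the object stays in its original schema.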
6. Cleanup
Go ahead and turn identity insert off again; we don't want that hanging around causing problems.
SET IDENTITY_INSERT [TableName] OFF;
We'll also want to re-apply those foreign keys (and make sure we re-check them so they stay trusted).
Once we're happy that the switch has been successful and the new table contains the data we need, we're golden.
You can then drop your _OLD table whenever you feel confident doing so.
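Re-creating a foreign key WITH CHECK validates the existing rows and keeps the constraint trusted by the optimizer; a sketch with a hypothetical child table:

```sql
-- OrderLines is a made-up child table for illustration
ALTER TABLE dbo.OrderLines WITH CHECK
    ADD CONSTRAINT FK_OrderLines_Orders
    FOREIGN KEY (OrderId) REFERENCES dbo.Orders (OrderId);
```

Creating it WITH NOCHECK would be faster but leaves the constraint untrusted, which can hurt query plans later.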
Option 2 – Truncate Table
This is only an option if you don't care about the data inside your table.
Running a TRUNCATE TABLE command deallocates all data pages associated with the table; it nukes it, and you will lose all data.
However, it also resets your IDENTITY column to its original seed, which immediately solves our problem.
This is extremely useful for tables that are used in ETL processes, or otherwise for logging data that we don't need to keep long term.
Very situational, but a solution to this problem.
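On the Orders example:

```sql
-- Nukes all rows AND resets the identity back to its seed
TRUNCATE TABLE dbo.Orders;

-- Confirm the reseed: reports the seed value again
SELECT IDENT_CURRENT('dbo.Orders');
```

One caveat: TRUNCATE TABLE fails if the table is referenced by a foreign key, so check for those first.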
Option 3 – Reseed
Only works if you still have space left.
DBCC CHECKIDENT('dbo.Orders', RESEED, 1000000);
Useful when:
you bulk deleted rows
you want to jump ahead
BUT:
Does NOT increase max range.
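Before reseeding, check where you actually are; the NORESEED option just reports without changing anything:

```sql
-- Reports the current identity value and the current max column value,
-- without modifying either
DBCC CHECKIDENT('dbo.Orders', NORESEED);
```

Only reseed downwards into a range you are certain contains no surviving rows, or the next inserts will collide with existing primary key values.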
Option 4 – Negative reseed trick (emergency hack)
If you're desperate:
DBCC CHECKIDENT('dbo.Orders', RESEED, -1);
Now it goes negative:
-1, -2, -3...
Buys time.
But:
ugly
confusing
breaks assumptions
tech debt city
Use only if prod is on fire.
That should give you a few options for getting out of trouble.
If you're in this hole and in over your head, feel free to contact us for an assist.