Part Two – Commands and Mechanisms
An SQL text by Erland Sommarskog, SQL Server MVP. Latest revision: 2020-12-06.
Copyright applies to this text. See here for font conventions used in this article.
This article is the second in a series of three about error and transaction handling in SQL Server. The purpose of the first article was to give you a jumpstart on error handling without going into the very many details of error handling in SQL Server. On the other hand, the focus of this part is exactly that: a detailed description of the commands for error and transaction handling and what can happen when an error occurs during the execution of a stored procedure or similar. In Part Three, I use this knowledge as a base for a discussion on how we should implement error handling in SQL Server in different situations. In addition to the three main parts, there are three appendixes that discuss special topics: linked servers, the CLR and Service Broker respectively.
Part One was intended for everyone, including the very inexperienced reader. In contrast, Parts Two and Three and the appendixes assume that you have some general programming experience, although not necessarily with SQL Server. Parts Two and Three are also considerably longer. If you arrived directly at this article from elsewhere on the web, I recommend that you read Part One first, since the examples in this article make use of error_handler_sp which I introduce in the jumpstart article.
In Part One, we learnt that SQL Server has TRY-CATCH, and given how things were in SQL 2000, TRY-CATCH is a great improvement. However, once you peel below the surface you find a confusing world of inconsistencies and limitations, which becomes even more bewildering when you look at what happens if the error occurs inside a transaction. Every once in a while, I get the feeling that SQL Server is intentionally designed to be as confusing as possible. When they plan for a new release they ask each other what can we do this time to confuse the users? Sometimes they run a little out of ideas, but then someone says Let's do something with error handling! On a more serious note, there is a lot of legacy. What Microsoft inherited from Sybase was nothing Sybase should be proud of. But Microsoft cannot only blame Sybase; rather they have made the mess worse and worse through the years. And changing the behaviour to be more consistent is difficult, because there is a distinct risk that it would break existing applications.
This article aims at guiding the reader through this maze. In the first chapter after this introduction I look at the components of an error message. In chapter three, I cover the basics for transactions in SQL Server and in chapter four I cover statements that one way or another are related to error handling. Then we come to chapter five which is the centrepiece of this article: I attempt to classify errors from the different actions SQL Server may take in case of an error. In chapter six I discuss special contexts where the rules are altered one way or another. The article closes with three shorter chapters: one on additional commands for error and transaction control, one that covers transaction and error handling in natively compiled stored procedures in SQL 2014 and finally there is a chapter about various odd situations that fall outside of the other patterns.
A word of caution: don't feel bad if your head starts spinning as you work your way through this confusing subject. And if you at some point run for the door screaming Oracle, come back, everything is forgiven, I can hardly blame you.
Table of Contents
Index of All Error-Handling Articles
The Anatomy of an Error Message
More Details on the Severity Levels
Transaction Basics in SQL Server
Nested Transactions and @@trancount
Error-related Statements and Functions
RETURN and the Return Values from Stored Procedures
Batch Abortion without Rollback
BULK INSERT and OPENROWSET (BULK)
Additional Error and Transaction Control
Having XACT_ABORT ON by Default?
Additional Syntax for BEGIN/COMMIT/ROLLBACK TRANSACTION
Handling Duplicate Keys in Indexes
Natively Compiled Stored Procedures
Nesting Natively Compiled Stored Procedures
Nesting Hekaton Procedures in Outer Transactions
Catching a Missing Table in the Same Scope
Here follows a list of all articles in this series:
Part One – Jumpstart Error Handling.
Part Two – Commands and Mechanisms. (This article)
Appendix 1 – Linked Servers. (Extends Part Two.)
Appendix 2 – CLR. (Extends both Parts Two and Three.)
Appendix 3 – Service Broker. (Extends Part Three.)
All the articles above are for SQL 2005 and later. For those who still are on SQL 2000, there are two older articles:
Error Handling in SQL Server 2000 – a Background.
Implementing Error Handling with Stored Procedures in SQL 2000.
I've decided to dedicate this suite of articles to the late Frank Zappa, composer, guitarist, lyricist and band leader extraordinaire, who left us far too early. These articles and appendixes have many code examples, and rather than using the common sample names like foo, bar, nisse, tbl etc, I have in many examples used titles and lines from Frank Zappa's vast production. If you are a Zappa fan, you may appreciate it. And if you are not, well, at least you know where those funny names are coming from.
A message returned from SQL Server through the TDS protocol contains seven components. Here is an example of such a message as it appears in SQL Server Management Studio (SSMS):
Msg 547, Level 16, State 0, Procedure delete_order, Line 2
The DELETE statement conflicted with the REFERENCE constraint "FK_Order_Details_Orders". The conflict occurred in database "Northwind", table "dbo.Order Details", column 'OrderID'.
Server – The server the error originated from. SSMS does not display this item, but other tools such as SQLCMD do.
Error number – The error number for the message, 547 in this example. Error numbers raised by SQL Server are in the range 1 to 49999, and you can find these error numbers in the catalog view sys.messages. You can define your own errors, with numbers starting from 50001. I will return to this, when I cover the RAISERROR statement. There are also some "false" error numbers that I discuss below.
Severity level – This value, 16 in this example, indicates how serious the error is in its own peculiar way. Here is the short story: levels 0-9 are purely informational messages. Level 10 never occurs. Levels 11-16 are regular SQL errors like constraint violations, permission errors etc. Levels 17-25 are various resource and corruption errors. If the severity is ≥ 20, the connection is terminated. I go into a little more detail in the following section.
State – A value between 0 and 255 that may give more information about the underlying cause for the error. An important example is message 18456, Login failed for user '%s'. In the message displayed to the user, the state is always 1, but in the SQL Server error log you can find other state values, which explain why the login failed. SQL Server MVP Aaron Bertrand has a fairly elaborate blog post on these values. There is also a topic in Books Online that discusses the state numbers for this message. However, for most other SQL Server errors, state numbers are usually not documented, so it is not that often you have use for them. But as we shall see later, you can use them for your own error messages as well.
Procedure – If the error occurred in a stored procedure, trigger or user-defined function, the name appears here. If the error is from an ad-hoc batch or from dynamic SQL, this component is not included in the error message. It can pay off to give attention to the procedure name. When running an INSERT or an UPDATE statement you may get an error message which does not seem to fit at all. But if you look at the procedure name, you may find that the error message comes from a trigger.
Line – On which line in the procedure or batch the error occurred. If the line number is 0, this indicates that something went wrong in the call to the procedure (or when the procedure exited). The line number typically points to the line where the failing statement starts. For a compilation error, it can be a line within the statement where SQL Server thinks you went wrong. Beware that recent versions of SSMS may modify the line number, see the section Line Numbers in SSMS below.
Message text – The actual text of the error message. That is, the text in red starting with The DELETE statement... in the example above. The text The statement has been terminated is a separate message, as discussed below.
The severity level is a somewhat confusing matter. The topic Database Engine Error Severities in Books Online attempts to document the severity levels, but I'm not sure things are as exact as Books Online would have it. Here is my summary of what I think is relevant about severity levels:
0 | An informational message. Query tools such as SSMS typically only print the message text itself, but leave out all the other components. One such example is seen above: The sentence The statement has been terminated appears in black, because it is a message of its own, message 3621, which for some reason accompanies a number of other error messages. |
1-9 | These levels, too, are informational. But in difference to level 0, SSMS prints out the message number, severity level and state for such messages. (Procedure and line are still withheld.) As far as I know, SQL Server never produces these levels on its own, but you can produce them yourself with RAISERROR. The SQL Server Agent dialog New Alert in SSMS has a drop-down which gives explanations to some of these severity levels, but I don't think they are very accurate. |
10 | Here is an odd one: this level can never occur. Or more precisely, if a message with level 10 is raised, the level is for "compatibility reasons" converted to 0, and thus SSMS and other tools will only print the text. |
11-16 | These are normal execution errors, or as Books Online puts it: errors that can be corrected by the user. In this range it's not that the higher the level, the more critical the error, but rather it is some general classification. The most critical error in this group, in my opinion, is deadlock, and that is level 13. Most regular run-time errors are level 16, while syntax errors tend to be level 15 and permission errors are usually level 14. It is certainly not entirely consistent. For instance, error 2627, violation of a PK constraint, is level 14 despite that it has nothing to do with permissions. |
17-19 | Levels 17-19 are various sorts of resource errors, like running out of disk space or memory. |
20-25 | These levels reflect various internal errors in SQL Server and protocol errors in client communication, as well as error messages about corruption due to hardware malfunction. An error with severity ≥ 20 causes SQL Server to terminate the connection. |
Messages with severity from 11 to 19 are trappable with TRY-CATCH, whereas messages with severity 0 to 9 are always passed to the client, and you cannot access them in SQL Server. Nor can you trap errors with severity ≥ 20, since when this happens you are no longer alive.
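Here is a small sketch of my own that demonstrates these boundaries (the messages are made up):

BEGIN TRY
   RAISERROR('Severity 9 is informational', 9, 1)   -- passed to the client, not caught
   RAISERROR('Severity 16 is an error', 16, 1)      -- transfers execution to CATCH
   PRINT 'This line is never reached'
END TRY
BEGIN CATCH
   PRINT 'Caught: ' + error_message()
END CATCH

When you run this, the severity-9 message appears in the output as-is, while the severity-16 message surfaces only through the PRINT in the CATCH block.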
There are some "false" error messages. That is, messages that do not come from SQL Server, but where the API or the tool still present them in that format. One you may see in SSMS is this:
Msg 0, Level 11, State 0, Line 0
A severe error occurred on the current command. The results, if any, should be discarded.
The level for this error is either 11 or 20. Sometimes you get two messages, one with each level. The error indicates that there was an internal error somewhere, be that in SQL Server or in SSMS (or more precisely in SqlClient). When you see this message, the connection has typically been closed. My theory is that internal errors in SQL Server result in level 20, while an exception in SSMS/SqlClient only produces a level 11 message, but this is nothing I have been able to confirm. I have found when working with this series of articles that even if the exception occurs in the client API, SQL Server may still be the culprit by sending incorrect TDS packets across the wire.
You may also see errors without a header line. Here is one example that occurred with older versions of SQL Server Management Studio:
DECLARE @s sql_variant = sysdatetimeoffset()
SELECT @s
When I run this in SSMS 2014 or older versions, I see this:
(1 row(s) affected)
An error occurred while executing batch. Error message is: Input string was not in a correct format.
This is a bug in SqlClient which is not able to cope with datetimeoffset data in an sql_variant value. You don't get this particular error in SQLCMD or in the old Query Analyzer. Nor does it appear in SSMS 17.9.1, the current version as I updated this section.
These errors are not particularly common, but it is good to be able to recognise them when you encounter them, so you don't waste your time trying to understand what is wrong with your SQL. When it happens, try SQLCMD or OSQL instead and see how they react. If they too are spooky, the issue is down in SQL Server, but if they play well, the issue is in SSMS/SqlClient.
Up to SQL 2012, SQL Server Management Studio reported the line number from SQL Server as-is, but starting with SSMS 2014, SSMS modifies the line number so that it refers to the line number in the query window. Unfortunately, this led to a major deficiency in SSMS 2014 which has been addressed in SSMS 2016.
Note: "SSMS 2016" is a somewhat inaccurate moniker. Starting with SQL 2016, SSMS is no longer part of the SQL Server distribution as such, but it leads its own life with monthly updates. SSMS is now a free download and you can run SSMS against all versions from SQL 2000 and up (save for connection to older versions of Integration Services.)
To see this difference, consider this script:
PRINT 'This is the first batch'
go
PRINT 'This is the second batch'
go
PRINT 'This is the third batch' 'and it has a syntax error'
In all query tools up to SSMS 2012, the output is:
This is the first batch
This is the second batch
Msg 102, Level 15, State 1, Line 1
Incorrect syntax near 'and it has a syntax error'.
The error is reported to be on line 1, because that is how SQL Server sees the script. (Recall that go is a batch separator that only has a meaning for the query tool; SQL Server never sees it.) Starting with SSMS 2014, the output is:
This is the first batch
This is the second batch
Msg 102, Level 15, State 1, Line 5
Incorrect syntax near 'and it has a syntax error'.
The line number is now reported with regards to the script, which certainly can be helpful at times. (But since you always can double-click an error message to find where the error is, I don't find this extremely important.)
In their enthusiasm, Microsoft overlooked the possibility that the error may come from a stored procedure. Consider this script:
CREATE TABLE Bobby (Brown tinyint NOT NULL)
go
CREATE PROCEDURE CityOfTinyLites @a int AS
   INSERT Bobby(Brown) VALUES (@a)
go
EXEC CityOfTinyLites 800
The output in SSMS 2014 is:
Msg 220, Level 16, State 2, Procedure CityOfTinyLites, Line 7
Arithmetic overflow error for data type tinyint, value = 800.
This is plain wrong, since there is no line 7 in that procedure. With SSMS 2014, to work out where an error occurred inside a stored procedure, you need to know on which line the batch that includes the EXEC statement starts, subtract that line number from the line number in the error message and add one to get the line number in the stored procedure. You cannot say that this is very convenient. Thankfully, SSMS 2016 addresses this, and you get this output:
Msg 220, Level 16, State 2, Procedure CityOfTinyLites, Line 2 [Batch Start Line 5]
Arithmetic overflow error for data type tinyint, value = 800.
The statement has been terminated.
That is, when there is a procedure name in the error message, SSMS 2016 reports the unmodified line number in the procedure, and it also adds information about where in the window the batch that includes the procedure call started.
Note that there still is a problem if the error occurs in a batch of dynamic SQL. Consider this silly script:
CREATE PROCEDURE JonesCrusher AS
   EXEC('Rat Tomago')
go
-- This is line 4 and there is a comment.
EXEC JonesCrusher
The output with SSMS 2014 and later is:
Msg 2812, Level 16, State 62, Line 4
Could not find stored procedure 'Rat'.
Because there is no way for SSMS to tell whether the error occurred in a statement in the query window or in a batch of dynamic SQL (SQL Server does not provide this information), it assumes the former and reports the error message to be on the wrong line. To be fair, error messages from dynamic SQL that occurs in a procedure three levels deep have always been problematic, because you don't have any context.
I'd like to point out that what I have said here applies to SSMS only. SQLCMD and other command-line tools do not modify the line number.
Note: With the exception of this section, the output for all examples in this series of articles has been produced with SSMS 2012. Thus, if you are using a later version of SSMS, you may see differences with regards to the line number.
An important property in database systems is the ACID principle. Transactions should be
Atomic – A transaction should either be performed in whole or not at all.
Consistent – A transaction should take the database from one consistent state to another with regards to constraints and other integrity checks like triggers.
Isolated – The results of an on-going transaction should not be visible to other transactions, and depending on isolation level, the transaction should not see changes performed by other transactions.
Durable – Once completed, the result of the transaction should be persisted and survive server crashes.
A full discussion of how all these four properties are implemented in SQL Server is far beyond the scope of this article. Our focus is error handling, but as part of error handling we must understand how we can uphold the ACID principle in case of an error. The most important property is the first one, atomicity. That is, in the vast majority of cases our error handling should make sure that in case of an error the transaction performs nothing at all.
Note: For a longer discussion about ACID as such, see this article in Wikipedia.
In this chapter, I will look at the basic concepts and commands for transaction handling in SQL Server. I'm saving more exotic variations for a later chapter.
In an ANSI-compliant database, the normal procedure is to start a transaction, perform one or more updates and then commit the transaction whereupon it becomes durable. If you perform an operation and no explicit transaction has been started, the database engine will start an implicit transaction that you need to commit explicitly.
This is not the normal procedure in SQL Server. Instead, SQL Server employs auto-commit. That is, if you perform a modifying operation, SQL Server starts a system transaction which it commits when the statement completes successfully. This applies to DML statements – INSERT, UPDATE, DELETE and MERGE – as well as DDL statements such as CREATE TABLE etc. For instance, if you attempt to insert 250 rows into a table in a single statement, and one of these rows violates a constraint, the end result is that no rows are inserted. And likewise if you try to create a table, but some part of the CREATE TABLE statement is not valid, for instance there is a FOREIGN KEY constraint referencing a non-existing table, the table will not be created.
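Here is a quick sketch of my own of the INSERT case (the table name is invented, and the multi-row VALUES syntax requires SQL 2008 or later):

CREATE TABLE FreakOut (a int NOT NULL CHECK (a > 0))
INSERT FreakOut (a) VALUES (1), (2), (-3)   -- the last row violates the CHECK constraint
SELECT COUNT(*) AS cnt FROM FreakOut        -- returns 0; the INSERT was undone in full
go
DROP TABLE FreakOut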
A system transaction can encompass more operations than just the statement itself. One example is an update performed through a cascading constraint. More important for you as a programmer are triggers – triggers always execute in the context of the statement that fired the trigger. This is very logical, since the purpose of a trigger is to uphold the C in ACID, consistency. That is, the trigger is supposed to perform validations or cascading updates that cannot be implemented by constraints. A third example is a stored procedure invoked through INSERT-EXEC; it executes in the context of a transaction defined by the INSERT statement.
Here I have talked about modifying operations, but what about SELECT? Yes, SELECT statements start system transactions too, so that SQL Server can uphold the Isolation property of ACID. But these are invisible to you, so this is nothing that we need to occupy us with in this article. However, the mention of SELECT leads us to SELECT INTO which is a poor name for CREATE TABLE + INSERT in a single statement. SELECT INTO is a little special in the context of auto-commit: it consists of two system transactions. The first transaction creates the table, while the second transaction populates it. If the second part fails, the table will still be there, although it is empty. (There is an exception that I became aware of only recently. If the FROM clause of the SELECT INTO statement includes a table on a linked server, SELECT INTO is a single transaction.)
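Here is a sketch of my own that exposes the two transactions (the names are invented, and I assume the default setting XACT_ABORT OFF):

CREATE TABLE WeaselsRipped (a varchar(10) NOT NULL)
INSERT WeaselsRipped (a) VALUES ('1')
INSERT WeaselsRipped (a) VALUES ('ABC')
go
SELECT cast(a AS int) AS a INTO MyFlesh FROM WeaselsRipped   -- fails on 'ABC'
go
SELECT COUNT(*) AS cnt FROM MyFlesh   -- the table is there, but it is empty
go
DROP TABLE WeaselsRipped, MyFlesh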
A user-defined transaction is explicitly started with BEGIN TRANSACTION and committed with COMMIT TRANSACTION. You can nest BEGIN and COMMIT; however, this does not define any inner transactions but only increments and decrements a counter, which we will look at in the next section. For simplicity, I am overlooking transaction-nesting in this section.
As long as the transaction is open, the updates performed by the transaction are invisible to other processes, unless they use the isolation level READ UNCOMMITTED (a.k.a. NOLOCK). Readers that use other isolation levels that attempt to access the modified rows will be blocked, or, if they use some form of snapshot isolation, see an older version of the row.
When you issue the command COMMIT TRANSACTION and this command has completed, the transaction is now durable. (Physically, the pages on disk may not yet have the changes, but there is enough information in the transaction log for SQL Server to roll forward the transaction after a crash. The details about this are beyond the scope for this article.) All other processes can now see the changes without using NOLOCK.
Just like a system transaction, a user-defined transaction can be rolled back. This can happen for three different reasons: you explicitly issue a ROLLBACK TRANSACTION statement; SQL Server rolls back the transaction as its reaction to an error; or the connection is closed (or killed) while the transaction is still open.
If a transaction is rolled back, SQL Server undoes all changes performed in the transaction. This includes DML statements like INSERT, UPDATE and DELETE, as well as metadata operations like creating tables, granting permissions etc. However, it does not include changes to variables, neither regular scalar variables nor table variables. Here is an example:
SET NOCOUNT ON
CREATE TABLE Hot (Rats int NOT NULL)
CREATE TABLE #Willie (ThePimp int NOT NULL)
DECLARE @Peaches TABLE (EnRegalia int NOT NULL)
DECLARE @HotZitz int

BEGIN TRANSACTION
CREATE TABLE MrGreenGenes (SonOf int NOT NULL)
INSERT Hot(Rats) VALUES (6)
INSERT #Willie(ThePimp) VALUES (2)
INSERT @Peaches (EnRegalia) VALUES (1)
SELECT @HotZitz = 9645
ROLLBACK TRANSACTION

SELECT Rats FROM Hot
SELECT ThePimp FROM #Willie
SELECT EnRegalia FROM @Peaches
SELECT @HotZitz AS [@HotZitz]
SELECT SonOf FROM MrGreenGenes
The output is:
Rats
-----------
ThePimp
-----------
EnRegalia
-----------
1
@HotZitz
-----------
9645
Msg 208, Level 16, State 1, Line 19
Invalid object name 'MrGreenGenes'.
The data inserted into the permanent table Hot is missing after the rollback, and so is the row inserted into the temp table #Willie, while the table MrGreenGenes, created inside the transaction, is no longer there at all. On the other hand, the row inserted into the table variable @Peaches is still there and the variable @HotZitz retains the value 9645. Since variables are local to a process and not visible to anyone else, this is not a serious breach against the principle of atomicity. While the behaviour for table variables may be somewhat unexpected, there are situations where this can be useful, as it permits you to collect data for debug or logging purposes that will survive if the transaction is rolled back. Note that when it comes to individual statements, they are atomic also for variables. That is, an INSERT statement that attempts to insert 100 rows into a table variable will insert 100 rows or zero rows, but never 21 or 56 rows.
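Here is a small sketch of my own of that debug/logging pattern (the names and messages are invented):

DECLARE @log TABLE (msg nvarchar(2048) NOT NULL)
BEGIN TRANSACTION
-- ... perform updates ...
INSERT @log (msg) VALUES ('Reached step 1')
-- ... more updates, and then something goes wrong ...
ROLLBACK TRANSACTION
-- The rows in @log survive the rollback, so at this point they can be
-- written to a permanent log table or returned to the client.
SELECT msg FROM @log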
To my knowing, there is one single operation in SQL Server that is unaffected by rollbacks and that is update of statistics, both auto-update or explicitly performed with UPDATE STATISTICS, which commits independently. Sometimes you may wish to perform an operation outside the transaction and commit it directly, just like UPDATE STATISTICS does. There is nothing built-in for this in SQL Server, but you can emulate this with a loopback connection, something I make use of in SqlEventLog that I present in Part Three.
What happens if an error occurs during a transaction? As I said, according to ACID, transactions should be atomic. That is, they should either be committed in their entirety or be rolled back in their entirety. It should never happen that the transaction is persisted to 30 % or whatever. That is the theory. SQL Server violates this principle and permits you to do exactly this. To prevent this from happening, you need to implement your error handling properly. What you can rely on is that even if you are in a user-defined transaction, a single statement is always carried out in its entirety or rolled back. Look at this example:
SET NOCOUNT ON
SET XACT_ABORT OFF

CREATE TABLE LittleUmbrellas (a int NOT NULL CHECK (a > 0))
CREATE TABLE GumboVariations (a int NOT NULL CHECK (a > 0))
CREATE TABLE ItMustBeACamel  (a int NOT NULL CHECK (a > 0))

BEGIN TRANSACTION
INSERT LittleUmbrellas (a)
   SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3
INSERT GumboVariations (a)
   SELECT 11 UNION ALL SELECT -12 UNION ALL SELECT 13
INSERT ItMustBeACamel (a)
   SELECT 21 UNION ALL SELECT 22 UNION ALL SELECT 23
COMMIT TRANSACTION
go
SELECT a FROM LittleUmbrellas
SELECT a FROM GumboVariations
SELECT a FROM ItMustBeACamel
go
DROP TABLE LittleUmbrellas, GumboVariations, ItMustBeACamel
You may note that the values inserted into the first and the last table are all legal, whereas the data for GumboVariations includes a row that violates the constraint. Thus, this script will produce an error, and if we believe in the gospel of transactions, we expect all three tables to be empty after the transaction. However, this is the output:
Msg 547, Level 16, State 0, Line 11
The INSERT statement conflicted with the CHECK constraint "CK__GumboVariatio__a__31EC6D26". The conflict occurred in database "tempdb", table "dbo.GumboVariations", column 'a'.
The statement has been terminated.
a
-----------
1
2
3
a
-----------
a
-----------
21
22
23
The transaction was performed to 67 %! Atomicity is applied to the individual statements, but not to the entire transaction. An error occurred, but SQL Server did not roll back the transaction, instead it continued with the next statement as if nothing had happened. This certainly is a gotcha that confuses many new users of SQL Server, and questions about this reappear every now and then in SQL Server forums. As one example, the reader may be amused by peeking at this very old thread from comp.databases.sybase.
If you change SET XACT_ABORT OFF (which is the default setting) to SET XACT_ABORT ON, you will find that all tables are empty after the error, and this is why I recommend you to always run with SET XACT_ABORT ON. With this setting, most errors cause the transaction to be rolled back. Exactly when this does not happen is something we will look at in the chapter Classification of Errors.
There are a few commands that cannot be performed inside a user-defined transaction, typically because they perform operations in the file system that are not transactional: ALTER DATABASE, ALTER FULLTEXT CATALOG, ALTER FULLTEXT INDEX, BACKUP, CREATE DATABASE, CREATE FULLTEXT CATALOG, CREATE FULLTEXT INDEX, DROP DATABASE, DROP FULLTEXT CATALOG, DROP FULLTEXT INDEX, KILL, RECONFIGURE and RESTORE. Some DBCC commands also belong to this category. In SQL 2014, DDL statements related to memory-optimised tables and natively compiled stored procedures also fall into this group.
If a transaction is already in progress and you issue BEGIN TRANSACTION, SQL Server increments a transaction counter, exposed by the function @@trancount. (Originally, functions starting with @@ were known as "global variables", whence the syntax.) If you issue COMMIT TRANSACTION at this point, @@trancount is decremented, but that is all that happens. That is, COMMIT TRANSACTION only has a real effect if @@trancount is 1 prior to the execution of the statement. There is certainly an advantage with this arrangement. Say that you have a stored procedure that starts a transaction and then calls one or more other stored procedures. They in their turn also issue BEGIN TRANSACTION, so that when they are called directly, they can perform a piece of atomic work. When they are called as part of a greater plot, their local BEGIN and COMMIT TRANSACTION have no real effect, but thanks to transaction nesting in this sense, you don't need any conditional logic to make sure that you only issue BEGIN and COMMIT TRANSACTION if there is no active transaction.
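A small demonstration of my own of how the counter moves:

SELECT @@trancount AS [Step 0]   -- 0
BEGIN TRANSACTION
SELECT @@trancount AS [Step 1]   -- 1
BEGIN TRANSACTION
SELECT @@trancount AS [Step 2]   -- 2
COMMIT TRANSACTION               -- only decrements the counter
SELECT @@trancount AS [Step 3]   -- 1
COMMIT TRANSACTION               -- this is the commit that takes real effect
SELECT @@trancount AS [Step 4]   -- 0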
With ROLLBACK TRANSACTION it is another matter. No matter the value of @@trancount, ROLLBACK always pulls the rug from under the transaction and rolls it all back. Exactly what implications this has for our error and transaction handling we ignore for now, but we will return to it later.
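A companion sketch to the one above, showing the asymmetry between COMMIT and ROLLBACK:

BEGIN TRANSACTION
BEGIN TRANSACTION
SELECT @@trancount AS [before ROLLBACK]   -- 2
ROLLBACK TRANSACTION
SELECT @@trancount AS [after ROLLBACK]    -- 0; the entire transaction is gone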
@@trancount does not only count user-defined transactions, but also system transactions. When you are in a trigger, @@trancount will always be at least 1, even if you never issued BEGIN TRANSACTION. If there was a user transaction active when the trigger was fired, @@trancount will still be 1, not 2:
CREATE TABLE AybeSea (a int NOT NULL)
go
CREATE TRIGGER AybeSea_tri ON AybeSea AFTER INSERT AS
   SELECT @@trancount AS [@@trancount]
go
INSERT AybeSea (a) VALUES (1330)
go
BEGIN TRANSACTION
INSERT AybeSea (a) VALUES (1658)
COMMIT TRANSACTION
The output is:
@@trancount
-----------
1
@@trancount
-----------
1
The same thing is true if you are inside a stored procedure invoked through INSERT-EXEC.
Curiously, there is one case where @@trancount can increase from 0 to 2. If you do things like INSERT tbl(a) VALUES (@@trancount), the inserted value will always be ≥ 2.
When you exit a stored procedure, and @@trancount has a different value from when the procedure started executing, SQL Server raises error 266: Transaction count after EXECUTE indicates a mismatching number of BEGIN and COMMIT statements. Previous count = %d, current count = %d. This error is not raised if the procedure is called from a trigger. (Please don't ask why!)
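Here is a minimal sketch of my own that provokes error 266 (the procedure name is invented):

CREATE PROCEDURE SleepDirt AS
   BEGIN TRANSACTION
   -- Oops, the procedure exits without COMMIT or ROLLBACK.
go
EXEC SleepDirt
-- Msg 266 is raised here, with Previous count = 0, current count = 1.
SELECT @@trancount AS [@@trancount]   -- still 1; error 266 does not roll back
ROLLBACK TRANSACTION                  -- clean up the orphaned transaction
go
DROP PROCEDURE SleepDirt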
TransactionScope is a class in .NET that is popular with developers. Other environments may have similar classes.
Using this class, you can start a transaction directly from the client. (Well, typically that would be the data layer, but since I am a database guy, I tend to consider anything outside SQL Server as the "client", even if there is a multi-tier architecture.) I am not going to spend a lot of detail on TransactionScope, as I have not worked much with it myself, and I am largely silent on it in this series of articles. My assumption is that transactions started through TransactionScope largely work like transactions started with BEGIN TRANSACTION, but I need to add the caveat that I have not tested each and every case.
There are a few things I would like to highlight:
In this chapter I will discuss various features in T‑SQL related to error handling.
I introduced TRY-CATCH in Part One, but there are a few more things to say.
TRY-CATCH catches errors. More precisely this means, that TRY-CATCH catches messages with a severity between 11 and 19. Messages with lower severity are not considered to be errors. And since messages with a severity ≥ 20 terminate the connection, they too are uncatchable. And then there is the big gotcha that I highlighted already in Part One: errors due to deferred name resolution cannot be caught in the scope they occur.
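As a reminder of what that gotcha looks like in practice, here is a minimal sketch of my own (the names are invented):

CREATE PROCEDURE DukeOfPrunes AS
   BEGIN TRY
      SELECT * FROM NoSuchTableAnywhere
   END TRY
   BEGIN CATCH
      PRINT 'This is never printed'
   END CATCH
go
EXEC DukeOfPrunes
-- Msg 208, Invalid object name 'NoSuchTableAnywhere', goes straight to
-- the caller; the CATCH block in the same scope never fires.
go
DROP PROCEDURE DukeOfPrunes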
If there are informational messages emitted together with the error, such as the message The statement has been terminated that we saw earlier, and the error is caught by TRY-CATCH, the informational message is dropped on the floor entirely. For the rather superfluous message The statement has been terminated this is not a big deal, but there may be other situations where this can be problematic.
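A small sketch of my own to illustrate (the table name is invented):

CREATE TABLE ZombyWoof (a int NOT NULL CHECK (a > 0))
go
BEGIN TRY
   INSERT ZombyWoof (a) VALUES (-1)
END TRY
BEGIN CATCH
   -- Only the constraint-violation text is available here; the accompanying
   -- message "The statement has been terminated" is gone.
   PRINT error_message()
END CATCH
go
DROP TABLE ZombyWoof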
How does TRY-CATCH work with transactions? If you have a user-defined transaction in progress when an error is raised, it will still be active when you enter the CATCH block. However, the transaction may be doomed. If the transaction is doomed, there is only one thing you can do with it: roll it back. When a transaction is doomed, you cannot write to any tables or perform any other operation that requires logging. If you exit the CATCH block without rolling back a doomed transaction, SQL Server raises message 3998, Uncommittable transaction is detected at the end of the batch. The transaction is rolled back. If the transaction is not doomed, you can continue to work as you like, try alternate strategies or whatever and later commit.
What causes a transaction to be doomed or not is a very messy area that I will come back to in the chapter Classification of Errors. But to not give you false hopes: if you follow the recommendation and run with SET XACT_ABORT ON, the transaction will always be doomed.
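This is also why the CATCH handler from Part One can be as simple as this (recapitulated here as a sketch; error_handler_sp is the procedure I introduced in the jumpstart article):

BEGIN CATCH
   IF @@trancount > 0 ROLLBACK TRANSACTION
   EXEC error_handler_sp
   RETURN 55555
END CATCH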
There is one thing to observe with OUTPUT parameters. Say that in an outer procedure you are inside a TRY block and you call an inner procedure that has an output parameter. This inner procedure first assigns a value to the output parameter and later it raises an error that transfers execution to the CATCH block in the outer procedure. If the outer procedure now accesses the variable passed for the output parameter, the outer procedure will not see the value assigned in the inner procedure. The semantics for OUTPUT in SQL Server is copy-in/copy-out, and since execution is transferred to the caller's CATCH block directly, the copy-out part never happens. Here is an example to illustrate:
CREATE PROCEDURE BigSwifty @param int OUTPUT AS
   SELECT @param = 4711
   RAISERROR('Out!', 16, 1)
go
CREATE PROCEDURE WakaJawaka AS
   DECLARE @a int = 845
   EXEC BigSwifty @a OUTPUT
   SELECT @a AS [@a in WakaJawaka]
go
CREATE PROCEDURE YourMouth AS
   DECLARE @a int = 845
   BEGIN TRY
      EXEC BigSwifty @a OUTPUT
   END TRY
   BEGIN CATCH
      PRINT error_message()
      SELECT @a AS [@a in YourMouth]
   END CATCH
go
EXEC WakaJawaka
EXEC YourMouth
go
DROP PROCEDURE BigSwifty, WakaJawaka, YourMouth
The output is:
Msg 50000, Level 16, State 1, Procedure BigSwifty, Line 3
Out!
@a in WakaJawaka
----------------
4711
Out!
@a in YourMouth
---------------
845
When we call WakaJawaka, @param is set to 4711 in BigSwifty. Despite the RAISERROR statement, BigSwifty exits the normal way, since there is no CATCH block in WakaJawaka and 4711 is copied out to @a. In contrast, YourMouth has a CATCH block and when it calls BigSwifty, the RAISERROR statement immediately transfers execution to the CATCH block in YourMouth. The value in @param is never copied back to @a, which retains the value of 845.
If you are in a CATCH block, you may want to know whether your transaction is doomed (i.e., whether it is committable or not). For this purpose, SQL Server provides the system function xact_state() which may return -1, 0 or 1 with the following meanings:
0 | @@trancount = 0 and there is no open transaction. |
1 | @@trancount > 0 and the transaction is committable. |
-1 | @@trancount > 0 and the transaction is doomed, that is, not committable. |
This seems simple enough, and you can find code samples out there that look at xact_state() only and ignore @@trancount. This works in SQL 2012 and up, but there is a very important caveat if you are on SQL 2005 or SQL 2008. On these versions, xact_state() is only non-zero in user-defined transactions, that is, transactions started by BEGIN TRANSACTION. But if the transaction is a system transaction – which is the case in a trigger or with INSERT-EXEC – xact_state() is always 0, which means that in the general case you cannot tell whether your transaction is doomed or not. This example illustrates:
CREATE TABLE AmericaDrinks (a varchar(12) NOT NULL)
go
CREATE TRIGGER GoesHome ON AmericaDrinks FOR INSERT AS
   SET XACT_ABORT, NOCOUNT ON
   SELECT xact_state() AS xact_state1
   BEGIN TRY
      DECLARE @d int
      SELECT @d = a FROM inserted
   END TRY
   BEGIN CATCH
      SELECT xact_state() AS xact_state2
   END CATCH
go
INSERT AmericaDrinks(a) VALUES('a')
go
DROP TABLE AmericaDrinks
This immensely silly trigger produces an error that dooms the transaction, yet the output on SQL 2005 and SQL 2008 is:
xact_state1
-----------
0

xact_state2
-----------
0
(There is also an error message that I ignore here.) When I discovered this a couple of years ago, I filed a bug and lo and behold, Microsoft fixed this, so that in SQL 2012 and later, the output is:
xact_state1
-----------
1

xact_state2
-----------
-1
Don't be lured by the example into thinking that this only applies to triggers. There is no way you can tell whether your procedure is called from a trigger or as part of INSERT-EXEC, so this issue exists with all code on SQL 2005/2008. That is not to say that you cannot use xact_state() at all on these versions, but you cannot rely on it alone; you must also check @@trancount. Then again, when you run with XACT_ABORT ON, the transaction is always doomed, so hoping that xact_state() will be 1 when you are in a CATCH handler is futile.
There are six system functions that return one each of the seven components in an SQL Server error message: error_message(), error_number(), error_severity(), error_state(), error_procedure() and error_line(). There is no function to get the server name. You have already seen them in action in Part One when I presented error_handler_sp.
You can always call these functions, but if there is no CATCH handler on the stack, they return NULL. If there are multiple CATCH handlers active, they return information about the innermost error as illustrated by this corny example:
CREATE PROCEDURE ImNotSatisfied AS
   BEGIN TRY
      PRINT 'Outer error: ' + error_procedure() + ' - ' + error_message()
      PRINT 1/0
   END TRY
   BEGIN CATCH
      PRINT 'Inner error: ' + error_procedure() + ' - ' + error_message()
   END CATCH
   PRINT 'Outer error: ' + error_procedure() + ' - ' + error_message()
go
CREATE PROCEDURE BrainPolice AS
   SET NOCOUNT ON
   BEGIN TRY
      PRINT convert(int, 'ABC')
   END TRY
   BEGIN CATCH
      EXEC ImNotSatisfied
   END CATCH
go
EXEC BrainPolice
go
DROP PROCEDURE ImNotSatisfied, BrainPolice
The output is:
Outer error: BrainPolice - Conversion failed when converting the varchar value 'ABC' to data type int.
Inner error: ImNotSatisfied - Divide by zero error encountered.
Outer error: BrainPolice - Conversion failed when converting the varchar value 'ABC' to data type int.
I can't recall that I have ever had any practical use for this stacking behaviour of the functions, though. What definitely is a limitation is that these functions are scalar. That is, you can only get information about one error message, even if two were raised by the same statement. I have a feedback item that calls for rectification of this: Make it possible to retrieve all error messages in a CATCH handler. Votes are welcome!
With the RAISERROR statement you can raise your own errors. Here is one example:
IF NOT EXISTS (SELECT * FROM orders WHERE orderid = @orderid)
BEGIN
   RAISERROR('No such order %d.', 16, 1, @orderid)
   RETURN 1
END
RAISERROR takes three mandatory parameters: the message text, the severity level and the state. You can use parameter markers in the string, %d in the example. You provide values for these markers in additional parameters to RAISERROR, starting with the fourth. In this example this is @orderid.

The most commonly used parameter markers with RAISERROR are %s for string parameters and %d for integer parameters. And not to forget %% if you want to include a per-cent character in the message. There are a few more markers, and it is also possible to specify width and precision. Please refer to the RAISERROR topic in Books Online for the full details.
Using parameter markers is a mixed blessing, though. If you fat-fingeredly type:
RAISERROR('No such order %s.', 16, 1, @orderid)
Note that the parameter marker here is %s, one key to the left of the correct one. If you are not ambitious enough to test all your error messages, it may not be until you are in production that you are looking at this unhelpful message:
Msg 2786, Level 16, State 1, Line 2
The data type of substitution parameter 1 does not match the expected type of the format specification.
An alternative is to format the message yourself. This is perfectly possible, since RAISERROR accepts variables for all its parameters.
DECLARE @msg nvarchar(2048)
SELECT @msg = 'No such order ' + ltrim(str(@orderid)) + '.'
RAISERROR(@msg, 16, 1)
For this simple message, this is bulkier, but for more complex string formation it is likely to be simpler than using parameter markers.
That said, one could argue that best practice is to always use '%s' as the message parameter, and put the actual message as the fourth parameter. This example explains why.
DECLARE @msg nvarchar(2048)
SELECT @msg = 'I want a 5% raise'
RAISERROR(@msg, 16, 1)
RAISERROR('%s', 16, 1, @msg)
The output is:
Msg 2787, Level 16, State 1, Line 3
Invalid format specification: '% raise'.
Msg 50000, Level 16, State 1, Line 4
I want a 5% raise
By using the second pattern, you avoid accidents if the message includes a per-cent character. For a message you have full control over, this may be over the top, but in a general routine like error_handler_sp this is certainly a concern, and I use this pattern in that procedure.
There are a couple of WITH clauses that go with RAISERROR. The most commonly used is probably WITH NOWAIT. Normally, messages and result sets are buffered before they are sent to the client. WITH NOWAIT specifies that the message should be sent directly to the client without buffering. This is useful if you write a long-running script and you want some diagnostic output of the progress. Beware, though, that some APIs and tools may buffer on their side, thereby nullifying the effect of WITH NOWAIT. For instance, if you run this script in SSMS:
DECLARE @i int
SELECT @i = 1
WHILE @i < 1000
BEGIN
   RAISERROR('Message %d', 0, 1, @i) WITH NOWAIT
   WAITFOR DELAY '00:00:00.100'
   SELECT @i = @i + 1
END
You will see that it continuously prints Message 1, Message 2 etc up to Message 500. From there it pauses and then prints them 50 at a time. This does not happen with SQLCMD, nor does it happen with the old Query Analyzer that shipped with SQL 2000.
Another option for RAISERROR is WITH LOG; this option causes the first 440 characters of the message to be written to the Windows Event Log. To use this option, the user must be a member of sysadmin or have the permission ALTER TRACE. (Yes, you read correctly, ALTER TRACE. Quite an odd application of that permission.)
You can specify any severity level from 0 to 25 with RAISERROR with the same result as messages produced inside SQL Server. That is, with level 0, tools will only print the message. With level 1 to 9, the command-line tools OSQL and SQLCMD do not print a header, but SSMS does. But for some curious reason, SSMS 2012 and later prints the message upside down. That is, the header line comes second.
RAISERROR('Drums are too noisy and there are no corners to hide in', 9, 1)
results in
Drums are too noisy and there are no corners to hide in
Msg 50000, Level 9, State 1
SSMS 2005 and 2008 have them in the normal order. In any case, levels 1 to 9 are not errors.
Level 10 only prints the message, since level 10 is forced to zero. And as long as you use a severity level ≤ 10, the message is always sent to the client and cannot be trapped with TRY-CATCH. If the severity is ≥ 11, this is an error and it will be caught by TRY-CATCH. To specify level 19 or higher you must add the clause WITH LOG and thus you need to be a member of sysadmin, or have ALTER TRACE permission, to use these levels. And if the level is 20 or higher, yes, the connection is closed. Good for pranks if nothing else.
Now, in practice, 99 % of the time you should use level 16. Level 0 (or 10) is good when you want to use WITH NOWAIT for diagnostic messages. If you don't care about WITH NOWAIT, use PRINT instead, which is less bulky. The only other severity level that very occasionally may be useful is level 20, if you are writing an administrative script and want a quick exit. But be very careful: maybe someone has set up alerts for high severity levels. See below about using state as an alternative. I would also like to add a special warning for severity levels 17-19, because SSMS handles such messages differently, something I explain in the section Error Handling in Client Code at the very end of Part Three.
What about state, the third parameter to RAISERROR? Many people would just always pass 1 for state and be done with it. Nevertheless, if you produce the same error message in several places, you could use different values for the state parameter, so that when you get an error message from a customer, you know where in your code the error message was produced. Observe that state numbers must be in the range 1 to 255. The state number 127 has a special function if you run a script in the command-line tools OSQL and SQLCMD: when they see this state they exit and set the DOS variable %ERRORLEVEL%, so you can test for the condition in a BAT file. SQL Server Management Studio does not react to state 127, not even in SQLCMD mode.
When you specify a message text with RAISERROR, the error number will always be 50000. However, you can also specify a number > 50000. In this case this should be a message that you previously have added with sp_addmessage as in this example:
EXEC sp_addmessage 51234, 16, 'My test message with my own parameter = %d'
go
RAISERROR(51234, -1, 1, 4711)
go
EXEC sp_dropmessage 51234
The output is:
Msg 51234, Level 16, State 1, Line 1
My test message with my own parameter = 4711
When you define the message, you also define the default severity which applies if you specify -1 for the severity to RAISERROR. If you specify a severity level ≥ 0 in the RAISERROR statement, this overrides the message definition.
sp_addmessage permits you to enter the same message number multiple times for different languages, so that English, Spanish, French, Russian users etc all can get error messages in their own languages. This is nothing I cover here, but I refer you to Books Online for details.
User-defined messages added with sp_addmessage are a very old feature in SQL Server, and the feature has not stood the test of time well. I definitely recommend against using it. I see three major problems:
If you want a central store of all error messages for documentation or internationalisation, I think it's better to use an application-specific table where the key is a mnemonic code. To raise errors, you call a stored procedure that you pass the code you have defined for the error. SqlEventLog that I present in Part Three features this ability and also includes the possibility to localise the messages.
If you for some reason want to emulate system errors from SQL Server, you can do this, but only for error numbers ≥ 13000. That is, if you run:
RAISERROR(15004, -1, 1)
RAISERROR(1205, -1, 1)
You get the output:
Msg 15004, Level 16, State 1, Line 1
Name cannot be NULL.
Msg 2732, Level 16, State 1, Line 2
Error number 1205 is invalid. The number must be from 13000 through 2147483647 and it cannot be 50000.
The reason for this seemingly arbitrary limit is legacy. In very old versions of SQL Server the reserved range for system messages was only 1-12999.
Note: there is an older syntax for RAISERROR where you can only specify text and message number as in this example:
RAISERROR 50000 'Hilfe'
This syntax has been deprecated for a long time, and it was removed in SQL 2012. It goes without saying that you should not use it.
SQL 2012 added the command ;THROW. There are two forms of the command, of which we saw the first one already in Part One. That is,
;THROW
without any arguments. This form is only permitted between BEGIN CATCH and END CATCH. As we learnt in Part One, it reraises the original error exactly as-is, and if there is more than one original message, all of them are reraised.
The second form of ;THROW is a stripped-down version of RAISERROR:
;THROW msgno, message, state
All parameters are mandatory. Here is an example:
;THROW 50000, 'The order does not exist', 1
The following can be noted: the severity is always 16 with this form of ;THROW; there is no way to specify a different level. The message number must be ≥ 50000, but in difference to RAISERROR, the number does not have to be defined with sp_addmessage. There is no support for parameter markers in the message text, so with

;THROW 50000, 'The order %d does not exist', 1

the %d is printed as-is rather than being replaced by a parameter value. On the other hand, all three parameters can be given as variables:

;THROW @msgno, @msg, @state
Given that ;THROW with parameters is less versatile than RAISERROR, without offering any particular simplicity in syntax, there is little reason why you would use it. This applies even more if you have an existing system where you already use RAISERROR, since the subtle differences between the commands are bound to cause confusion.
I said already in Part One that while Microsoft presents the command as THROW without the leading semicolon, you should always think of it as ;THROW. Here is why. Consider:
BEGIN CATCH
   IF @@trancount > 0 ROLLBACK TRANSACTION
   ;THROW
END CATCH
Since semicolon really is a statement terminator, Microsoft thinks we should write:
BEGIN CATCH
   IF @@trancount > 0 ROLLBACK TRANSACTION; THROW
END CATCH
But the semicolon is optional, and many people, including yours truly, write T‑SQL without terminating semicolons. Officially, this is deprecated, but as long as Microsoft does not offer any option to enforce the use of semicolons, that deprecation is void and meaningless. And even if you have the habit of terminating your SQL statements with semicolons, you can accidentally happen to leave one out. Or delete one because you happened to press backspace when the cursor was in the wrong place. Thus, there are a number of ways you could end up with:
BEGIN CATCH
   IF @@trancount > 0 ROLLBACK TRANSACTION
   THROW
END CATCH
And what do you think this is? A syntax error? Nope, it isn't. I have not covered this yet, but ROLLBACK TRANSACTION accepts an argument which may be an identifier. (We will look at this in the chapter Additional Error and Transaction Control.) The above is the same as
BEGIN CATCH
   IF @@trancount > 0 ROLLBACK TRANSACTION THROW
END CATCH
If there is an active transaction, you will get this error:
Msg 6401, Level 16, State 1, Line 2
Cannot roll back THROW. No transaction or savepoint of that name was found.
An error message that masks the original error is extremely frustrating, because all you know is that something went wrong, but you don't know what or where. But it is way worse if there is no active transaction, since in that case the CATCH handler is completely silent, luring you to think that everything went well when it didn't. That is extremely bad.
And don't laugh. These accidents happen in real life. A fellow MVP told me that he had designed a template with ROLLBACK TRANSACTION and THROW for his developers. In his template, he had a semicolon after TRANSACTION, but somewhere along the way the semicolon was lost, and his organisation now has hundreds of stored procedures with ROLLBACK TRANSACTION THROW.
You may ask how Microsoft could let this happen. Microsoft could have prevented the issue by making THROW a reserved keyword. However, Microsoft avoids introducing new reserved keywords, since this would break existing applications using the new keyword as an identifier. And certainly, throw is not an entirely unlikely name for a stored procedure. Most of the time, adding a new command that is not a reserved keyword is not that big an issue. As long as the command has some complexity in the grammar, it is very likely that you will get a syntax error if you fail to terminate the previous statement with a semicolon. What is special with THROW is that it can be used without arguments, which is why there is a much higher risk that it fits with the previous statement. And very unfortunately, it fits well with the command which is most likely to precede it in a CATCH handler.
Remember what I said in the introduction, that SQL Server is designed for maximum confusion and when they run out of ideas they do something with error handling? Here you see a prime example of this. Or more seriously, I completely fail to see why Microsoft wasted their resources on adding a new command which is a limited version of RAISERROR. (And of which the syntax is very similar to the old version of RAISERROR which they retired in the same version as they added ;THROW!) I also see little reason to mix terminology. Some languages use "throw" as a metaphor to invoke exceptions; others, like T‑SQL, use "raise". So why throw in oranges among the apples?
Being able to reraise an error with a single statement is a good thing, but they could just have added a parameterless version of RAISERROR and this horrible trap would not have been there. To wit, RAISERROR is a reserved keyword, so it can never be interpreted as a transaction name.
Now that this bad design has made it into the language, it is unlikely that Microsoft would fix it by making THROW a reserved keyword. So the best we can do is to think of the command as ;THROW. Or avoid it altogether.
Just like ;THROW, PRINT is a limited form of RAISERROR, but more meaningful because of its simplicity. PRINT produces a message which has severity 0. The message number is always 0 and the state is always 1. PRINT takes a single argument which is the message text. What is nice with PRINT is that the message text does not have to be a constant or a variable; it can be an expression, so that you can say things like:
PRINT 'It took ' + ltrim(str(datediff(ms, @d, getdate()))) + ' ms.'
The expression cannot include subqueries, though.
In difference to RAISERROR and ;THROW, the per-cent character has no special function with PRINT. This produces what you would expect:
PRINT 'I want a 5% raise'
PRINT is mainly a debug aid, and nothing you would use in production code. Note that the output from PRINT is buffered. If you want the output to display directly, you need to use RAISERROR WITH NOWAIT and specify severity = 0.
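That is, for progress messages the two statements below are roughly equivalent, except for the buffering (a quick sketch):

PRINT 'Reached checkpoint 1'                          -- may sit in the buffer for a while
RAISERROR('Reached checkpoint 1', 0, 1) WITH NOWAIT   -- sent to the client directly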
formatmessage() is a built-in function that performs the same formatting as RAISERROR and returns the formatted string. In SQL 2005 and SQL 2008 the first parameter must be a message number > 50000 that has been added with sp_addmessage. A possible use case for this is that you want to use the localisation feature of sp_addmessage and then store the message in a table rather than raising it as an error. Here is an example:
EXEC sp_addmessage 50001, 13, 'My test message with my own parameter = %d.'
go
DECLARE @msg nvarchar(2048) = formatmessage(50001, 4711)
SELECT @msg
go
EXEC sp_dropmessage 50001
If this sounds far-fetched, well it is.
In SQL 2012 and later, the first argument to formatmessage() can also be a string. The idea is if you want to use ;THROW and also want the formatting capability of RAISERROR, you could use formatmessage():
DECLARE @msg nvarchar(2048)
SELECT @msg = formatmessage('Order %d is missing', @orderid)
;THROW 50000, @msg, 1
Just be careful if you want to include a per-cent character in the string. You need to quadruple it:
DECLARE @orderid int = 1000
DECLARE @msg nvarchar(2048)
SELECT @msg = formatmessage('Order %d is not 100%%%% correct!', @orderid)
;THROW 50000, @msg, 1
@@error is a system function that was the prime vehicle for error checking in SQL 2000 and earlier versions, that is, before TRY-CATCH entered the scene. This function returns the outcome of the most recently executed statement with 0 meaning success while a non-zero number means that an error with this number occurred.
"Most recently executed statement", means exactly that, even the most trivial (with two exceptions noted below). Consider
CREATE TABLE HungryFreaks(Daddy varchar(10) NOT NULL)
INSERT HungryFreaks(Daddy) VALUES (replicate('X', 12))
IF @@error <> 0
   PRINT 'Error ' + ltrim(str(@@error)) + ' occurred'
The output is:
Msg 8152, Level 16, State 14, Line 2
String or binary data would be truncated.
The statement has been terminated.
Error 0 occurred
The IF statement completes successfully, so when the PRINT statement executes, @@error returns 0. Because of this very volatile nature of @@error, you always had to capture @@error in a local variable, immediately after the statement. Thus, back in the days of SQL 2000, people had to litter their code with:
-- Some statement
SELECT @err = @@error
IF @err <> 0
BEGIN
   IF @@trancount > 0 ROLLBACK TRANSACTION
   RETURN 1
END
You could hardly see the actual logic for all the error-handling trees in those days. Thankfully, with the introduction of TRY-CATCH, there are very few situations where you need to use @@error. We will encounter one or two cases in this series of articles.
I have found two commands that do not affect @@error, the RETURN and GOTO statements. The output of
CREATE PROCEDURE BeSuchAFool AS
   INSERT HungryFreaks(Daddy)
      VALUES (replicate('X', 12))
   RETURN
go
CREATE PROCEDURE HowCouldI AS
   EXEC BeSuchAFool
   GOTO DownThere
   PRINT 'Not reached'
DownThere:
   PRINT 'Error ' + ltrim(str(@@error)) + ' occurred'
go
EXEC HowCouldI
is:
Msg 8152, Level 16, State 14, Line 2
String or binary data would be truncated.
The statement has been terminated.
Error 8152 occurred
Here @@error survives two statements after the failing INSERT. Note that this only applies to RETURN without an argument. Change RETURN in BeSuchAFool to RETURN 12, and @@error will now be 0. Have I already said that maximum confusion is a design goal for SQL Server?
I will close this section on @@error with a completely useless piece of trivia. There is a third option for the WITH clause of RAISERROR: SETERROR, which sets @@error to 50000 even if the severity is < 11. For instance:
RAISERROR('Cause round things are, ...are boring', 8, 1) WITH SETERROR
SELECT @@error AS [@@error]
produces:
Cause round things are, ...are boring
Msg 50000, Level 8, State 1
@@error
-----------
50000
I leave it as an exercise to the reader to find an actual use case for this. Send me a postcard if you come up with something.
A stored procedure produces a return value that you can capture in this way:
EXEC @ret = some_sp
The return value is always an integer number. In theory, you could use it to return the id of an inserted row, as in this example:
CREATE PROCEDURE Deseri @par1 int, @par2, ... AS
   INSERT CheapThrils(col1, col2, ...)
      VALUES (@par1, @par2, ...)
   RETURN scope_identity()
But don't do that, because the convention is very much that the return value from a stored procedure is used for one thing: to indicate success or failure, with 0 for success and anything else indicating an error. Return the id value with an OUTPUT parameter or SELECT statement instead.
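For illustration, here is a hedged sketch of that convention. The shape of the table is my own assumption: CheapThrils is taken to have an IDENTITY key and a single data column col1.
CREATE PROCEDURE Deseri2 @par1 int, @id int OUTPUT AS
   -- CheapThrils is assumed to have an IDENTITY key column
   INSERT CheapThrils(col1) VALUES (@par1)
   SELECT @id = scope_identity()   -- the id goes into the OUTPUT parameter
   RETURN 0                        -- the return value only signals success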
The RETURN statement takes a single argument which is an integer expression. If you don't provide any argument to RETURN, SQL Server will set an implicit return value. The same is true if you don't have any RETURN statement at all – but only if execution reaches the end of the procedure. To clarify what this means, let's look at some examples. We have this procedure:
CREATE PROCEDURE MotherPeople @int int AS
   IF @int = 2
      SELECT * FROM NoSuchTable
   ELSE IF @int = 3
      RETURN 89
   ELSE IF @int = 4
      RETURN NULL
   ELSE IF @int = 5
      SELECT @int = @int / 0
   PRINT 'Fin de procedure.'
To play with this procedure we use the batch below to call the procedure with different values. We set XACT_ABORT to OFF, because some of the examples generate an error and when XACT_ABORT is ON, there is no return value at all, as the entire batch is aborted.
SET XACT_ABORT OFF
DECLARE @ret int
SELECT @ret = 45
EXEC @ret = MotherPeople 1
SELECT @ret AS [@ret], @@error AS [@@error]
This batch produces:
Fin de procedure.
@ret @@error
----------- -----------
0 0
The procedure reached the final statement, and the return value is 0 since execution was successful. Now change the parameter to 'A', that is, an illegal value. This causes the call to the procedure to fail. The output is:
Msg 8114, Level 16, State 1, Procedure MotherPeople, Line 0
Error converting data type varchar to int.
@ret @@error
----------- -----------
45 8114
Since the call failed, no value is returned, and @ret retains its value. But @@error is set, so a conclusion of this is that if we want to do old-style error-checking, we need to check both @ret and @@error. (Because typically @ret would have the value 0 before the call. I explicitly set it to 45 in this example to demonstrate that it is left unchanged.)
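That is, old-style checking of a procedure call has to look something like this sketch:
DECLARE @ret int, @err int
SELECT @ret = 0
EXEC @ret = MotherPeople 'A'
SELECT @err = @@error            -- capture immediately, before anything else runs
IF @err <> 0 OR @ret <> 0
BEGIN
   -- handle the failure: roll back, RETURN etc.
   PRINT 'The call failed.'
END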
Now change the parameter value to 2. This causes the procedure to attempt to access the non-existing table. While this is a run-time error, it is also a compilation error (I will discuss this in more detail later), and the effect is the same as when the call fails:
Msg 208, Level 16, State 1, Procedure MotherPeople, Line 3
Invalid object name 'NoSuchTable'.
@ret @@error
----------- -----------
45 208
@ret retains its value, but @@error is set.
When we pass 3 as the parameter value, the procedure enters the branch where there is a RETURN statement with an explicit value. This does not result in an error, and the output is:
@ret @@error
----------- -----------
89 0
Now we pass the value 4, which leads to a NULL value being returned. This has a funny consequence:
The 'MotherPeople' procedure attempted to return a status of NULL, which is not allowed. A status of 0 will be returned instead.
@ret @@error
----------- -----------
0 0
As you can see from the message, returning NULL is not legal. However, this does not yield an error message, only a warning. And, as the message says, the return value is 0, which indicates that execution was successful, although this is dubious at best. It would be corny to put RETURN NULL in a stored procedure, but you may have something like RETURN @stat where you have failed to initialise @stat. Did that procedure really execute successfully?
Don't jump to the conclusion that the return value can never be NULL, because that can happen when you call procedures through a linked server, something I discuss in Appendix 1.
Finally, we call MotherPeople with the value 5 which results in a run-time error. Note that in this case, the execution reaches the end of the procedure. There is no RETURN statement. What is the output in this case?
Msg 8134, Level 16, State 1, Procedure MotherPeople, Line 7
Divide by zero error encountered.
Fin de procedure.
@ret @@error
----------- -----------
-6 0
Where did this -6 come from? When a procedure generates an error during execution, SQL Server sets the return value to ten minus the severity. In this example, the severity was 16, hence the return value of -6. Note that @@error is 0, since one more statement was executed after the error, and this statement completed successfully.
This default error code is not very reliable. Consider:
CREATE PROCEDURE HotPoop AS
   EXEC MotherPeople 5
go
DECLARE @ret int
EXEC @ret = HotPoop
SELECT @ret
In this case the return value is 0, because the error occurs in the nested procedure, not in HotPoop itself.
What if the error is caught locally in TRY-CATCH? Sometimes (not very often), it makes sense to attempt to first insert a row into a table, and if this fails with a primary-key violation instead run an UPDATE from the CATCH handler. In this case, execution is successful as far as you are concerned. But look at this example:
CREATE PROCEDURE BrownShoesDontMakeIt AS
BEGIN TRY
   RAISERROR('This error is swallowed', 14, 1)
END TRY
BEGIN CATCH
   -- RETURN
END CATCH
go
DECLARE @ret int
EXEC @ret = BrownShoesDontMakeIt
SELECT @ret
The value of @ret is -4. That is, in the opinion of SQL Server there was an error. The result is the same if you uncomment the RETURN statement.
The lesson of this is that you should not rely on these default return values, but if you want to use return values, you should always provide an explicit return value. That is, the procedure HotPoop should read:
CREATE PROCEDURE HotPoop AS
   DECLARE @ret int
   EXEC @ret = MotherPeople 5
   RETURN @ret
Likewise, the CATCH handler in BrownShoesDontMakeIt should include RETURN 0 to hide the error.
As you can see, all this is quite confusing, and you can imagine how fun we had it way back in the days of SQL 2000. Thankfully, TRY-CATCH relieves us from most of these chores, although there remain a few situations where you need to look at return values.
The one thing you need to consider about RETURN in your own code is to always include a RETURN statement with an explicit return value last in your CATCH handler, just in case you are called by a procedure written by an old fart who has not caught up with modern times. This is particularly important if you work in a system with a lot of legacy code written for SQL 2000 where the system itself has not caught up.
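That is, the tail of a CATCH handler would look something like this sketch. error_handler_sp is the procedure from Part One; the exact non-zero number is a matter of taste:
BEGIN CATCH
   IF @@trancount > 0 ROLLBACK TRANSACTION
   EXEC error_handler_sp
   RETURN 55555   -- explicit non-zero value for old-style callers
END CATCH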
The functions @@rowcount and rowcount_big() return the number of rows affected by the most recently executed statement. The difference between the two is the data type: @@rowcount returns int while rowcount_big() returns bigint.
These functions must be used with utmost care. Like @@error they are set by every statement. (With one exception from what I can tell: GOTO. Unlike @@error, they are affected by RETURN without a parameter.) That makes it imperative to capture the value into a local variable. Furthermore, you should put the statement where you do this directly adjacent to the statement it belongs to. For instance, if you want to add a check that your UPDATE statement actually updated any rows, this is how you should do it:
UPDATE DukeOfPrunes
SET    col = @value
WHERE  keycol = @keyval

SELECT @rowc = @@rowcount

IF @rowc = 0
BEGIN
   INSERT DukeOfPrunes(keycol, col) VALUES (@keyval, @value)
END
If you would put @@rowcount directly in the IF statement, someone might later put something between the UPDATE and IF. For instance, someone might decide to wrap the UPDATE in BEGIN TRY so that you have:
BEGIN TRY
   UPDATE DukeOfPrunes
   SET    col = @value
   WHERE  keycol = @keyval
END TRY
BEGIN CATCH
   -- Some error handling here
END CATCH
IF @@rowcount = 0
BEGIN
   INSERT DukeOfPrunes(keycol, col) VALUES (@keyval, @value)
END
Entering or exiting a TRY-CATCH block affects @@rowcount, so in this example no rows would ever be inserted, no matter whether the UPDATE hit any rows. Exactly this happened in a stored procedure in the system I normally work with. Took us a while to figure out what was going on.
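The remedy is the same as before: capture @@rowcount into a variable on the line directly after the UPDATE, so that later additions such as the TRY block become harmless. A sketch:
BEGIN TRY
   UPDATE DukeOfPrunes SET col = @value WHERE keycol = @keyval
   SELECT @rowc = @@rowcount   -- must stay glued to the UPDATE
END TRY
BEGIN CATCH
   -- Some error handling here
END CATCH
IF @rowc = 0
BEGIN
   INSERT DukeOfPrunes(keycol, col) VALUES (@keyval, @value)
END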
If you are fresh to SQL Server but have experience from other environments, you may have some expectations of what should happen if an error occurs during execution. You may think that if there is no CATCH handler execution will be aborted and any open transaction will be rolled back, at least if it was started in the batch that was aborted. You might also think that if there is a CATCH handler, that you should be able to take an alternate action and commit the transaction.
The true story is nothing like this. The most striking thing about error handling in SQL Server is that it is so inconsistent. Depending on the error, SQL Server may cut the connection, abort the batch, only terminate the current scope (i.e. stored procedure, UDF etc), terminate no more than the statement or really nothing at all. Likewise, the error may or may not roll back the transaction. And, as I covered already in Part One, not all errors can be caught in the scope they occur. Finally, in a CATCH handler, an open transaction may or may not be doomed. Recall that if the transaction is doomed, you cannot make further updates and you must roll back.
It's not that SQL Server rolls a dice every time an error occurs. No, every error situation has its well-defined behaviour, but if you want to implement a generic error handling that takes care of all unforeseen errors, it is dice-rolling for all practical purposes.
In the table below I have identified no less than eleven classes, as defined from four different properties: how much is terminated (nothing, the statement, the scope, the batch or the connection), whether any open transaction is rolled back, whether the error can be caught with TRY-CATCH, and whether it dooms the transaction when caught. For some classes the behaviour is different depending on whether SET XACT_ABORT ON is in effect or not, which results in this bewildering summary:
(OFF and ON refer to the setting of SET XACT_ABORT. Aborts and Rolls back describe the behaviour without TRY-CATCH; Catchable and Dooms the behaviour with TRY-CATCH.)

Class | Name                               | Aborts (OFF) | Aborts (ON) | Rolls back (OFF) | Rolls back (ON) | Catchable        | Dooms (OFF) | Dooms (ON)
------+------------------------------------+--------------+-------------+------------------+-----------------+------------------+-------------+-----------
1     | Fatal errors                       | Connection   | Connection  | Yes              | Yes             | No               | n/a         | n/a
2     | Batch-aborting                     | Batch        | Batch       | Yes              | Yes             | Yes              | Yes         | Yes
3     | Batch-only aborting                | Batch        | Batch       | No               | Yes             | Yes              | No          | Yes
4     | Statement-terminating              | Statement    | Batch       | No               | Yes             | Yes              | No          | Yes
5     | Terminates nothing at all          | Nothing      | Nothing     | No               | No              | Yes              | Yes/No      | Yes
6     | Compilation: syntax errors         | (Statement)  | (Statement) | No               | No              | Yes              | No          | Yes
7     | Compilation: binding errors        | Scope        | Batch       | No               | Yes             | Outer scope only | No          | Yes
8     | Compilation: optimisation          | Batch        | Batch       | Yes              | Yes             | Outer scope only | Yes         | Yes
9     | Attention signal                   | Batch        | Batch       | No               | Yes             | No               | n/a         | n/a
10    | Informational and warning messages | Nothing      | Nothing     | No               | No              | No               | n/a         | n/a
11    | Uncatchable errors                 | Varying      | Varying     | Varying          | Varying         | No               | n/a         | n/a
I should be fair and say that classes nine and ten are not really errors in the true sense, but I include them here for the sake of completeness. (And class nine is a very important one to understand.)
In the following, I will take a look at each of these classes, explain the behaviour in detail and give examples of such errors. I have already given the recommendation that your stored procedures should include the command SET XACT_ABORT, NOCOUNT ON. If you look at the table above, you see that with XACT_ABORT in effect, there is some higher level of consistency. For instance, the transaction is always doomed. In the following, I will show many examples where I set XACT_ABORT to OFF, so that you can get an understanding of why you should avoid this default setting.
These are errors that terminate your connection. They can certainly be confusing when they occur, because the only error message you might see is
Msg 0, Level 20, State 0, Line 0
A severe error occurred on the current command. The results, if any, should be discarded.
This is the typical error message you get when you run a .NET application. In a native application using ODBC, you may see something like:
TCP Provider: An existing connection was forcibly closed by the remote host.
Communication link failure
In the case of ODBC, the error message may not come in response to the failing batch, but only appear when the application submits the next batch to SQL Server. (Because this is when the API realises that the connection is dead.)
It's important to understand that these errors are, with two exceptions, never your fault. We have already looked at one of these cases: RAISERROR with a severity level ≥ 20:
RAISERROR('Bye-bye!', 20, 1) WITH LOG
The other exception is if you engage in unsafe assemblies, extended stored procedures and the like. That is, you write misbehaving code that operates outside of the protected realm of T‑SQL and safe CLR assemblies. But as long as you stay inside this realm, fatal errors that abort the connection are due to reasons beyond your control. SQL Server or your client API terminates the connection because something unexpected happened during execution and they no longer have control of the situation, which is why further execution would not be safe. In really bad cases, the entire SQL Server process crashes. Typical causes of such errors are bugs in SQL Server itself or in the client API, hardware problems and corruption.
A fatal error normally generates a stack dump in the SQL Server error log. If there is no dump this indicates that the fatal error occurred outside SQL Server, for instance in the API. It is unlikely that the dump as such will help you to resolve the error. However, if you google some pertinent parts of it, you may be lucky to find a KB article that points to a fix for the issue.
You cannot catch fatal errors with TRY-CATCH or handle them in any other way in your SQL code. When you're dead, you're dead. From an error-handling perspective, a fatal error is the same thing as a power outage. That is, you don't need to write any code to handle the error as such, but you need to use a transaction for pieces of work that should be atomic, that is, carried out in full or not at all. (Note that I'm talking about your SQL code. Your client code should still handle a fatal error from SQL Server gracefully.)
...but SQL Server would not be SQL Server if what I have said here had been 100 % true. In Part Three, we will see an example of a level 23 error with DBCC CHECKDB that is catchable.
This is the strongest reaction SQL Server can take to a user error. These are errors that abort execution on the spot if there is no CATCH handler on the stack and they also roll back any open transaction. If there is a CATCH handler, the error is caught, but any open transaction is doomed and must be rolled back. The behaviour is the same, no matter whether XACT_ABORT is ON or OFF.
In many respects, the behaviour when there is no CATCH handler is quite sound. Your code failed to fulfil its contract, and if you have no alternate plan (i.e. a CATCH handler), any further execution could lead you onto an unknown path where you could produce incorrect or inconsistent results that you return to the user or persist in the database. It is also important that the transaction you started is rolled back, so that your half-completed work is not committed by mistake later. You could argue that if the transaction was already in progress when the batch started running, the transaction should be left around. I will return to this sentiment in the next section.
What you may consider a pity is that the transaction is always doomed when you enter the CATCH handler, but that is just the way it works.
This is one of the two general classes for run-time errors, so many errors fall into this category. One example is a conversion error:
SELECT convert(int, 'ABC')
I defer further comments on examples of this class until we have looked at statement-terminating errors.
Here is an example how you can identify that a certain error is batch-aborting:
SET XACT_ABORT OFF
CREATE TABLE Stinkfoot (a int NOT NULL)
BEGIN TRANSACTION
INSERT Stinkfoot(a) VALUES (95)
SELECT TOP 1 convert(int, name) FROM sys.columns
PRINT 'We never arrive here'
go
PRINT '@@trancount is ' + ltrim(str(@@trancount)) + '.'
SELECT a FROM Stinkfoot
IF @@trancount > 0 ROLLBACK TRANSACTION
DROP TABLE Stinkfoot
The output is:
(1 row(s) affected)
-----------
Msg 245, Level 16, State 1, Line 5
Conversion failed when converting the nvarchar value 'LineId' to data type int.
@@trancount is 0.
a
-----------
(0 row(s) affected)
The conversion error is batch-aborting, and therefore the PRINT statement is not executed. In the next batch, we check @@trancount which is 0. We also see that the row we inserted into the table Stinkfoot is not there.
When XACT_ABORT is ON this class behaves just like the previous one: the batch is aborted if there is no CATCH handler, the transaction is rolled back, and in a CATCH handler the transaction is doomed.
But when XACT_ABORT is OFF, the behaviour is different: the batch is still aborted if there is no CATCH handler, so far so good. But any open transaction is not rolled back, and in a CATCH handler the transaction is not doomed. That is, if you want to take an alternate path, you can do that. That could be considered a good thing, but since this is not extremely common, this is entirely overshadowed by the dangerous behaviour when you don't have a CATCH handler.
Say that a casual programmer writes a stored procedure that starts a transaction, and for whatever reason he does not use TRY-CATCH, and nor is he aware of SET XACT_ABORT ON. The procedure experiences an error in this class. The batch is aborted. The transaction is still alive. The application is unaware of that a transaction ever was started and just jogs along with the open transaction. With an "old-fashioned" client that stays connected all day long, this can cause severe problems. As the application performs more and more updates on the connection, it acquires more and more locks that it never releases and more and more processes are blocked. At end of the day, the user closes down the application and the transaction is finally rolled back – and all changes the user entered after the error occurred are lost.
The damage is less with a "modern" application that follows the pattern of connecting and disconnecting in every method. Under the covers, the API keeps the connection open and reuses the connection if there is a new connection request with the same connection string. In this case the orphaned transaction is rolled back when the connection is reused. The API physically closes the connection if there has not been any new connection attempt in some period of time, typically 60 seconds. While the damage is not equally bad as with an old-style application, 60 seconds of blocking can still cause quite some problems in a busy OLTP system.
You could argue that if the client started the transaction prior to submitting the batch, the behaviour from SQL Server is correct, as the client owns the transaction and should decide what should happen with it. Maybe roll back, maybe take an alternate action and commit in the end. But sorry, this is hogwash. If the client called a stored procedure, the client has no idea what updates the stored procedure performed before the error occurred. All the client knows is that the stored procedure was not able to fulfil its contract. There is only one sensible action the client can take in this situation: roll back the transaction. Committing the transaction could lead to the half-baked work of the stored procedure being persisted. Yes, in the case where the client submitted a batch which it knows includes exactly one update operation, the client still has control over the situation. But cross your heart: how often do you write code where you need to continue the transaction after an error? No, for a small benefit, Microsoft has opened a hole with a big risk for damage. (A knowledgeable reader may at this point mutter something about savepoints. Savepoints are useless, but we will come to that later.)
So what errors can cause this bad behaviour? If you are on SQL 2005 or SQL 2008, you don't need to lose sleep over this, because to my knowledge, this behaviour first appeared in SQL 2012, which introduced two such errors. One is when the contract defined by the feature EXECUTE WITH RESULT SETS is violated. Since such violations should be caught in test, it is not the most likely error to occur at run-time in a production system. Thus, you could say that it is not a very big deal.
The other error on the other hand is a big deal, since it is a general-purpose one. To wit, I'm talking about ;THROW:
SET XACT_ABORT OFF
CREATE TABLE Apostrophe (a int NOT NULL)
BEGIN TRANSACTION
INSERT Apostrophe(a) VALUES (95)
; THROW 50000, 'The crux of the biscuit', 1
PRINT 'We never arrive here'
go
PRINT '@@trancount is ' + ltrim(str(@@trancount)) + '.'
SELECT a FROM Apostrophe
IF @@trancount > 0 ROLLBACK TRANSACTION
DROP TABLE Apostrophe
The output is:
Msg 50000, Level 16, State 1, Line 5
The crux of the biscuit
@@trancount is 1.
a
-----------
95
The PRINT statement is not executed, that is, the batch was aborted, but in the next batch @@trancount is 1, and the value inserted into table Apostrophe is still there.
The behaviour is essentially the same if you use ;THROW to reraise an error:
SET XACT_ABORT OFF
DECLARE @t float
BEGIN TRY
   CREATE TABLE Nanook (RubsIt int NOT NULL)
   BEGIN TRANSACTION
   INSERT Nanook (RubsIt) VALUES (65)
   -- SELECT @t = sqrt(pi() - @@spid)
   SELECT @t = convert(tinyint, pi() - @@spid)
END TRY
BEGIN CATCH
   ; THROW
END CATCH
PRINT 'This does not print'
go
PRINT '@@trancount is ' + ltrim(str(@@trancount)) + '.'
SELECT RubsIt FROM Nanook
IF @@trancount > 0 ROLLBACK TRANSACTION
DROP TABLE Nanook
The output is:
Msg 232, Level 16, State 1, Line 8
Arithmetic overflow error for type tinyint, value = -47.858407.
@@trancount is 1.
RubsIt
-----------
65
As above, the PRINT statement for This does not print was not executed, @@trancount was still 1 in the next batch and the data remained in Nanook. This does not mean that ;THROW in a CATCH handler always behaves this way. Rather, it depends on the nature of the error. In the example above, uncomment the line with the call to sqrt(), and the output changes to:
Msg 3623, Level 16, State 1, Line 6
An invalid floating point operation occurred.
@@trancount is 0.
RubsIt
-----------
Error 3623 is batch-aborting and dooms the transaction, so in this case the transaction is rolled back, and the row inserted inside the transaction is not there.
The behaviour of ;THROW is definitely nasty, and that's why I recommend against it. But to be fair, ;THROW is mainly nasty for the casual programmer. Once you understand how it behaves, you can fairly easily avoid the pitfalls.
As I have said before, your CATCH handler should always roll back the transaction before you reraise:
BEGIN CATCH
   IF @@trancount > 0 ROLLBACK TRANSACTION
   ; THROW
END CATCH
Before I move on to the next section, I should point out that later versions of SQL Server have added more errors in this category. I have not kept track of all of them, but there are a couple of errors in this class related to the In-Memory OLTP engine, which I cover in a separate chapter.
This is the other main class of errors besides batch-termination with rollback. As long as XACT_ABORT is ON, these two classes of errors behave the same: without a CATCH handler they abort the batch, and any transaction is rolled back. In a CATCH handler an open transaction is doomed.
When XACT_ABORT is OFF, the behaviour is entirely different. The execution of the current statement is terminated and if the statement performs some change to the database, be that INSERT, ALTER TABLE, GRANT etc, the effect of that statement is rolled back. However, any open transaction is not rolled back. If there is no CATCH handler on the stack, execution continues on the next statement. If there is a CATCH handler, the error is caught. The transaction is not doomed, so you can take alternate actions or ignore the error and still commit.
Here is an example that demonstrates the principle:
CREATE TABLE Regyptian (Strut int NOT NULL CHECK (Strut > 0))
go
SET XACT_ABORT OFF
BEGIN TRANSACTION
INSERT Regyptian (Strut) SELECT 1
INSERT Regyptian (Strut) SELECT 2 UNION ALL SELECT 9 UNION ALL SELECT -1
INSERT Regyptian (Strut) SELECT 10
COMMIT TRANSACTION
go
SELECT Strut FROM Regyptian
The output is:
(1 row(s) affected)
Msg 547, Level 16, State 0, Line 3
The INSERT statement conflicted with the CHECK constraint "CK__Regyptian__Strut__1920BF5C". The conflict occurred in database "tempdb", table "dbo.Regyptian", column 'Strut'.
The statement has been terminated.
(1 row(s) affected)
Strut
-----------
1
10
(2 row(s) affected)
Observe that the data for the first and last INSERT statements survived the transaction, whereas none of the rows from the second INSERT statement were inserted, even though only one of the rows violated the constraint.
As the default behaviour, this is certainly questionable. A programmer who is unaware that execution may continue after an error may unknowingly implement a stored procedure that performs a partial update when an error occurs. Note that for this to happen, you would have to ignore both SET XACT_ABORT ON and TRY-CATCH.
However, it cannot be denied that there is a point in statement-termination, as it permits you to take an alternate action in the few cases where you need this. This was particularly important in SQL 2000 and the days before TRY-CATCH for an action like "roll back and write to an error table". That is difficult to implement if the entire batch is aborted. With TRY-CATCH that action is now possible for all errors you can catch. Statement-termination still has the advantage that it permits you to take an alternate action within the transaction and commit, provided that XACT_ABORT is OFF. This is not a very common scenario, but I have some examples of how to implement this in Part Three.
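To give a taste of the idea, here is a hedged sketch of such an alternate action: an INSERT that falls back to an UPDATE on a primary-key violation, inside a transaction that still commits. The table is made up, and real code should inspect error_number() rather than assuming that any caught error is a PK violation.
SET XACT_ABORT OFF
BEGIN TRANSACTION
BEGIN TRY
   INSERT Willie(keycol, Pimp) VALUES (@keyval, @value)
END TRY
BEGIN CATCH
   -- A PK violation is statement-terminating, so with XACT_ABORT OFF the
   -- transaction is not doomed and we can take the alternate path.
   UPDATE Willie SET Pimp = @value WHERE keycol = @keyval
END CATCH
COMMIT TRANSACTION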
As I mentioned, this is a large class of errors; among them are constraint violations (as in the example above) and many conversion and arithmetic errors.
You may wonder if there is any particular pattern from which you can tell whether a certain error is batch-aborting or statement-terminating, but there isn't. There are some errors that are batch-aborting and doom the transaction because they should. The prime examples are deadlock and running out of database space. In these situations it would be pointless to continue in the same transaction.
But apart from these few examples, there is only one word that can be used to describe the situation: willy-nilly. I could probably write a full article about all the crazy inconsistencies, but I will try to restrain myself to a few examples. Consider:
SET XACT_ABORT OFF
BEGIN TRANSACTION
--SELECT convert(datetime, '201313133')
SELECT convert(datetime, '20131313')
PRINT 'If this prints, error was statement-terminating'
go
PRINT '@@trancount is ' + ltrim(str(@@trancount)) + '.'
IF @@trancount > 0 ROLLBACK TRANSACTION
There are two SELECT statements where we try to convert a badly formed date. If you run the code above, you get:
Msg 242, Level 16, State 3, Line 4
The conversion of a varchar data type to a datetime data type resulted in an out-of-range value.
If this prints, error was statement-terminating
@@trancount is 1.
But if you uncomment the first SELECT, you get:
Msg 241, Level 16, State 1, Line 3
Conversion failed when converting date and/or time from character string.
@@trancount is 0.
An error very similar in nature, and yet this one is batch-aborting. In the previous section I had an example which featured two arithmetic errors:
SELECT @t = sqrt(pi() - @@spid)
SELECT @t = convert(tinyint, pi() - @@spid)
The first aborts the batch, the second only terminates the statement.
The observant reader may already have noticed another inconsistency. If you still have the procedure MotherPeople around, you can try this:
EXEC MotherPeople 'A'
SELECT convert(int, 'A')
PRINT 'Not reached'
The output is:
Msg 8114, Level 16, State 1, Procedure MotherPeople, Line 0
Error converting data type varchar to int.
-----------
Msg 245, Level 16, State 1, Line 2
Conversion failed when converting the varchar value 'A' to data type int.
It is the same error, but different error numbers – and different error behaviours!
Finally, I can't resist sharing this case that had me confused as I was working on an example for this article.
CREATE PROCEDURE GiveMe @your int AS
   PRINT 'Dirty Love'
go
SET XACT_ABORT OFF
BEGIN TRANSACTION
EXEC GiveMe 1, 1
PRINT 'Error did not abort the batch. @@trancount = ' + ltrim(str(@@trancount))
EXEC GiveMe 1
EXEC GiveMe 1, 1
PRINT 'Error did not abort the batch. Or did it?'
go
IF @@trancount = 0
   PRINT 'Transaction has been rolled back!'
ELSE
   ROLLBACK TRANSACTION
go
DROP PROCEDURE GiveMe
As you see, there are two incorrect calls to the stored procedure, with one correct call in between. The output:
Msg 8144, Level 16, State 2, Procedure GiveMe, Line 0
Procedure or function GiveMe has too many arguments specified.
Error did not abort the batch. @@trancount = 1
Dirty Love
Msg 8144, Level 16, State 2, Procedure GiveMe, Line 0
Procedure or function GiveMe has too many arguments specified.
Transaction has been rolled back!
The first call only terminates the statement, and execution continues. But the second call terminates the batch. What is this, is a dice being rolled after all? No, the difference is that on the first invocation there is no plan for the procedure in the cache but the second time there is. Presumably, the same error is raised in different places of the code, and with different exit strategies.
To make it even more fun, this appears to be the only parameter error that can abort the batch. If you change the incorrect call to have no parameters at all, the batch is not aborted, nor is it if you pass a non-existing named parameter like EXEC GiveMe @our = 2. And the most hilarious of them all: change GiveMe to be a parameterless procedure (and change the correct call to GiveMe accordingly). This results in a different error number, and this error does not abort the batch when there is a plan in the cache!
There is a fairly small class of errors that do not terminate anything at all. Even if there is an error, the statement runs to the end without being rolled back, nor is any surrounding transaction affected. What is peculiar is that execution always continues, no matter whether XACT_ABORT is ON or OFF.
While the class is small, it comprises two quite important errors. But before we look at these, let's look at an example that demonstrates the principle:
SET XACT_ABORT ON
CREATE TABLE Montana(MovingTo int NOT NULL)
CREATE TABLE Dental(Floss int NOT NULL)
BEGIN TRANSACTION
DROP TABLE Dental, nosuchtable, Montana
SELECT @@trancount, object_id('Montana'), object_id('Dental')
IF @@trancount > 0 COMMIT TRANSACTION
This script produces:
Msg 3701, Level 11, State 5, Line 5
Cannot drop the table 'nosuchtable', because it does not exist or you do not have permission.
----------- ----------- -----------
1 NULL NULL
For this particular error you could say that the behaviour is practical. You want to get rid of the tables, and you don't care that nosuchtable is already gone. You may wonder what happens if there is a CATCH handler? In that case, Dental is dropped, but not Montana. I have not investigated it very carefully, but it appears that DROP commands that accept a list behave this way. In contrast, DROP TYPE (which does not accept a list) is batch-aborting:
SET XACT_ABORT OFF
go
CREATE TYPE Pygmy FROM int
CREATE TYPE Pony FROM int
go
BEGIN TRANSACTION
DROP TYPE Pygmy
DROP TYPE nosuchtype
DROP TYPE Pony
go
SELECT @@trancount, name FROM sys.types WHERE name IN ('Pygmy', 'Pony')
The SELECT statement lists both Pygmy and Pony. @@trancount is 0. I also tested DROP CERTIFICATE, another DROP command that does not accept a list. Dropping a non-existing certificate proved to belong to the class of statement-terminating errors. That is, execution continues on the next statement, but only if XACT_ABORT is OFF. Ain't SQL Server charming?
As I said, there are two important errors in this category that you need to have an understanding of. The first one is RAISERROR. Here is a variation of the script I used for ;THROW:
SET XACT_ABORT ON
CREATE TABLE Uncle (Remus int NOT NULL)
BEGIN TRANSACTION
INSERT Uncle (Remus) VALUES (-56)
RAISERROR('I''m raising an error!', 16, 1)
PRINT 'We do arrive here'
go
PRINT '@@trancount is ' + ltrim(str(@@trancount)) + '.'
SELECT Remus FROM Uncle
IF @@trancount > 0 ROLLBACK TRANSACTION
DROP TABLE Uncle
The output is:
Msg 50000, Level 16, State 1, Line 5
I'm raising an error!
We do arrive here
@@trancount is 1.
Remus
-----------
-56
That is, the batch is not aborted and the transaction is not rolled back, even though XACT_ABORT is ON. But observe this:
SET XACT_ABORT ON -- OFF
BEGIN TRY
   BEGIN TRANSACTION
   RAISERROR('I''m raising an error', 16, 1)
   COMMIT TRANSACTION
END TRY
BEGIN CATCH
   PRINT CASE xact_state()
            WHEN 0 THEN 'There is no transaction'
            WHEN 1 THEN 'There is a living transaction'
            WHEN -1 THEN 'Doomed transaction'
         END
   IF @@trancount > 0 ROLLBACK TRANSACTION
END CATCH
This prints Doomed transaction. That is, the transaction is always doomed when XACT_ABORT is ON. Change to OFF on the first line, and you get There is a living transaction.
I have previously cautioned against ; THROW and recommended RAISERROR. However, the true story is that with RAISERROR you also need to be careful, and casual use can lead to incorrect behaviour. Consider:
IF (SELECT IsActive FROM Orders WHERE OrderID = @orderid) = 0
   RAISERROR('Order %d is not active.', 16, 1, @orderid)
If you are inside TRY-CATCH, this is safe, as execution will be transferred to the CATCH block. But if you don't use TRY-CATCH, execution continues and your code may make modifications to an inactive order. Thus, in this case you need:
IF NOT EXISTS (SELECT * FROM Orders WHERE OrderID = @orderid)
BEGIN
   -- IF @@trancount > 0 ROLLBACK TRANSACTION
   RAISERROR('No such order %d.', 16, 1, @orderid)
   RETURN 1
END
The important part is RETURN 1; since execution continues with the next statement, you need a way out. If there is a calling stored procedure, that procedure may use TRY-CATCH, but that is nothing you should assume. By returning a non-zero status you indicate that you have failed. As for rolling back – this situation is arguably different from a CATCH handler, where you handle errors that may have occurred just about anywhere. When you perform this particular check, you presumably know whether you have started a transaction at this point or not. But if you have started a transaction, you need to roll it back before you exit.
So with RAISERROR and ;THROW both having questionable behaviour, is there any reason to recommend the one over the other? The main problem, as I see it, is that they are two statements that perform the same task, but with subtle differences. This is bound to cause confusion. RAISERROR was in the product first, and that's why I favour it. When I have spoken to people at Microsoft about why they added ;THROW with parameters, the argument has been that some developers have been trapped by the behaviour of RAISERROR; they have expected RAISERROR to abort execution. (As if they would not expect that for a primary-key violation!) But I don't see how this is mitigated by adding a second command. The inexperienced developer will use one or the other, and probably expect them to behave the same. And even worse, ;THROW introduces new traps that can cause more damage than incorrect use of RAISERROR can do.
The other interesting error in this class is error 266. The text of this error reads: Transaction count after EXECUTE indicates a mismatching number of BEGIN and COMMIT statements. Previous count = %ld, current count = %ld. This error occurs when you exit a stored procedure and @@trancount has a different value from when you entered the procedure. The stored procedure returns anyway, and execution continues on the next statement in the caller if there is no CATCH handler. It is important to have some understanding of error 266, because it often appears as a consequence of other errors. Here is one example:
CREATE TABLE Poodle (Bites int NOT NULL)
go
CREATE PROCEDURE inner_sp @Frenchie int AS
BEGIN TRY
   INSERT Poodle(Bites) VALUES (@Frenchie)
END TRY
BEGIN CATCH
   IF @@trancount > 0 ROLLBACK TRANSACTION
   EXEC error_handler_sp
END CATCH
go
CREATE PROCEDURE outer_sp AS
   BEGIN TRANSACTION
   EXEC inner_sp NULL
go
EXEC outer_sp
go
The output is:
(0 row(s) affected)
Msg 50000, Level 16, State 2, Procedure error_handler_sp, Line 20
*** [inner_sp], Line 3. Errno 515: Cannot insert the value NULL into column 'Bites', table 'tempdb.dbo.Poodle'; column does not allow nulls. INSERT fails.
Msg 266, Level 16, State 2, Procedure inner_sp, Line 0
Transaction count after EXECUTE indicates a mismatching number of BEGIN and COMMIT statements. Previous count = 1, current count = 0.
In this example, error 266 is just noise on the wire. It occurs because the inner procedure rolls back the transaction (which it is obliged to do if the transaction is doomed). If your application presents and logs errors from SQL Server, I would advise you to filter out error 266 as long as there are other errors. To wit, if the only error you get back is error 266, something is really wrong. Look at this silly example:
CREATE PROCEDURE ImTheSlime AS
   BEGIN TRANSACTION
   BEGIN TRANSACTION
   -- Do something here
   COMMIT TRANSACTION
go
EXEC ImTheSlime
PRINT 'Next statement'
ROLLBACK TRANSACTION
Output:
Msg 266, Level 16, State 2, Procedure ImTheSlime, Line 0
Transaction count after EXECUTE indicates a mismatching number of BEGIN and COMMIT statements. Previous count = 0, current count = 1.
Next statement
The message indicates that the procedure is apparently badly coded and needs fixing. Variations on this theme include procedures that have too many COMMIT statements. And not to forget, procedures that roll back without raising any error.
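For instance, this made-up procedure (in the spirit of the examples above) produces error 266 as the only message, since it rolls back the caller's transaction without raising anything itself:
CREATE PROCEDURE ZombieWoof AS
   ROLLBACK TRANSACTION   -- wipes out the caller's transaction, raises no error
go
BEGIN TRANSACTION
EXEC ZombieWoof          -- error 266: Previous count = 1, current count = 0.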
There is one more odd thing about error 266. We have learnt that the errors that doom the transaction are the batch-aborting ones. But look at this:
SET XACT_ABORT OFF
BEGIN TRY
   BEGIN TRANSACTION
   EXEC('BEGIN TRANSACTION')
   COMMIT TRANSACTION
END TRY
BEGIN CATCH
   PRINT CASE xact_state()
            WHEN 0 THEN 'There is no transaction'
            WHEN 1 THEN 'There is a living transaction'
            WHEN -1 THEN 'Doomed transaction'
         END
   IF @@trancount > 0 ROLLBACK TRANSACTION
END CATCH
This prints Doomed transaction. This is not unique for error 266. Dropping a non-existing table is another error in this category that dooms the transaction. (But as noted above, RAISERROR is not.)
Before we move on, I can't escape from mentioning how Books Online describes error 266: they call it an informational error.
Already in Part One, I mentioned that compilation errors can occur at run-time due to deferred name resolution. In this section I will look more closely into how compilation errors occurring at run-time are handled in SQL Server.
It may not be an entirely accurate description, but for the sake of this article I will say that compilation in SQL Server consists of three phases: 1) Parsing. 2) Binding. 3) Optimisation. In the parsing phase the code is transformed to a syntax tree, but no metadata is accessed. Binding is the process where object names in the code are mapped to real objects. Finally, optimisation is the construction of the query plan. When you create a stored procedure, SQL Server only performs the first two phases. Optimisation does not take place until you invoke the procedure for the first time. Run-time errors can occur in all three phases. Of these, the most difficult ones to tackle are the binding errors.
Syntax errors at run-time mainly occur when you use dynamic SQL and construct it wrongly. To a great extent, producing a syntax error when you create your dynamic SQL is just like any other run-time error like division by zero or a PK violation. Except that there is one twist, as illustrated by this example:
SET XACT_ABORT ON
go
PRINT 'Starting'
EXEC ('SELECT')
PRINT 'Stopping'
The output is:
Starting
Msg 102, Level 15, State 1, Line 1
Incorrect syntax near 'SELECT'.
Stopping
That is, syntax errors behave like the errors in the previous section: the batch is not aborted, no matter the setting of XACT_ABORT. As long as you use TRY-CATCH this is not a big deal, but if you for whatever reason do not want to use TRY-CATCH and only rely on SET XACT_ABORT ON, you need to be careful when you use dynamic SQL. Proof-read the code that builds your dynamic SQL carefully, and always use quotename() around object names so that a special character in a table name does not wreck your code. And if you want to be really sure, use old-style error handling with @@error.
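A minimal sketch of the quotename() advice; the table name is my own assumption and must exist for the EXEC to succeed:
DECLARE @tbl sysname = 'Order Details',    -- a name that needs protection
        @sql nvarchar(MAX)
SELECT @sql = 'SELECT COUNT(*) FROM dbo.' + quotename(@tbl)
PRINT @sql    -- SELECT COUNT(*) FROM dbo.[Order Details]
EXEC (@sql)   -- without quotename(), the reserved word Order would have
              -- given you a syntax error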
When it comes to dooming the transaction, syntax errors are "normal". That is, they do not doom the transaction when XACT_ABORT is OFF.
Note: Even if you don't engage in dynamic SQL, you can still get syntax errors at run-time, although only in two particular scenarios: you upgrade to a new version of SQL Server, or you change the compatibility level of the database. These changes could cause a procedure that previously was functional to fail with a syntax error, because of use of deprecated syntax that has been discontinued or because of differences in reserved keywords. You can largely avoid such syntax errors from occurring in a live system by testing. For instance, you could invoke all stored procedures, user-defined functions and triggers directly. You may think that you need to find the correct parameter list for all procedures to do this, but SQL Server will find the syntax error before it complains about any missing parameters. Another approach is to build your database schema on the new version/compatibility level from scripts.
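For the first approach, something along these lines could generate the calls for the stored procedures (a sketch only; you would run the output in a test database and sift out the uninteresting errors about missing parameters):
SELECT 'EXEC ' + quotename(schema_name(p.schema_id)) + '.' + quotename(p.name)
FROM   sys.procedures p
ORDER  BY p.name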
Imagine that you compile your C# file, and in a method you have:
NoSuchClass obj;
Imagine further that the C# compiler would be silent about the missing class when you compiled the program, and not until you came to this line in the code, C# would throw an exception. Does it sound completely corny to you? Welcome to the world of SQL Server where this can happen.
SQL Server has a feature known as deferred name resolution (DNR). When you create a stored procedure and there is a query that refers to a non-existing table, SQL Server abandons binding of that query without raising any error. When you invoke the procedure, SQL Server starts building a query plan, but abandons both binding and optimisation of the query with the missing table, again without raising an error. When execution reaches the statement, SQL Server makes yet an attempt to compile the statement, and if the table is still missing at this point, SQL Server finally raises an error for the missing table.
The arrangement is intended to cater for temp tables created inside the procedure, but the side effect is that if you misspell the name of a permanent table, you are not told until you run the stored procedure. While the behaviour is crazy, I will take it for a fact in this article. But if you want to read more about the absurdity as such and how it worked in SQL Server 6.5 before DNR was introduced, see my article on SET STRICT_CHECKS ON, a proposal to make SQL Server a much more robust programming environment.
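To illustrate the case DNR actually is there for, here is a sketch of a procedure (the name is made up) where the SELECT simply cannot be bound when the procedure is created, because the temp table does not yet exist:
CREATE PROCEDURE DnrDemo AS
   CREATE TABLE #temp (a int NOT NULL)
   INSERT #temp (a) VALUES (1)
   SELECT a FROM #temp   -- bound only when execution reaches this statement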
Thus, here we are not going to upset ourselves over DNR as such. We are, however, going to upset ourselves over how run-time errors due to DNR are handled in SQL Server. Because it is simply, well, criminal. Look at what happens when there is no safety net at all, that is, no TRY-CATCH and XACT_ABORT is OFF:
CREATE PROCEDURE inner_sp AS
   BEGIN TRANSACTION
   PRINT 'inner_sp starting'
   SELECT col FROM doesnotexist
   PRINT 'This does not print'
   COMMIT TRANSACTION
go
CREATE PROCEDURE outer_sp AS
   PRINT 'outer_sp entry: @@trancount is ' + ltrim(str(@@trancount)) + '.'
   EXEC inner_sp
   PRINT 'outer_sp exit: @@trancount is ' + ltrim(str(@@trancount)) + '.'
   IF @@trancount > 0 ROLLBACK TRANSACTION
go
SET XACT_ABORT OFF
EXEC outer_sp
go
DROP PROCEDURE outer_sp, inner_sp
The output is:
outer_sp entry: @@trancount is 0.
inner_sp starting
Msg 208, Level 16, State 1, Procedure inner_sp, Line 4
Invalid object name 'doesnotexist'.
Msg 266, Level 16, State 2, Procedure inner_sp, Line 4
Transaction count after EXECUTE indicates a mismatching number of BEGIN and COMMIT statements. Previous count = 0, current count = 1.
outer_sp exit: @@trancount is 1.
That is, when the statement with the missing table is reached and the error is raised, execution continues with the next statement in the caller. Did you ever see behaviour like that in any environment? Full stop of execution? Yes. Continuing on next statement and setting an error variable? Yes. Exception handling? Yes. But aborting the current scope and continuing in the caller? No, I've never seen that outside SQL Server. The ugliness does not stop there. Note that inner_sp starts a transaction, but that transaction is not rolled back. How likely is it that the caller is aware of that?
TRY-CATCH to the rescue! Or? Let's try it:
CREATE PROCEDURE inner_sp AS
BEGIN TRY
   BEGIN TRANSACTION
   PRINT 'inner_sp starting'
   SELECT col FROM doesnotexist
   PRINT 'This does not print'
   COMMIT TRANSACTION
END TRY
BEGIN CATCH
   IF @@trancount > 0 ROLLBACK TRANSACTION
   PRINT 'Let''s catch the error: ' + error_message()
END CATCH
go
CREATE PROCEDURE outer_sp AS
   PRINT 'outer_sp entry: @@trancount is ' + ltrim(str(@@trancount)) + '.'
   EXEC inner_sp
   PRINT 'outer_sp exit: @@trancount is ' + ltrim(str(@@trancount)) + '.'
   IF @@trancount > 0 ROLLBACK TRANSACTION
go
SET XACT_ABORT OFF
EXEC outer_sp
go
DROP PROCEDURE outer_sp, inner_sp
Does the output change from the above? Yes:
outer_sp entry: @@trancount is 0.
inner_sp starting
Msg 208, Level 16, State 1, Procedure inner_sp, Line 5
Invalid object name 'doesnotexist'.
Msg 266, Level 16, State 2, Procedure inner_sp, Line 5
Transaction count after EXECUTE indicates a mismatching number of BEGIN and COMMIT statements. Previous count = 0, current count = 1.
outer_sp exit: @@trancount is 1.
The line numbers in the error messages changed. But that was all. The error was not caught in the CATCH handler, despite the fact that the error occurred inside the TRY block. So in total there are three really nasty things with binding errors: 1) The scope where the error occurs is aborted, and execution continues with the next statement in the caller. 2) Any open transaction is not rolled back. 3) The error cannot be caught in the scope where it occurs.
The first two points are easy to address: SET XACT_ABORT ON. With this setting, binding errors abort the batch and roll back the transaction. Change any of the scripts above to use this setting and the output becomes:
outer_sp entry: @@trancount is 0.
inner_sp starting
Msg 208, Level 16, State 1, Procedure inner_sp, Line 5
Invalid object name 'doesnotexist'.
This is actually a small feat in itself, because normally Microsoft is very cautious about backwards compatibility. This was a change from SQL 2000 to SQL 2005 – in SQL 2000, binding errors were not affected by XACT_ABORT.
The last point is more difficult, though. As I discussed already in Part One, binding errors are not entirely uncatchable; you can catch them in an outer scope:
CREATE PROCEDURE inner_sp AS
BEGIN TRY
   BEGIN TRANSACTION
   PRINT 'inner_sp starting'
   SELECT col FROM doesnotexist
   PRINT 'This does not print'
   COMMIT TRANSACTION
END TRY
BEGIN CATCH
   IF @@trancount > 0 ROLLBACK TRANSACTION
   PRINT 'Let''s catch the error: ' + error_message()
END CATCH
go
CREATE PROCEDURE outer_sp AS
BEGIN TRY
   PRINT 'outer_sp entry: @@trancount is ' + ltrim(str(@@trancount)) + '.'
   EXEC inner_sp
   PRINT 'outer_sp exit: @@trancount is ' + ltrim(str(@@trancount)) + '.'
END TRY
BEGIN CATCH
   PRINT 'outer_sp caught you: ' + error_message()
   IF @@trancount > 0 ROLLBACK TRANSACTION
END CATCH
go
SET XACT_ABORT OFF
EXEC outer_sp
go
DROP PROCEDURE outer_sp, inner_sp
The output from this script is:
outer_sp entry: @@trancount is 0.
inner_sp starting
outer_sp caught you: Invalid object name 'doesnotexist'.
But if you call a stored procedure from the client, there is no outer scope. And, well, if all you want to do is to get out of the procedure and communicate the error, you are good as long as you have SET XACT_ABORT ON in your procedure. The error is sent to the client, the transaction is rolled back, and execution does not continue. But if you wanted to log the error with SqlEventLog as I do in Part Three, you lose.
When TRY-CATCH first appeared in SQL 2005, I was so happy over the great improvement that I could accept the restriction. But two or three releases later? No. I was not the first to react. Carlos Fatureto submitted a bug on the old Connect site for the issue in October 2009. Microsoft closed this as Won't Fix, suggesting that this is not a high-priority item. Which is kind of difficult to take seriously, given the time they wasted^H^H^H^H^H^Hspent on ;THROW. If you ask me, being able to reliably catch all errors is much more important than being able to reraise errors cleanly. After all, what benefit do you have from being able to reraise if you can't catch in the first place? Since Carlos's suggestion suffered such a grim fate, I submitted a second Connect item myself. Guess what happened? By Design. Obviously, Microsoft do not want error handling to be too simple. With the demise of Connect and the move to UserVoice, I decided to give it a new try and created TRY-CATCH should always work. Votes are more than welcome! (Looking at it in December 2020, I see that it has been Under Review for soon three years. Well, at least it is not closed!)
Note: there are two situations when a missing table is a run-time error that can be caught. One is when the table is on a linked server and has been dropped since the plan was compiled; this is quite logical, and I cover it further in the appendix on linked servers. The other situation is quite odd, and I discuss it in the section Catching a Missing Table in the Same Scope in the chapter Odd Situations.
Now, you may argue that this is not a big deal, because errors like misspelled table names should be caught early in the test cycle. I disagree on several accounts. Consider this example:
CREATE TABLE IncaRoads (keycol  int NOT NULL PRIMARY KEY,
                        Vehicle int NOT NULL)
go
CREATE PROCEDURE inner_sp AS
   CREATE TABLE #ChestersThing (keycol int NOT NULL PRIMARY KEY,
                                OnRuth int NOT NULL)
   BEGIN TRANSACTION
   PRINT 'inner_sp starting'
   SELECT Vehicle, OnRuth
   FROM   IncaRoads IR
   JOIN   #ChestersThing CT ON CT.keycol = IR.keycol
   PRINT 'inner_sp stopping'
   COMMIT TRANSACTION
go
CREATE PROCEDURE outer_sp AS
   PRINT 'outer_sp entry: @@trancount is ' + ltrim(str(@@trancount)) + '.'
   EXEC inner_sp
   PRINT 'outer_sp exit: @@trancount is ' + ltrim(str(@@trancount)) + '.'
   IF @@trancount > 0 ROLLBACK TRANSACTION
go
SET XACT_ABORT OFF
EXEC outer_sp
This code runs successfully in test and QA and is deployed to production:
outer_sp entry: @@trancount is 0.
inner_sp starting
Vehicle OnRuth
----------- -----------
(0 row(s) affected)
inner_sp stopping
outer_sp exit: @@trancount is 0.
Unfortunately, there is no code review. If you look at the code, you see a case of bad practice: in a multi-table query, all columns should be prefixed with the table name or an alias, not only the ones needed to resolve the current ambiguity. Some time later, due to new needs, the DBA makes a change to IncaRoads:
ALTER TABLE IncaRoads ADD OnRuth datetime
Now when outer_sp is executed, this happens:
outer_sp entry: @@trancount is 0.
inner_sp starting
Msg 209, Level 16, State 1, Procedure inner_sp, Line 7
Ambiguous column name 'OnRuth'.
Msg 266, Level 16, State 2, Procedure inner_sp, Line 7
Transaction count after EXECUTE indicates a mismatching number of BEGIN and COMMIT statements. Previous count = 0, current count = 1.
outer_sp exit: @@trancount is 1.
Because of DNR, the ambiguity is not uncovered until the SELECT statement is reached.
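Had the programmer followed the prefixing practice from the start, the ALTER TABLE would have been harmless, since the query leaves no room for ambiguity:
SELECT IR.Vehicle, CT.OnRuth
FROM   IncaRoads IR
JOIN   #ChestersThing CT ON CT.keycol = IR.keycol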
So I maintain that binding errors certainly can appear at run-time even with good testing, although they may be less common than constraint violations or conversion errors.
Note: In a system I have worked with for many years, we rarely have problems with binding errors. This is because we load our stored procedures with AbaPerls, which goes to great lengths to nullify the effect of deferred name resolution. AbaPerls checks for missing tables and creates all temp tables defined in the procedure before loading the procedure itself. AbaPerls may not be for everyone, but I have made it available at http://www.sommarskog.se/AbaPerls/index.html.
Errors may also occur in the optimisation phase, and because of deferred name resolution, you may get optimisation errors in the middle of executing a stored procedure as well. They are less problematic for two reasons: 1) They are less common. 2) For some of them the behaviour is a little more civilised. Here is one example:
CREATE PROCEDURE Sofa AS
BEGIN TRY
   CREATE TABLE #2 (a int NOT NULL)
   BEGIN TRANSACTION
   PRINT 'Und du bist mein Sofa'
   SELECT COUNT(*) FROM #2 a CROSS JOIN #2 b OPTION (HASH JOIN)
   PRINT 'This does not print'
   COMMIT TRANSACTION
END TRY
BEGIN CATCH
   PRINT 'Nor does this print.'
END CATCH
go
SET XACT_ABORT OFF
EXEC Sofa
PRINT 'Nach der Sofa'
go
PRINT '@@trancount is ' + ltrim(str(@@trancount)) + '.'
IF @@trancount > 0 ROLLBACK TRANSACTION
go
DROP PROCEDURE Sofa
The output is:
Und du bist mein Sofa
Msg 8622, Level 16, State 1, Procedure Sofa, Line 6
Query processor could not produce a query plan because of the hints defined in this query. Resubmit the query without specifying any hints and without using SET FORCEPLAN.
@@trancount is 0.
What we see here is that, just like binding errors, you cannot catch this error in the scope where the error occurs, but only in outer scopes. However, unlike binding errors, this error aborts the batch and rolls back the transaction (and dooms the transaction in a CATCH block), even if XACT_ABORT is OFF.
In this particular example I used an extremely stupid query hint to provoke the error. There are a few more situations where the query processor raises the white flag for whatever reason and aborts the batch and rolls back the transaction as a consequence, but they are not very common. A more common optimisation error may be that an index mentioned in a hint is missing:
CREATE TABLE Evelyn(arf int NOT NULL)
go
CREATE PROCEDURE ModifiedDog AS
BEGIN TRY
   CREATE TABLE #Steinway(a int NOT NULL)
   BEGIN TRANSACTION
   PRINT 'Pondering the significance of short-person behaviour'
   SELECT *
   FROM   Evelyn WITH (INDEX = doily_fringe)
   CROSS  JOIN #Steinway
   PRINT 'This does not print'
   COMMIT TRANSACTION
END TRY
BEGIN CATCH
   PRINT 'Nor does this print.'
END CATCH
go
SET XACT_ABORT OFF
EXEC ModifiedDog
PRINT 'After ModifiedDog'
go
PRINT '@@trancount is ' + ltrim(str(@@trancount)) + '.'
IF @@trancount > 0 ROLLBACK TRANSACTION
go
DROP PROCEDURE ModifiedDog
DROP TABLE Evelyn
This results in:
Pondering the significance of short-person behaviour
Msg 308, Level 16, State 1, Procedure ModifiedDog, Line 6
Index 'doily_fringe' on table 'Evelyn' (specified in the FROM clause) does not exist.
Msg 266, Level 16, State 2, Procedure ModifiedDog, Line 6
Transaction count after EXECUTE indicates a mismatching number of BEGIN and COMMIT statements. Previous count = 0, current count = 1.
After ModifiedDog
@@trancount is 1.
Here the behaviour is no different from proper binding errors. That is, the scope is aborted and the transaction is not rolled back when XACT_ABORT is OFF.
If you comment out the line CROSS JOIN #Steinway, the output becomes:
Msg 308, Level 16, State 1, Procedure ModifiedDog, Line 6
Index 'doily_fringe' on table 'Evelyn' (specified in the FROM clause) does not exist.
After ModifiedDog
@@trancount is 0.
Without the temp table, there is no deferred name resolution, and the error occurs directly when the procedure is invoked and the optimiser attempts to build the plan, which is why the procedure never starts executing.
The TDS protocol permits the client to send an Attention Signal to SQL Server which requests SQL Server to stop running the current batch. SQL Server always rolls back the current statement before terminating execution. If there is an open user transaction, SQL Server rolls back the transaction if XACT_ABORT is ON. If this setting is OFF, SQL Server will leave the transaction open, even if the transaction was started in the batch.
Run this from a window in SSMS:
SET XACT_ABORT OFF
CREATE TABLE PoJama (People int NOT NULL)
BEGIN TRANSACTION
INSERT PoJama (People) VALUES (4)
WAITFOR DELAY '00:01:00'
While the WAITFOR statement is running, press the red button in SSMS to cancel the query. This sends an attention signal to SQL Server. Next run this:
SELECT @@trancount AS [@@trancount], People FROM PoJama
IF @@trancount > 0 ROLLBACK TRANSACTION
DROP TABLE PoJama
The output is:
@@trancount People
----------- -----------
1 4
As you see the transaction is still active. Change the first batch to start with SET XACT_ABORT ON, run it again and cancel it anew. Then rerun the second batch. This time the output is:
@@trancount People
----------- -----------
0 NULL
The transaction was rolled back this time.
This may seem like a trivial exercise, but there is a very serious implication. I don't know how often you press that red button, but I certainly do from time to time for various reasons (a query never seems to complete, I realise that I have fired off the wrong query, etc.). If you run with SET XACT_ABORT OFF in your query window, you may not observe that you are still in a transaction. You will continue to do things in your query window, and if you are on your dev box, staying in the same query window, you may not notice anything for quite some time. At some point you run into a real batch-aborting error, whereupon everything is rolled back. When your changes are no longer there, you are mightily confused. Consider next that you are on a production server. If you press the red button in the middle of a transaction without rolling back, you can cause quite an outage, as your locks block other users.
Pressing the red button in SSMS is not the only way to produce attention signals. In a .NET application, you can use the Cancel method of the SqlCommand object to cancel execution. Other APIs expose similar methods. As far as I know, explicitly cancelling batches is not very common in applications. However, APIs can produce attention signals without being explicitly asked to. Ever seen the message Timeout expired? This is not a message from SQL Server; the timeout is in the client API. For reasons entirely unclear to me, most client APIs have a default timeout of 30 seconds, meaning that if SQL Server has not returned any data within this period, they send an attention signal to SQL Server to cancel execution and throw this error.
If you are not aware of this default timeout and its consequences, it can wreak havoc in your system. Say that you have a stored procedure that processes some table one row and one transaction at a time. This procedure is invoked from a service every 20 minutes or so. Since it runs in the background, it's not a big deal that it runs for twenty seconds. As the volume in the system grows, the procedure runs a little longer: 25, 27, 28, 29 seconds, and one day – wham! 30 seconds and Timeout expired, right when the stored procedure is in the middle of a transaction. The programmer who wrote the service was not aware of query timeouts and their repercussions, and the service takes no particular action. The service stays connected, and now you have an orphaned transaction holding locks. Maybe the service retries the procedure, whereupon @@trancount will bounce back and forth between 1 and 2, but it never goes back to 0. The process keeps its locks, and as more and more processes are blocked, the system grinds to a standstill.
There are two precautions to avoid this misery: always put SET XACT_ABORT ON in your stored procedures, and in your client code react to the timeout error (as well as to any SQL Server error, for that matter) by submitting:
IF @@trancount > 0 ROLLBACK TRANSACTION
But it does not really stop there. Ask yourself: do you need these timeouts? I suggest that unless you have an explicit requirement that queries must not run for more than a certain amount of time, set the command timeout to 0 (which means wait forever). And if you want to have a timeout, make a judicious decision of what is a good timeout. 30 seconds can be far too short – or be ridiculously long. Or to be blunt: that default of 30 seconds is a hole in the head.
You may ask if you can trap an attention signal in your T‑SQL code, and the answer is: no, you can't.
Note: You can encounter timeouts in SQL Server itself. The command SET LOCK_TIMEOUT permits you to configure how long (in milliseconds) you are prepared to wait for a lock. The default is -1, which means "wait forever". If a lock timeout elapses, SQL Server raises error 1222, Lock request time out period exceeded. This is a normal statement-terminating error that you can trap with TRY-CATCH. In SQL Server, there are also timeouts when accessing a linked server. I discuss this in Appendix 1.
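As a quick illustration, here is a minimal sketch of trapping error 1222. The table name is a placeholder of mine, and you would need a second session holding a conflicting lock to actually see the timeout fire:

SET LOCK_TIMEOUT 5000              -- give up on lock waits after five seconds
BEGIN TRY
   SELECT SomeCol FROM SomeTable   -- assume this is blocked by another session
END TRY
BEGIN CATCH
   IF error_number() = 1222
      PRINT 'Lock request timed out. Handle or retry as appropriate.'
END CATCH
SET LOCK_TIMEOUT -1                -- back to "wait forever"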
Informational messages are messages with severity 0 to 9. They are always passed to the client, and you cannot trap them in SQL Server with TRY-CATCH, nor can you get hold of them with error_message() & co. They don't affect the execution flow in your stored procedure.
The most common informational message is probably the message you get when an aggregate sees a NULL value.
CREATE TABLE SanBerdino (zulch int NULL)
INSERT SanBerdino (zulch) VALUES (1)
INSERT SanBerdino (zulch) VALUES (NULL)
SELECT SUM(zulch) FROM SanBerdino
This produces the message:

Warning: Null value is eliminated by an aggregate or other SET operation.

Apparently this message is mandated by ANSI compliance. It is not produced when ANSI_WARNINGS is OFF (which is a legacy setting). Personally, I consider this message to be white noise, but it can be a nuisance.
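You can easily verify that the message is tied to ANSI_WARNINGS, using the table from the example above (but as I discuss later in this article, you should not run with this setting OFF):

SET ANSI_WARNINGS OFF
SELECT SUM(zulch) FROM SanBerdino   -- no warning this time
SET ANSI_WARNINGS ON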
There certainly are informational messages that are meaningful and worth paying attention to, for instance:
CREATE TABLE DarkGreen
   (Air    nvarchar(200) NOT NULL,
    Choke  nvarchar(200) NOT NULL,
    AllDay nvarchar(200) NOT NULL,
    CONSTRAINT pk_DarkGreen PRIMARY KEY(Air, Choke, AllDay)
)
The maximum permitted length for an index key is 900 bytes, whence this warning:
Warning! The maximum key length is 900 bytes. The index 'pk_DarkGreen' has maximum length of 1200 bytes. For some combination of large values, the insert/update operation will fail.
There is one more noise message that is worth a short moment of attention, since it always comes together with error messages. Consider:
CREATE TABLE Florentine (Pogen int NOT NULL)
INSERT Florentine (Pogen) VALUES (NULL)
This results in:
Msg 515, Level 16, State 2, Line 2
Cannot insert the value NULL into column 'Pogen', table 'tempdb.dbo.Florentine'; column does not allow nulls. INSERT fails.
The statement has been terminated.
The last line The statement has been terminated is a separate message (with message number 3621) and it accompanies many errors. Maybe way back in SQL Server 1.0 it always indicated that the error was only statement-terminating and not batch-aborting. Today, this correlation is weak at best. Not all statement-terminating errors are accompanied by this message, while many DDL errors that are batch-aborting are accompanied by message 3621.
This is a small set of errors, or at least I hope so. In my opinion they are all anomalies, or to put it more bluntly: bugs. But when you point them out to Microsoft, they shrug their shoulders and seem to have no interest in fixing the issues.
The most prominent such example is DBCC CHECKDB. For this demo, you need a corrupt database. In case you don't have one around (and why would you?), you can go to SQL Server MVP Paul Randal's blog and download a couple to see this live. This batch:
DBCC CHECKDB (CorruptDataPurity) WITH NO_INFOMSGS
PRINT '@@error is ' + ltrim(str(@@error)) + '.'
Produces this output:
Msg 2570, Level 16, State 3, Line 1
Page (1:24473), slot 91 in object ID 421576540, index ID 1, partition ID 72057594039697408, alloc unit ID 72057594044809216 (type "In-row data"). Column "Price" value is out of range for data type "float". Update column to a legal value.
CHECKDB found 0 allocation errors and 1 consistency errors in table 'Products' (object ID 421576540).
CHECKDB found 0 allocation errors and 1 consistency errors in database 'CorruptDataPurity'.
@@error is 2570.
If you play with SET XACT_ABORT ON, you will find that the final PRINT statement is executed even if XACT_ABORT is ON.
Now, add TRY-CATCH around it:
BEGIN TRY
   DBCC CHECKDB (CorruptDataPurity) WITH NO_INFOMSGS
   PRINT '@@error is ' + ltrim(str(@@error)) + '.'
END TRY
BEGIN CATCH
   PRINT 'This does not print'
END CATCH
This results in:
CHECKDB found 0 allocation errors and 1 consistency errors in table 'Products' (object ID 421576540).
CHECKDB found 0 allocation errors and 1 consistency errors in database 'CorruptDataPurity'.
@@error is 0.
The CATCH handler is not activated, but the error is caught somewhere – and swallowed never to be seen again.
This certainly is an ugly bunch, because if you cannot rely on TRY-CATCH, you are on very unsafe ground. For the issue with DBCC, I found a Connect item (which due to the demise of Connect is no longer accessible) filed by SQL Server MVP Tony Rogerson in March 2008. About a year later, Microsoft responded and said that they might fix it in the next major version, but then they closed it as Won't Fix, and the outcome is the same in SQL 2014, so it hasn't happened yet. What is more worrisome is that they say that there are other errors with this behaviour – but they don't say which.
With DBCC this is somewhat acceptable, because it is a well-defined context, but in what other situations can this happen? I cover those I know of in the chapter Odd Situations and also in some of the appendixes.
What these errors abort and roll back, and whether they respect XACT_ABORT ON, differs from error to error; there is no pattern that applies to all of them.
You cannot use TRY-CATCH in functions, as the construction is considered side-effecting. This is not that big a deal, since you can always use TRY-CATCH in the stored procedure that invokes the function.
What is a little more of a nuisance is that error messages are not reported to have occurred in the function, but in the code that called the function, as shown in this example:
CREATE FUNCTION Orange (@County float) RETURNS float
BEGIN
   DECLARE @LumberTruck float
   SELECT @LumberTruck = 8 / @County
   RETURN @LumberTruck
END
go
CREATE FUNCTION Make (@a float) RETURNS @Jazz TABLE (NoiseHere float NOT NULL) AS
BEGIN
   INSERT @Jazz (NoiseHere) SELECT 8/nullif(@a, 0)
   RETURN
END
go
CREATE PROCEDURE DupreesParadise @a float, @b float AS
   SELECT dbo.Orange(@a) FROM dbo.Make(@b)
go
SET XACT_ABORT OFF
EXEC DupreesParadise 0, 9
go
EXEC DupreesParadise 9, 0
The output is:
----------------------
Msg 8134, Level 16, State 1, Procedure DupreesParadise, Line 2
Divide by zero error encountered.
----------------------
Msg 515, Level 16, State 2, Procedure DupreesParadise, Line 2
Cannot insert the value NULL into column 'NoiseHere', table '@Jazz'; column does not allow nulls. INSERT fails.
This also affects what error_procedure() and error_line() report. I once filed a bug about this, but Microsoft closed it as Won't Fix. Maybe I should have pointed out that this is a regression from SQL 2000, where the function names and line numbers were reported correctly.
There is one exception. If you call a scalar function with EXEC syntax, the error is reported correctly. That is, you can try:
DECLARE @val float
EXEC @val = dbo.Orange 0
SELECT @val
And the output becomes:
Msg 8134, Level 16, State 1, Procedure Orange, Line 4
Divide by zero error encountered.
Yet another challenge in user-defined functions is that you cannot use RAISERROR or ;THROW to raise errors, as they are considered side-effecting. It is certainly advisable to perform validation of the input parameters of a user-defined function, but if there is no way to raise an explicit error, what can you do? One way is to execute a statement you know will produce an error, but the error message is unlikely to be helpful to the user. Indeed, when I played with Change Data Capture during the beta of SQL 2008 and used one of the CDC functions, I got an error message that did not make sense to me. But when I reported it as a bug, I was told this was the best they could do because of this limitation.
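To make the idea concrete, here is a minimal sketch of that workaround; the function is my own invention. The trick relies on the fact that the message for conversion error 245 quotes the offending string, so the caller at least gets a hint of what is wrong:

CREATE FUNCTION StrictReciprocal (@value float) RETURNS float AS
BEGIN
   DECLARE @forced_error int
   IF @value = 0
      -- Deliberately provoke conversion error 245. Its message quotes the
      -- string below, which serves as a makeshift error text.
      SET @forced_error = convert(int, 'Error: @value must not be zero')
   RETURN 1 / @value
END

A call like SELECT dbo.StrictReciprocal(0) dies with Conversion failed when converting the varchar value 'Error: @value must not be zero' to data type int. Cryptic, but better than a silent NULL.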
The best advice I can give in this area is to write functions that are so simple that error checking is not needed. It is beyond the scope of this article, but I find that some developers are a bit too keen on using functions in SQL Server. Functions in SQL Server are generally quite restricted in what they can do. And most of all, be careful about putting data access in scalar functions, because that can be a serious performance killer when they are called repeatedly in a multi-row query.
When you are in a trigger, the rules change in some regards. Here I will only summarise what is specific to triggers, without giving any examples. Instead, you will find a more detailed discussion of these points, together with examples and recommendations on how to write triggers, in the chapter Error and Transaction Handling in Triggers in Part Three. I have already mentioned some of the points below, but I repeat them for the sake of completeness.
When a procedure is executed as part of an INSERT statement, this introduces some changes in behaviour. As I mentioned earlier, even if there is no user-defined transaction, the procedure executes in the context of a system transaction defined by the INSERT statement, just like triggers do, with the same caveat for xact_state() on SQL 2005 and SQL 2008.
What is peculiar to INSERT-EXEC is that you are not permitted to use ROLLBACK TRANSACTION inside the procedure; attempting to do so produces an error. Here is an example:
CREATE TABLE CatholicGirls (Mary float NOT NULL)
go
CREATE PROCEDURE TelefunkenU47 @data int AS
BEGIN TRY
   SELECT 1 / @data
END TRY
BEGIN CATCH
   PRINT '@@trancount in CATCH block is ' + ltrim(str(@@trancount)) + '.'
   IF @@trancount > 0 ROLLBACK TRANSACTION
   EXEC error_handler_sp
END CATCH
go
SET XACT_ABORT OFF
INSERT CatholicGirls (Mary)
   EXEC TelefunkenU47 0
PRINT 'This does not print'
go
PRINT '@@trancount after error is ' + ltrim(str(@@trancount)) + '.'
SELECT Mary FROM CatholicGirls
go
DROP TABLE CatholicGirls
DROP PROCEDURE TelefunkenU47
The output is:
@@trancount in CATCH block is 1.
Msg 3915, Level 16, State 0, Procedure TelefunkenU47, Line 7
Cannot use the ROLLBACK statement within an INSERT-EXEC statement.
@@trancount after error is 0.
Mary
----------------------
As seen from the example, error 3915 aborts the batch and rolls back the transaction – so the transaction was rolled back as desired. But the original error is lost, and losing the original error message is a serious impediment when troubleshooting. All you know is that something went wrong, but you have no idea of what.
What precautions can you take to avoid this? Not many. Obviously, if you know that you are writing a stored procedure that is to be used with INSERT-EXEC, you can skip the rollback in the CATCH handler. Then again, if you know beforehand that you need to share data between two procedures, there are better methods than INSERT-EXEC, as I discuss in my article How to Share Data Between Stored Procedures. As I see it, the main use for INSERT-EXEC is when you want to capture the output of a stored procedure that is not within your realm to change. And here lies the real problem: there is no reasonable way to tell that your stored procedure has been called as part of an INSERT statement.
Note that I said "reasonable". If you are dead set on it, you can run an INSERT-EXEC inside a local TRY-CATCH, and if you get error 8164, An INSERT EXEC statement cannot be nested, you were. Here is an example of how you could do it. Note that here we have a TRY-CATCH handler nested inside the regular CATCH handler.
CREATE TABLE CatholicGirls (Mary float NOT NULL)
go
CREATE PROCEDURE TelefunkenU47 @data int AS
BEGIN TRY
   SELECT 1 / @data
END TRY
BEGIN CATCH
   DECLARE @in_insert_exec bit
   SELECT @in_insert_exec = 0
   DECLARE @t TABLE (a int NOT NULL)
   BEGIN TRY
      INSERT @t(a) EXEC('SELECT 1')
   END TRY
   BEGIN CATCH
      IF error_number() = 8164 SELECT @in_insert_exec = 1
   END CATCH
   IF @@trancount > 0 AND @in_insert_exec = 0 ROLLBACK TRANSACTION
   EXEC error_handler_sp
END CATCH
go
SET XACT_ABORT OFF
SET NOCOUNT ON
INSERT CatholicGirls (Mary)
   EXEC TelefunkenU47 0
PRINT '@@trancount after error is ' + ltrim(str(@@trancount)) + '.'
SELECT Mary FROM CatholicGirls
go
PRINT 'Running TelefunkenU47 directly'
BEGIN TRANSACTION
EXEC TelefunkenU47 0
IF @@trancount > 0 ROLLBACK TRANSACTION
go
DROP TABLE CatholicGirls
DROP PROCEDURE TelefunkenU47
The output is:
Msg 50000, Level 16, State 1, Procedure error_handler_sp, Line 20
*** [TelefunkenU47], Line 3. Errno 8134: Divide by zero error encountered.
@@trancount after error is 0.
Mary
----------------------
Running TelefunkenU47 directly
-----------
Msg 50000, Level 16, State 1, Procedure error_handler_sp, Line 20
*** [TelefunkenU47], Line 3. Errno 8134: Divide by zero error encountered.
Msg 266, Level 16, State 2, Procedure TelefunkenU47, Line 0
Transaction count after EXECUTE indicates that a COMMIT or ROLLBACK TRANSACTION statement is missing. Previous count = 1, current count = 0.
This time the original error message is retained and not dropped on the floor. I included a direct call to TelefunkenU47 to show that the CATCH handler performs the normal job in this case. It goes without saying that you need to be very paranoid to put this in every stored procedure that you write.
The script above runs with SET XACT_ABORT OFF. If you change it to ON, this adds error 3930, The current transaction cannot be committed and cannot support operations that write to the log file. Roll back the transaction, to the output. SQL Server has left us in a damned-if-you-do-damned-if-you-don't situation. The transaction is doomed, so it must be rolled back, but we are not permitted to roll it back. How sweet!
There is no problem with using COMMIT TRANSACTION in a procedure called through INSERT-EXEC, as long as the COMMIT is preceded by a matching BEGIN TRANSACTION, that is, you are only bouncing @@trancount up and down. If you have a lone COMMIT TRANSACTION that attempts to commit the system transaction defined by the INSERT statement, this is the same as with ROLLBACK. That is, it is not permitted and you get an error message.
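For clarity, here is a minimal sketch of the kind of balanced pair that is unproblematic under INSERT-EXEC; the procedure is my own invention:

CREATE PROCEDURE BouncingTrancount AS
BEGIN TRANSACTION      -- under INSERT-EXEC, @@trancount goes from 1 to 2...
SELECT 1 AS OneRow
COMMIT TRANSACTION     -- ...and back to 1 again, which is permitted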
Earlier in the article, I said that statements are always atomic. That is, an INSERT statement that attempts to insert 100 rows will insert 0 or 100 rows, never 12 or 89. OK, so I lied. With INSERT-EXEC this can happen with a stored procedure that produces multiple result sets. If there is an error with one result set, the result from the other result sets will still be inserted, as long as the error is not batch-aborting. Look at this example.
CREATE TABLE SyBorg (Joe int NOT NULL)
go
CREATE PROCEDURE Plook AS
   SELECT 1
   SELECT 2
   SELECT 3/0
   SELECT 4
go
SET XACT_ABORT OFF
INSERT SyBorg (Joe)
   EXEC Plook
go
SELECT Joe, @@trancount FROM SyBorg
go
DROP TABLE SyBorg
DROP PROCEDURE Plook
go
The output is:
Msg 8134, Level 16, State 1, Procedure Plook, Line 4
Divide by zero error encountered.
(3 row(s) affected)
Joe
----------- -----------
1 0
2 0
4 0
(3 row(s) affected)
Maybe this behaviour could be construed as a bug, but I have never reported it, and given that it has worked this way for many versions, I doubt that Microsoft would change it. They have, however, made one change in this area. If you make the SELECT statements into one with UNION ALL like this:
SELECT 1    UNION ALL
SELECT 2    UNION ALL
SELECT 3/0  UNION ALL
SELECT 4
On SQL 2005 this results in two rows in SyBorg (Joe = 1 and 2), but on SQL 2008 and later versions SyBorg is empty after the operation.
What if there is an error in the INSERT statement itself, for instance a NOT NULL violation? This does not raise any error in the procedure; rather, the procedure executes to the end. However, all data inserted into the table is rolled back, as seen in this example:
CREATE TABLE Dong (Work int NOT NULL)
go
CREATE PROCEDURE ForYuda AS
BEGIN TRY
   SELECT 1
   SELECT 2
   SELECT NULL
   SELECT 4
   PRINT 'Bartender, bring me a colada and milk'
END TRY
BEGIN CATCH
   PRINT 'On second thought make that a water, H-T-O'
   EXEC error_handler_sp
END CATCH
go
SET XACT_ABORT ON
INSERT Dong (Work)
   EXEC ForYuda
go
SELECT Work FROM Dong
go
DROP TABLE Dong
DROP PROCEDURE ForYuda
The output is:
Bartender, bring me a colada and milk
Msg 515, Level 16, State 2, Line 2
Cannot insert the value NULL into column 'Work', table 'tempdb.dbo.Dong'; column does not allow nulls. INSERT fails.
Work
-----------
The PRINT at the end of the TRY block is executed, which shows that the procedure runs to the end, despite XACT_ABORT being ON. Nevertheless, the table is empty after the operation.
This section covers some behaviour with bulk load that you may not expect.
A very likely error situation with BULK INSERT and OPENROWSET(BULK) is that the data file is missing. This sounds like a true run-time error that should be statement-terminating or batch-aborting, and particularly, it should be easy to CATCH. But, no. Watch this:
CREATE TABLE Uncle (Meat varchar(200))
go
CREATE PROCEDURE KingKong AS
BEGIN TRY
   PRINT 'Trying to bulk'
   BULK INSERT Uncle FROM 'C:\temp\nosuchfile.txt'
   PRINT 'Bulk completed'
END TRY
BEGIN CATCH
   PRINT 'Caught: ' + error_message()
END CATCH
go
EXEC KingKong
The output is:
Trying to bulk
Msg 4860, Level 16, State 1, Procedure KingKong, Line 4
Cannot bulk load. The file "C:\temp\nosuchfile.txt" does not exist.
As you see, the CATCH handler is bypassed. As it stands, a missing-file error for BULK INSERT and OPENROWSET(BULK) is a scope-aborting error, which only can be caught in an outer scope. But didn't I say that scope-abortion can only happen for compilation errors? Yes, and that is exactly what is happening: SQL Server attempts to read the data file at compile time. To do what? I'm afraid that I don't have a single good idea. It just does not make any sense. (It's a different matter for format files, at least for OPENROWSET(BULK), because the format file determines the column names used in the query.)
If you find this difficult to believe, run a Profiler trace where you include the events SP:StmtStarting, SP:StmtCompleted, Error:Exception and Error:User Error Message, and run EXEC KingKong again. You will see this:
The first thing that happens when the procedure starts running is an Exception event. The procedure then runs down to the BULK INSERT statement, which generates a new Exception, this time accompanied by a User Error Message event. The first, unaccompanied, Exception event is an error that occurs when the procedure is compiled, but which is suppressed in the spirit of deferred name resolution.
Note: The trace above is from SQL 2012. The text for the Exception events is different in SQL 2005 and SQL 2008. On these versions you don't see the message text, but instead you see the error number, severity and state.
To make this even crazier, I was able to trap the error of the missing file in the CATCH handler, but only in a very specific situation. To wit, the file exists when the procedure is compiled but is deleted before the BULK INSERT statement is executed. Here is a wretched example:
ALTER PROCEDURE KingKong AS
BEGIN TRY
   PRINT 'Trying to bulk'
   EXEC xp_cmdshell 'DEL C:\temp\nosuchfile.txt', 'no_output'
   BULK INSERT Uncle FROM 'C:\temp\nosuchfile.txt'
   PRINT 'Bulk completed'
END TRY
BEGIN CATCH
   PRINT 'Caught: ' + error_message()
END CATCH
go
EXEC xp_cmdshell 'ECHO "Dog Breath" > C:\temp\nosuchfile.txt', 'no_output'
EXEC KingKong
The output this time is:
Trying to bulk
(0 row(s) affected)
Caught: Cannot bulk load. The file "C:\temp\nosuchfile.txt" does not exist
This time it was not a compilation error, which is why it was caught.
You may think that it would be the same if the file exists the first time the procedure is executed, because the next time SQL Server will use a cached plan for the procedure. But no, if we run EXEC KingKong once more, this is our reward:
Trying to bulk
Msg 4860, Level 16, State 1, Procedure KingKong, Line 5
Cannot bulk load. The file "C:\temp\nosuchfile.txt" does not exist
This is because SQL Server does not cache plans for stored procedures that use BULK INSERT or OPENROWSET(BULK); such procedures are compiled on each execution. (This is briefly mentioned in this white paper.)
It is not uncommon to wrap BULK INSERT and OPENROWSET(BULK) in dynamic SQL, because the filename is not known until run time. This also has the advantage that you escape this weirdness. The batch of dynamic SQL is a scope of its own, and if you put TRY-CATCH around the call to EXEC() or sp_executesql that invokes the dynamic SQL you will be able to catch the error, since the CATCH is now in an outer scope vis-à-vis the bulk-load operation.
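Here is a minimal sketch of the idea; the procedure name is my own, and the table Uncle is the one from the example above:

CREATE PROCEDURE KingKongDynamic @datafile nvarchar(260) AS
BEGIN TRY
   DECLARE @sql nvarchar(MAX)
   -- The file name is embedded as a string literal in the dynamic batch.
   SELECT @sql = 'BULK INSERT Uncle FROM ''' +
                 replace(@datafile, '''', '''''') + ''''
   EXEC sp_executesql @sql
   PRINT 'Bulk completed'
END TRY
BEGIN CATCH
   PRINT 'Caught: ' + error_message()
END CATCH

Since the bulk load now runs in an inner scope, a missing file lands in the CATCH handler.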
Just as I was wrapping up this series and doing the last rounds of proof-reading, Deepak Goyal asked a question on the MSDN forums that made me aware of behaviour with BULK INSERT that I had not observed. While surprising, it is logical, and there is a simple workaround. Consider this script:
CREATE TABLE WhippingPost (Ya int NOT NULL, Hozna int NOT NULL) EXEC xp_cmdshell 'ECHO 2;19 > C:\temp\InFrance.txt', 'no_output' EXEC xp_cmdshell 'ECHO 12;k >> C:\temp\InFrance.txt', 'no_output' go PRINT '-------- Without TRY-CATCH -----------' BULK INSERT WhippingPost FROM 'C:\temp\InFrance.txt' WITH (--MAXERRORS = 0, FIELDTERMINATOR = ';') PRINT '@@error = ' + convert(varchar(30), @@error) go PRINT '-------- With TRY-CATCH -------------' BEGIN TRY BULK INSERT WhippingPost FROM 'C:\temp\InFrance.txt' WITH (--MAXERRORS = 0, FIELDTERMINATOR = ';') PRINT 'This should not print. @@error = ' + convert(varchar(30), @@error) END TRY BEGIN CATCH EXEC error_handler_sp END CATCH PRINT 'This should print' go SELECT Ya, Hozna FROM WhippingPost go DROP TABLE WhippingPost
The script creates a table, and then it writes a data file with one good row and one bad row that causes an error when the file is imported with BULK INSERT. We make two attempts to load the file with BULK INSERT, one without TRY-CATCH and one with. After the attempts, we check whether there is any data in the table. Here is the output:
-------- Without TRY-CATCH -----------
Msg 4864, Level 16, State 1, Line 2
Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 1, column 2 (Hozna).
@@error = 4864
-------- With TRY-CATCH -------------
(0 row(s) affected)
This should not print. @@error = 0
This should print
Ya Hozna
----------- -----------
2 19
2 19
In the first attempt, without TRY-CATCH, there is an error message and @@error is set to an error number. But when we wrap the statement in TRY-CATCH, the error is not caught, there is no error message and @@error is set to 0. We can also note that in both cases, the good row was inserted in the table. If you play with SET XACT_ABORT, you will find that the outcome is the same no matter if it is ON or OFF.
The explanation lies with the option MAXERRORS, which defines how many errors during the import we are prepared to accept. The default for this option is 10. (Of all values, 10? That has been the default with BCP since the dawn of time, and the default was carried over to BULK INSERT.) That is, if there are at most 10 errors, the file is considered to have been loaded successfully. @@error is still set when there is no TRY-CATCH, but it is understandable that they decided not to fire the CATCH handler. You could argue that it is bad that they drop the error message on the floor, but BULK INSERT has an ERRORFILE option that permits you to specify a file where records that did not load are written.
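For instance, something along these lines; the error-file path is a placeholder of mine, and beware that the file must not already exist when the statement runs:

BULK INSERT WhippingPost FROM 'C:\temp\InFrance.txt'
WITH (FIELDTERMINATOR = ';',
      ERRORFILE = 'C:\temp\InFrance_rejects.txt')  -- rejected records land here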
If you don't want to accept any load errors at all, you should set MAXERRORS = 0, and the script includes two occurrences of this setting that are commented out. If you uncomment these, the behaviour is far less puzzling.
-------- Without TRY-CATCH -----------
Msg 4864, Level 16, State 1, Line 2
Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 1, column 2 (Hozna).
Msg 7399, Level 16, State 1, Line 2
The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error.
Msg 7330, Level 16, State 2, Line 2
Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".
-------- With TRY-CATCH -------------
(0 row(s) affected)
Msg 50000, Level 16, State 2, Procedure error_handler_sp, Line 20
*** <dynamic SQL>, Line 3. Errno 7330: Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".
This should print
Ya Hozna
----------- -----------
(0 row(s) affected)
The attempt without TRY-CATCH produces no less than three error messages. We can note that the message about the value of @@error is not there, so we conclude that the error now is batch-aborting.
Furthermore, in the attempt with TRY-CATCH, the error was caught in the CATCH handler. Since the error_xxx functions can only return one error message, we only see one of them, and unfortunately it is a generic message with no information about what is wrong. But as I mentioned, you can use the ERRORFILE option to find the rows that did not load.
Finally, we can note that the table WhippingPost is empty this time. That is, if the good row was loaded at some point, it was rolled back.
Note: if BULK INSERT finds an incomplete record at the end of the file, this is considered an error that fails the load, no matter the setting of MAXERRORS. You can see this by commenting out MAXERRORS = 0 again and adding this line after the second call to xp_cmdshell:
EXEC xp_cmdshell 'ECHO 1212 >> C:\temp\InFrance.txt', 'no_output'
Linked servers is one of those deceptive features in SQL Server. It seems so simple: you just add one more component to the table or procedure name, and the immediate experience is that this is no different from accessing local objects. But already inside SQL Server you enter different code paths from when you access local objects, and then you cross barriers to access another SQL Server instance, or a completely different product like Oracle. Using linked servers raises the bar of complexity in more than one way, and this applies even more if you start a transaction that engages both data sources.
When I started to investigate what peculiarities there might be with error and transaction handling when using linked servers, I found that this is a very rough road. I made more findings than I reasonably could fit into this article. Rather, I have put this material in a separate article, labelled Appendix 1. The appendix is written so that you can start reading it at this point if you have a special interest in linked servers. I would suggest, though, that for most readers it is better to complete Part Two and also Part Three before you take on the appendixes.
A word of warning: if the material here in Part Two has made your head spin, Appendix 1 will take the head-spinning to the next level – the error-handling behaviour with linked servers certainly is confusing.
If you use the CLR and execute SQL code over the context connection, this is another area where error and transaction handling is not so smooth. The situation is maybe not quite as dire as with linked servers, but it is certainly bewildering enough.
Just like with linked servers, I have put the material on the CLR in a separate appendix, Appendix 2, which is divided into two halves. The first half covers a generic scenario, and is written so that you could start reading this part now if you have a special interest in CLR code. The second half of the appendix is an extension to Part Three and covers CLR-specific topics of some utilities that I introduce in this part.
In this chapter I cover commands and other means for error and transaction handling that you would use only occasionally. Or, when it comes to some of them, not at all.
As you may recall, XACT_ABORT is OFF by default. You can configure to have it ON by default in your environment, and in this section I discuss how you can do that – and whether you should.
SSMS offers the possibility to have some SET commands emitted when you connect. For a long time, XACT_ABORT was not among them, but this changed with the 16.2 release of SSMS (the July 2016 release), as illustrated in this screenshot:
The option is not checked by default, and as long as you don't check it, no SET command in either direction for XACT_ABORT is issued. (To be precise: when the checkbox first appeared in July 2016, it was checked by default. But since this was a behavioural change with regard to previous versions, Microsoft changed the default in the subsequent release 16.3.)
If you are stuck with an older version of SSMS, you could put SET XACT_ABORT ON in a template. One way to do this is to get SSMS Tools Pack, authored by SQL Server MVP Mladen Prajdić. With SSMS Tools Pack, you can set up a template for a new query window.
You can also use the configuration option user options to have XACT_ABORT ON by default at the server level. This option is a bit mask, and to have XACT_ABORT ON, you should set user options to a value where the bit corresponding to 16384 is set. For instance:
EXEC sp_configure 'user options', 16384
RECONFIGURE
You can also set this option from the UI in SSMS:
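However the default was established, you can always check the effective setting in your current session, since XACT_ABORT corresponds to the same bit, 16384, in @@OPTIONS:

SELECT CASE WHEN @@OPTIONS & 16384 > 0
            THEN 'XACT_ABORT is ON'
            ELSE 'XACT_ABORT is OFF'
       END AS current_setting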
So to the question: should you utilise these options? When it comes to the server option, I recommend against it. Not because the setting is bad as such – au contraire, I would certainly have preferred it if XACT_ABORT had been ON by default. The problem lies in manageability. You set it on one server. Later a database is moved to another instance, for instance when you upgrade to a newer version of SQL Server, but on the new server the option is not set, because you and everyone else have forgotten about it. This could lead to behavioural changes in the application that are difficult to understand.
Setting it to be ON by default in your local SSMS is a different matter, as this does not affect the application behaviour. There are two advantages with this: if an error occurs while you are in a transaction, the batch is aborted and the transaction is rolled back, so you are not left with half-completed work without knowing it; and if you cancel a query in the middle of a transaction, the transaction is rolled back, so you do not leave an open transaction behind. Or in short, having XACT_ABORT ON by default reduces the risk that you cause a mess when you run things ad hoc from the query window. This is not the least important in production environments.
However, there is a risk when you are running and testing stored procedures that you work with. You may find that they behave in error situations as you expect them to. But there may be no SET XACT_ABORT ON inside the procedure, so when an error condition occurs in the application (which may not emit SET XACT_ABORT ON when connecting), the behaviour is different and not the desired one. That is, having XACT_ABORT ON by default in SSMS may mask bugs in application code.
For this reason, I did not want to put a recommendation on using this option in Part One, as that part is directed also towards inexperienced developers. If you have come this far, I assume that you have enough experience to balance these risks against each other. Overall, I would say that if you frequently run things against production environments from SSMS, I definitely recommend that you run with this option set. On the other hand, if you mainly work in development environments to develop and test code, you may prefer to have it unset.
Normally you only say BEGIN, COMMIT or ROLLBACK TRANSACTION and that's that. But the commands have some variations that you may encounter once in a blue moon.
All three commands accept a parameter which can be an identifier or a character variable, at most 32 characters long. This parameter is a name for the transaction, so that you can say things like:
BEGIN TRANSACTION MyTrans
-- Do some stuff.
COMMIT TRANSACTION MyTrams
This may look useful, but it is not really so. If you look closer at the example, you will see that the identifiers for BEGIN and COMMIT TRANSACTION are different. However, this does not result in an error; it is perfectly legal, and the batch will execute and commit the transaction. Or more exactly: SQL Server flatly ignores the parameter you specify with COMMIT TRANSACTION. This script successfully inserts two rows into the table Zoot:
CREATE TABLE Zoot(Allures int NOT NULL)
go
BEGIN TRANSACTION
INSERT Zoot(Allures) VALUES (89)
COMMIT TRANSACTION BlackNapkins
go
SELECT @@trancount, Allures FROM Zoot
go
BEGIN TRANSACTION MsPinky
INSERT Zoot(Allures) VALUES (189)
COMMIT TRANSACTION
go
SELECT @@trancount, Allures FROM Zoot
It's a different matter with ROLLBACK TRANSACTION. Just like COMMIT, you can leave out any name you used with BEGIN TRANSACTION. However, if you specify a name with ROLLBACK TRANSACTION, this name must match the name that you specified for the outermost BEGIN TRANSACTION. This example illustrates:
SET XACT_ABORT OFF
BEGIN TRANSACTION OuterTrans
BEGIN TRANSACTION InnerTrans
PRINT 'ROLLBACK TRANSACTION InnerTrans'
ROLLBACK TRANSACTION InnerTrans
PRINT 'ROLLBACK TRANSACTION OuterTrans'
ROLLBACK TRANSACTION OuterTrans
Output:
ROLLBACK TRANSACTION InnerTrans
Msg 6401, Level 16, State 1, Line 5
Cannot roll back InnerTrans. No transaction or savepoint of that name was found.
ROLLBACK TRANSACTION OuterTrans
Not that I want to encourage you to use transaction names, but if you do, you should be aware that the matching of transaction names is strictly binary, so the example below results in an error, no matter your collation:
BEGIN TRANSACTION Discoboy
ROLLBACK TRANSACTION DiscoBoy
It is the syntax with a named transaction that makes the THROW command so deceptive:
BEGIN CATCH
   IF @@trancount > 0 ROLLBACK TRANSACTION
   THROW
END CATCH
Because the semicolon is missing, this will result in the message Cannot roll back THROW. No transaction or savepoint of that name was found – if there is a transaction to roll back. If there is no transaction, absolutely nothing will happen. Therefore, always write ;THROW. Always.
There are some more syntax variations. If you feel that your fingers are wearing out when you type TRANSACTION again and again, you might be relieved to know that SQL Server accepts TRAN as an abbreviation. For COMMIT and ROLLBACK, you can go even further and leave out TRANSACTION entirely and say only COMMIT or ROLLBACK. Yet another possibility is to use COMMIT WORK or ROLLBACK WORK, a syntax that appears to exist for compliance with SQL-92. My own preference is to spell out TRANSACTION in full, as I do in this article. You could argue, though, that using ROLLBACK alone (or ROLLBACK WORK) is better, as this protects you against goofs with ;THROW. To wit, transaction names are only recognised with TRAN and TRANSACTION, so ROLLBACK THROW is not legal syntax.
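To summarise the variations in one place:

BEGIN TRAN              -- same as BEGIN TRANSACTION
COMMIT                  -- same as COMMIT TRANSACTION
BEGIN TRANSACTION
ROLLBACK WORK           -- SQL-92 style; accepts no transaction name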
Yet another variation is BEGIN DISTRIBUTED TRANSACTION, which explicitly starts a distributed transaction, that is, a transaction that spans multiple data sources. You rarely need this command, as a regular transaction is promoted to a distributed transaction if you access a remote data source. For more details, see the appendix on linked servers.
With BEGIN TRANSACTION you can specify the option WITH MARK, and if you use this option, you must specify a name for the transaction. This writes the transaction name as a mark to the transaction log, so that you later can restore the database to this point in time. There certainly are situations where this is useful, but it has little to do with regular error and transaction handling, which is why I don't cover this option further here, but refer you to Books Online for details.
SQL 2014 added a new clause that you can use with COMMIT TRANSACTION: WITH (DELAYED_DURABILITY = ON | OFF). This clause permits COMMIT to complete faster, at the risk of data loss if the server crashes. You can also use this clause with COMMIT WORK or COMMIT alone. The purpose of this clause has little to do with error and transaction handling, so I refer you to the topic for COMMIT TRANSACTION in Books Online for further details.
While all T‑SQL programmers are very familiar with BEGIN, COMMIT and ROLLBACK TRANSACTION, many may be unaware of the fourth member of the family: SAVE TRANSACTION.
SAVE TRANSACTION defines a savepoint in a user-defined transaction, and you can use ROLLBACK TRANSACTION to roll back to that savepoint. Here is an example:
SET NOCOUNT ON
CREATE TABLE Black(Page int NOT NULL)
INSERT Black(Page) VALUES(1)
BEGIN TRANSACTION DrumSolo
INSERT Black(Page) VALUES(2)
SAVE TRANSACTION TheHardVersion
INSERT Black(Page) VALUES (7)
SAVE TRANSACTION EasyTeenageNewYorkVersion
INSERT Black(Page) VALUES (17)
SELECT Page FROM Black
ROLLBACK TRANSACTION TheHardVersion
COMMIT TRANSACTION
SELECT Page FROM Black
go
DROP TABLE Black
The output from this batch is:
Page
-----------
1
2
7
17
Page
-----------
1
2
That is, first all four values 1, 2, 7 and 17 were inserted into the table, but after the rollback to the first savepoint, only 1 and 2 remain in the table. If you change the ROLLBACK command to read ROLLBACK TRANSACTION EasyTeenageNewYorkVersion, the output from the second SELECT will also include 7.
With SAVE TRANSACTION, it is compulsory to use a transaction name, either by specifying an identifier or a variable. To roll back to a savepoint, you need to specify that name as an argument to ROLLBACK TRANSACTION. A savepoint name can be used multiple times in the same transaction, in which case ROLLBACK will roll back to the latest occurrence of that name.
You can make multiple rollbacks to savepoints. In the example above, you could first roll back to EasyTeenageNewYorkVersion and then later roll back to TheHardVersion. However, once you have rolled back to the earlier savepoint, you cannot roll back to the later savepoint EasyTeenageNewYorkVersion.
SAVE TRANSACTION does not affect @@trancount.
When it comes to locks taken out between the savepoint and the rollback, SQL Server will release locks when possible. That is, if a bunch of row locks were escalated to table level after SAVE TRANSACTION, it will stay that way. Likewise, if there already was a shared lock on a resource that was converted to an exclusive lock after the savepoint, the lock will not be downgraded to shared.
Note: In older versions of Books Online, the topic for SAVE TRANSACTION incorrectly said that locks are not released at all. It was corrected in Books Online for SQL 2012 after I reported it.
At first glance, SAVE TRANSACTION may seem like a very useful command. In a stored procedure we could check whether a transaction is active, and if it is, we could create a savepoint instead of issuing a nested BEGIN TRANSACTION. In case of an error, we could roll back to that savepoint and not pull the rug from under the caller's transaction. However, in practice SAVE TRANSACTION is close to useless. To wit, there are some very important restrictions, the most significant being that savepoints are not permitted in distributed transactions.
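For illustration only, here is a minimal sketch of the pattern just described; the procedure name and body are placeholders of mine, and as just noted, the pattern breaks down as soon as a distributed transaction is involved:

CREATE PROCEDURE InnerWork AS
DECLARE @had_tran bit = CASE WHEN @@trancount > 0 THEN 1 ELSE 0 END
BEGIN TRY
   IF @had_tran = 1
      SAVE TRANSACTION InnerWork   -- savepoint; the caller keeps its transaction
   ELSE
      BEGIN TRANSACTION
   -- ... the actual work would go here ...
   IF @had_tran = 0
      COMMIT TRANSACTION
END TRY
BEGIN CATCH
   IF @had_tran = 1 AND xact_state() = 1
      ROLLBACK TRANSACTION InnerWork   -- undo our own work only
   ELSE IF @@trancount > 0
      ROLLBACK TRANSACTION             -- doomed or our own: roll back everything
   ;THROW
END CATCH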
In my career with SQL Server, it took over ten years before I ever used SAVE TRANSACTION – and it was for a very odd reason: I had a catch-22 situation I needed to resolve. I defined a savepoint, updated a value in a table, called a stored procedure that performed some checks, one of which I would have failed had I not done that update, and then I rolled back to my savepoint. Some time later, I decided to use SAVE TRANSACTION once more, this time exactly for its intended purpose: to maintain the caller's transaction. And I was burnt: this stored procedure could be called from a distributed transaction in another corner of our system, and I had to back out of it.
So while SAVE TRANSACTION may seem useful, it is an exotic command of very limited practical use.
If you issue a command without an active transaction in an ANSI-compliant database system, this starts an implicit transaction which you must explicitly commit or roll back. As you know, the default in SQL Server is auto-commit. That is, when there is no user-defined transaction, every statement is a transaction of its own.
If you want SQL Server to comply with ANSI, you can issue the command SET IMPLICIT_TRANSACTIONS ON. Here is an example:
SET NOCOUNT ON
SET IMPLICIT_TRANSACTIONS OFF
CREATE TABLE Flakes(Flakes varchar(40) NOT NULL)
INSERT Flakes (Flakes) VALUES ('You ask ''em where''s my motor')
PRINT 'After first INSERT, @@trancount is ' + ltrim(str(@@trancount)) + '.'
SET IMPLICIT_TRANSACTIONS ON
SELECT Flakes FROM Flakes
PRINT 'After SELECT, @@trancount is ' + ltrim(str(@@trancount)) + '.'
INSERT Flakes (Flakes) VALUES ('It was eaten by snakes')
PRINT 'After second INSERT, @@trancount is ' + ltrim(str(@@trancount)) + '.'
SELECT Flakes FROM Flakes
ROLLBACK TRANSACTION
SET IMPLICIT_TRANSACTIONS OFF
SELECT Flakes FROM Flakes
PRINT 'After final SELECT, @@trancount is ' + ltrim(str(@@trancount)) + '.'
go
DROP TABLE Flakes
This produces the output:
After first INSERT, @@trancount is 0.
Flakes
-----------
You ask 'em where's my motor
After SELECT, @@trancount is 1.
After second INSERT, @@trancount is 1.
Flakes
-----------
You ask 'em where's my motor
It was eaten by snakes
Flakes
-----------
You ask 'em where's my motor
After final SELECT, @@trancount is 0.
For the first INSERT statement, auto-commit is still in force. I turn on IMPLICIT_TRANSACTIONS, and already the SELECT statement starts an implicit transaction, as you can tell from @@trancount. It is followed by a second INSERT, which does not increase @@trancount further. The second SELECT shows that there are now two rows in the table Flakes, but the ROLLBACK undoes the second insert, since it was performed inside the transaction. When the final SELECT runs, IMPLICIT_TRANSACTIONS has been turned off, and auto-commit is back in the game, so this SELECT does not affect @@trancount.
Exactly what commands start an implicit transaction? Books Online gives a list, but this list is out of date and not accurate. For instance, it lists ALTER TABLE explicitly but no other ALTER command, but at least ALTER PROCEDURE starts an implicit transaction, as do probably a few more ALTER commands. Likewise, it lists INSERT, UPDATE and DELETE, but not MERGE, which also starts an implicit transaction. CREATE is listed as a command on its own, but while most CREATE commands are likely to start a user-defined transaction in implicit-transactions mode, CREATE DATABASE does not. (The latter is perfectly consistent with the fact that CREATE DATABASE is not permitted in user-defined transactions.) In the example, we saw that SELECT starts an implicit transaction, but that only applies to SELECT statements that access a table (including a table variable). Something like SELECT @a = 1 does not start a transaction.
Way back in SQL Server 6.0, there were a number of points where SQL Server was in disagreement with the ANSI standard. In SQL 6.5, Microsoft introduced SET commands to permit ANSI-compliant behaviour, and in the client APIs released since SQL 7, these SET options are on by default – with two exceptions, of which one is IMPLICIT_TRANSACTIONS. (The other is CURSOR_CLOSE_ON_COMMIT, which I am not covering here.) Microsoft certainly did the right thing by sticking to auto-commit. A change to implicit transactions would have caused enormous problems with all the one-statement stored procedures out there. But not only that: ANSI does not always have it right, and there is little to defend this mode as such. The first RDBMS that I worked with was DEC/Rdb, and Rdb had implicit transactions. I recall that our coding standards required all code to include an explicit START TRANSACTION (as the command was called in Rdb); we were not permitted to rely on implicit transactions.
Most client APIs offer explicit methods to start and commit transactions. Not all APIs use BEGIN and COMMIT TRANSACTION for this purpose; rather, they issue SET IMPLICIT_TRANSACTIONS ON. Why, I don't know, but since they are careful to emit IF @@trancount > 0 COMMIT TRANSACTION when you use their Commit method, there is no real damage.
There are four SET options that control whether certain conditions are errors, and the exact behaviour with these errors: ANSI_WARNINGS, ARITHABORT, ARITHIGNORE and NUMERIC_ROUNDABORT. Of these settings, ANSI_WARNINGS is ON by default in most contexts. ARITHABORT is ON by default in SQL Server Management Studio, but not in a regular client application, nor from SQLCMD or OSQL. ARITHIGNORE and NUMERIC_ROUNDABORT are never ON by default.
Let me say this directly: you should never fiddle with these options. ANSI_WARNINGS should always be ON, and some features in SQL Server will not work if this setting is OFF. Likewise, the same set of features will not work if NUMERIC_ROUNDABORT is ON. And as long as ANSI_WARNINGS is ON, the two options ARITHABORT and ARITHIGNORE have no effect. If you still want to know the details on these options, read on. Else just skip to the next section. :-)
Let me start with NUMERIC_ROUNDABORT. With this setting in effect, SQL Server considers it an error if you copy a decimal value with a higher scale into a variable or a column with a lower scale. Here is an example:
SET NUMERIC_ROUNDABORT ON
DECLARE @d decimal(7,0)
PRINT 'This prints'
SELECT @d = convert(decimal(7, 2), 10)
SELECT @d AS d2
SET NUMERIC_ROUNDABORT OFF
The output is:
This prints
Msg 8115, Level 16, State 7, Line 4
Arithmetic overflow error converting numeric to data type numeric.
d2
---------------------------------------
NULL
That is, this is a run-time error. But there is no real loss of precision here, since the value we try to put into @d is 10.00. Had the value been 10.02, there would indeed have been data loss. You could argue that there is still a loss of precision, since we no longer know that the decimals were .00. But in that case – shouldn't this be a compile-time error? Anyway, the option exists because of legacy; stay away from it.
Let's now turn to ANSI_WARNINGS. If you issue SET ANSI_WARNINGS OFF, this leads to a number of behavioural changes. The informational message about NULL values in aggregates goes away. A string that is too long for its target column is silently truncated, rather than raising error 8152. And the handling of domain errors such as sqrt(-1), as well as of arithmetic errors – division by zero, overflow and failed conversion to (small)datetime, for instance convert(datetime, '20010230') – changes as described next.

The real confusion is with the arithmetic errors. As long as ANSI_WARNINGS is ON, these errors are statement-terminating (unless XACT_ABORT is ON), with the exception of domain errors. If you turn off ANSI_WARNINGS but ARITHABORT is in effect, all arithmetic errors are now batch-aborting. If neither ARITHABORT nor ARITHIGNORE is ON, arithmetic errors are not considered errors at all and the value of the expression is NULL; however, an informational message about the condition is produced. If ARITHIGNORE is ON, you don't even get a message for division by zero, overflow or conversion to (small)datetime – but domain errors still produce a message.
While this is highly bewildering, there is a simple cure, as I said above: never fiddle with ANSI_WARNINGS. The setting ANSI_WARNINGS OFF is intended for legacy systems only, and there is absolutely no reason to use it in new development. Yes, it may be tempting to get rid of that warning, but as I said: treat it as white noise. Filter it in the application if necessary. And yes, you may prefer truncation over getting error 8152 in some situations, but you can handle that with substring(), as shown below. And as long as ANSI_WARNINGS is ON, you don't need to bother about the other two options, which belong in a museum. Well, almost: ARITHABORT is a cache key, and the fact that it is ON by default in SSMS can trip you up when you investigate performance problems, as I discuss in my article Slow in the Application, Fast in SSMS?
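For instance, rather than turning ANSI_WARNINGS off to avoid error 8152, truncate explicitly; the names in this sketch are placeholders:

DECLARE @indata   varchar(100) = replicate('x', 50)
DECLARE @shortcol varchar(20)
SET @shortcol = substring(@indata, 1, 20)   -- keeps the first 20 characters, no error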
Note: if the compatibility mode is 80, ANSI_WARNINGS does not control domain errors, but an expression like sqrt(-1) returns NULL, unless ARITHABORT is ON.
When you create a unique index, including a PRIMARY KEY or UNIQUE constraint, you can add the clause IGNORE_DUP_KEY. Normally, when you attempt to insert a duplicate value into a unique index, this is considered an error. But if the index was created with the IGNORE_DUP_KEY option no error is raised. The duplicate value is discarded and the informational message Duplicate key was ignored is produced. If the statement affects many rows, the legitimate values are inserted. If the input set itself includes duplicates, one of the source rows will be inserted, but which one is undefined. Here is an example:
CREATE TABLE Weasels
   (Ripped  int NOT NULL PRIMARY KEY WITH (IGNORE_DUP_KEY = ON),
    MyFlesh varchar(20) NOT NULL)
BEGIN TRY
   INSERT Weasels(Ripped, MyFlesh)
      SELECT 1, 'My Guitar Wants to'
      UNION
      SELECT 1, 'Kill Your Mama'
   SELECT * FROM Weasels
END TRY
BEGIN CATCH
   PRINT 'Not reached'
END CATCH
go
DROP TABLE Weasels
The output is:
Duplicate key was ignored.
(1 row(s) affected)
Ripped MyFlesh
----------- --------------------
1 My Guitar Wants to
At first glance, this option may seem completely pointless, but it is actually one of the few useful features in this chapter. While using this option with a permanent table that holds persistent data would be a gross error in my opinion, it is a different matter with temp tables, table variables and staging tables where the set of operations may be confined to a single stored procedure. Here, you may encounter situations where IGNORE_DUP_KEY comes in handy. You may have a situation where key values are largely unique, but there are occasional duplicates, and you don't care which row you get. In this case, IGNORE_DUP_KEY not only makes the coding a little simpler, it is also likely to perform better than regular ways to remove duplicates such as DISTINCT or NOT EXISTS. Beware that as the frequency of duplicates increases, the performance benefit of IGNORE_DUP_KEY diminishes, as discussed in this blog post from SQL Server MVP Alex Kuznetsov.
Here is another situation where IGNORE_DUP_KEY is useful. Say that you have an orders table, and in your stored procedure you update a couple of orders. Next you want to do something with the customers on these orders (send mail, update a column, whatever). You can capture the customers with the OUTPUT clause, but you may get duplicates if there is more than one order for the same customer. IGNORE_DUP_KEY to the rescue!
DECLARE @customers TABLE
   (customerid int NOT NULL PRIMARY KEY WITH (IGNORE_DUP_KEY = ON))
UPDATE orders
SET    ...
OUTPUT inserted.customerid INTO @customers(customerid)
WHERE  ...
Finally, a small caveat: if you run the script above in the old Query Analyzer that ships with SQL 2000, but which you can still use to access SQL 2005 or later, the output is:
Server: Msg 3604, Level 16, State 1, Line 6
Duplicate key was ignored.
Ripped MyFlesh
----------- --------------------
1 My Guitar Wants to
It seems that the SQL Server ODBC driver (the API that Query Analyzer uses) on its own initiative labels this message as an error. Thus, if you suspect that your application is using this driver, you should not use IGNORE_DUP_KEY. (This is the ODBC driver that ships with Windows, but normally you should use the new ODBC driver SQL Server Native Client to make use of all features in recent versions in SQL Server, and SQL Server Native Client does not label message 3604 as an error.)
SQL 2014 introduced In-Memory OLTP, also known under its code name Hekaton, which permits you to define tables that are entirely memory-resident, so-called memory-optimised tables. In this chapter I will look closer at the peculiarities, from an error-handling point of view, of working with In-Memory OLTP, and particularly natively compiled stored procedures, where Microsoft has taken a completely new approach to error and transaction handling that is very interesting. You see, for once they have done something right!
If you are not at all familiar with In-Memory OLTP, this is not really the article where you will learn it. I will only cover as much as needed to be able to discuss error and transaction handling, and I will be silent on many things that are outside the scope of this article.
Note: to run the examples in this chapter, you need an instance of SQL Server 2014 or later. Note that in SQL 2014 and in the RTM version of SQL 2016, In-Memory OLTP was only available in Developer, Enterprise or Evaluation Editions. Starting with SQL 2016 SP1, In-Memory OLTP is available in all editions of SQL Server (possibly with the exception of LocalDB.) It is also available in most editions of Windows Azure SQL Database.
To create an in-memory table, the database must include a filegroup designated to contain memory-optimised data. Such a filegroup is a directory, just like a filegroup for filestream data. Here is a sample CREATE DATABASE statement (you will need to replace the paths with suitable paths for your system):
CREATE DATABASE hekaton ON
   (NAME = 'hekaton', FILENAME = 'F:\MSSQL\CATORCE\hekaton.mdf'),
   FILEGROUP hekaton_data CONTAINS MEMORY_OPTIMIZED_DATA
   (NAME = 'hekatondir', FILENAME = 'F:\MSSQL\CATORCE\hekatondir')
LOG ON
   (NAME = 'hekaton_log', FILENAME = 'F:\MSSQL\CATORCE\hekaton.ldf')
It is perfectly possible to add a filegroup for memory-optimised data to an existing database. (But beware that once you have added it, you may find that it is not possible to remove it.)
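For illustration, this is what adding such a filegroup could look like; the database, filegroup and file names as well as the path are placeholders of my own:

ALTER DATABASE existingdb ADD FILEGROUP imoltp_data
   CONTAINS MEMORY_OPTIMIZED_DATA
ALTER DATABASE existingdb
   ADD FILE (NAME = 'imoltp_dir', FILENAME = 'F:\MSSQL\CATORCE\imoltp_dir')
   TO FILEGROUP imoltp_data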
To create an in-memory table, you need some extra syntax. Here is the example table that we will work with in this chapter:
CREATE TABLE Tinseltown
      (Rebellion int      NOT NULL,
       Band      datetime NOT NULL,
       CONSTRAINT pk_DowntownHollywood PRIMARY KEY NONCLUSTERED (Rebellion)
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
go
The key is the clause MEMORY_OPTIMIZED = ON. You also need to specify the durability. The example table is persisted, but you can also say DURABILITY = SCHEMA_ONLY for tables where you don't need the data to be persisted over a server restart. Not all features that are supported with traditional tables are available with memory-optimised tables. For instance, while SQL 2016 added support for more data types, you can still not use the data types xml, datetimeoffset or CLR data types. You should also beware that a primary key is mandatory in a memory-optimised table, and that it is always non-clustered. (The only clustered index you can put on a memory-optimised table is a columnstore index.)
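For illustration, here is a non-durable sibling of the table above; the name is my own invention. Rows in this table do not survive a restart of the server, which can be acceptable for staging data:

CREATE TABLE TinselStaging
      (Rebellion int      NOT NULL,
       Band      datetime NOT NULL,
       CONSTRAINT pk_TinselStaging PRIMARY KEY NONCLUSTERED (Rebellion)
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY)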
As long as you work with in-memory tables from regular Transact-SQL, the situation for error handling is largely the same as with regular tables. This script will insert the first and the last rows into Tinseltown and raise an error for the second row:
SET XACT_ABORT OFF
BEGIN TRANSACTION
INSERT Tinseltown (Rebellion, Band) VALUES (1, '20090101')
INSERT Tinseltown (Rebellion, Band) VALUES (1, '20140419')
INSERT Tinseltown (Rebellion, Band) VALUES (3, '20121220')
COMMIT TRANSACTION
go
SELECT @@trancount AS trancount, * FROM Tinseltown WITH (SNAPSHOT)
Note: When you access in-memory tables from regular T‑SQL, you often have to force the isolation level to snapshot, as in-memory tables do not support the default isolation level READ COMMITTED.
The output is:
Msg 2627, Level 14, State 1, Line 4
Violation of PRIMARY KEY constraint 'pk_DowntownHollywood'. Cannot insert duplicate key in object 'Tinseltown'. The duplicate key value is (1).
The statement has been terminated.
(1 row(s) affected)
trancount Rebellion Band
----------- ----------- -----------------------
0 1 2009-01-01 00:00:00.000
0 3 2012-12-20 00:00:00.000
The transaction inserts the first and third rows, even though there was an error during the transaction. That is, it works the same way as with conventional tables.
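A side note of mine before we move on: if you find it tedious to add the SNAPSHOT hint everywhere, there is a database option that elevates READ COMMITTED to SNAPSHOT automatically for memory-optimised tables. The examples in this chapter do not rely on it, so take it only as a pointer:

ALTER DATABASE hekaton SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT = ON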
In-Memory OLTP introduces natively compiled stored procedures. Traditionally, T‑SQL is an interpreted language, but a natively compiled stored procedure is compiled to a DLL, which significantly improves performance for certain types of operations. From a natively compiled stored procedure, you can only access memory-optimised tables. Overall the available feature set is quite limited, although they seem to add support for more SQL constructs with every release.
Error and transaction handling in Hekaton procedures is radically different from traditional T‑SQL. Here is an example:
CREATE PROCEDURE EasyMeat WITH NATIVE_COMPILATION, SCHEMABINDING AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = 'us_english')
   INSERT dbo.Tinseltown (Rebellion, Band) VALUES (11, '20070614')
   INSERT dbo.Tinseltown (Rebellion, Band) VALUES (11, '20060829')
   INSERT dbo.Tinseltown (Rebellion, Band) VALUES (13, '20141120')
END
There are a lot of things never seen before here. There is a WITH clause in the procedure header with two mandatory components. NATIVE_COMPILATION is the key to specify that this is the new type of procedure. SCHEMABINDING specifies that all tables referred to in the stored procedure must exist at compile time, and that the tables cannot be dropped as long as the procedure exists. This has a direct implication for error handling: in a natively compiled stored procedure there is no deferred name resolution, and therefore you don't have to consider scope-aborting errors.
Note: if you are on SQL 2014, the WITH line needs to read:
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER AS
The clause EXECUTE AS OWNER was mandatory in SQL 2014, but this was changed with SQL 2016.
It is in the next line where things become really interesting. A natively compiled stored procedure must start with BEGIN ATOMIC. This statement defines a scoped transaction. That is, the extension of the transaction is from BEGIN ATOMIC to END, just like the extension of a TRY block is from BEGIN TRY to END TRY. If the execution leaves the block without raising an error, either by reaching END or through a RETURN statement, the transaction commits. On the other hand, if an error occurs, the transaction is rolled back, unless the error is caught in a TRY block and suppressed in the CATCH block. That is, execution never continues on the next statement in case of an error.
You cannot use the "classic" commands BEGIN, COMMIT, ROLLBACK and SAVE TRANSACTION in an atomic block – they simply do not have any place there. It could make sense to nest atomic blocks, or to have an atomic block that encompasses only part of the stored procedure. This may be possible in other products that implement BEGIN ATOMIC, but in natively compiled stored procedures in SQL Server there is a one-to-one relation between the atomic block and the stored procedure. That is, the block must encompass the entire procedure, and it can't nest. There certainly is a beauty in this. Occasionally, you see inexperienced users in SQL forums who assume that a stored procedure defines an atomic unit of work. They are wrong, but the design of In-Memory OLTP shows that their expectations are not entirely off the mark.
In the BEGIN ATOMIC statement, you need to specify the isolation level for the transaction as well as the language. Optionally, you can also specify settings for DELAYED_DURABILITY, DATEFIRST and DATEFORMAT, see Books Online for details on these options.
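As an illustration, here is what the header could look like with one of the optional settings added; the procedure name and the row it inserts are my own inventions. With DELAYED_DURABILITY = ON, the commit does not wait for the log records to be hardened to disk, so you trade a small risk of data loss for speed:

CREATE PROCEDURE QuickButRisky WITH NATIVE_COMPILATION, SCHEMABINDING AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT,
                   LANGUAGE = 'us_english', DELAYED_DURABILITY = ON)
   INSERT dbo.Tinseltown (Rebellion, Band) VALUES (99, '20141206')
END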
After this lengthy introduction to atomic blocks, let's look at what happens when we run the procedure EasyMeat:
SET XACT_ABORT OFF
EXEC EasyMeat
PRINT 'After the procedure call'
go
SELECT @@trancount AS trancount, * FROM Tinseltown WITH (SNAPSHOT)
The output is:
Msg 2627, Level 14, State 1, Procedure EasyMeat, Line 5
Violation of PRIMARY KEY constraint 'pk_DowntownHollywood'. Cannot insert duplicate key in object 'Tinseltown'. The duplicate key value is (11).
trancount Rebellion Band
----------- ----------- -----------------------
0 1 2009-01-01 00:00:00.000
0 3 2012-12-20 00:00:00.000
The stored procedure is really atomic – none of the rows were inserted, even though XACT_ABORT was OFF. Recall that with a regular procedure, the first row with Rebellion = 11 and the row with Rebellion = 13 would have been inserted.
What if you actually want to ignore an error and continue processing? In that case you need to use TRY-CATCH and not reraise the error in the CATCH block. As an example, say that you want to ignore primary-key violations, but still reraise any other error you may get. You have already seen the mechanisms to do this. To check for a certain error, you would use the error_number() function, and thankfully all the error_xxx() functions are implemented for Hekaton procedures. The best way to reraise errors in natively compiled stored procedures is the ;THROW statement. (RAISERROR is not supported.)
Here is what a procedure along these lines could look like:
CREATE PROCEDURE FineGirl @key int, @date varchar(23)
   WITH NATIVE_COMPILATION, SCHEMABINDING AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = 'us_english')
   BEGIN TRY
      INSERT dbo.Tinseltown (Rebellion, Band) VALUES (@key, @date)
   END TRY
   BEGIN CATCH
      IF error_number() <> 2627
      BEGIN
         ;THROW
      END
   END CATCH
END
We run this procedure as below:
SET XACT_ABORT ON
EXEC FineGirl 3, '20091111 11:11:11'
EXEC FineGirl 4, '20140414 14:14:14'
SELECT @@trancount AS trancount, * FROM Tinseltown WITH (SNAPSHOT)
The output is:
trancount Rebellion Band
----------- ----------- -----------------------
0 1 2009-01-01 00:00:00.000
0 3 2012-12-20 00:00:00.000
0 4 2014-04-14 14:14:14.000
The row for @key = 3 was not inserted, and no error was raised, while the row for @key = 4 was inserted. Note here that we ran this example with XACT_ABORT ON. Had FineGirl been a traditional stored procedure, the plot would not have worked out, because when XACT_ABORT is ON, all errors doom the active transaction. This is different for natively compiled stored procedures: here transactions are doomed only if there is a reason for it. (More about this in just a second.)
Let's now test what happens when we call FineGirl with an input that causes a different error than 2627:
SET XACT_ABORT OFF
EXEC FineGirl 5, 'BadDate'
EXEC FineGirl 6, '20170717 17:17:17'
go
SELECT @@trancount AS trancount, * FROM Tinseltown WITH (SNAPSHOT)
The output is:
Msg 241, Level 16, State 1, Procedure FineGirl, Line 5
Conversion failed when converting date and/or time from character string.
trancount Rebellion Band
----------- ----------- -----------------------
0 1 2009-01-01 00:00:00.000
0 3 2012-12-20 00:00:00.000
0 4 2014-04-14 14:14:14.000
As you see, no row was inserted. That is, the batch was aborted when the first error occurred. This is congruent with what we have seen before: ;THROW always aborts the batch. But there is a difference. Had FineGirl been in traditional T‑SQL, the transaction would still have been active, since we were running with XACT_ABORT OFF. But once you escape the atomic block with an error, the transaction is rolled back. Always. This also applies if the client API submits an attention signal when execution is in the atomic block. In this case, the procedure will either run to completion or roll back. But it will never stop execution half-way in the procedure and leave the transaction open.
It is worth pointing out that in a natively compiled stored procedure, there is no need to add TRY-CATCH as a matter of routine in the way you should do in a traditional stored procedure. You only need TRY-CATCH if you want to ignore an error or take some alternate action within the transaction.
As we have seen, we can choose to ignore an error in a Hekaton procedure. But just like in traditional T‑SQL, a transaction may be doomed, meaning that it must be rolled back. The difference to traditional T‑SQL is that in a natively compiled stored procedure the set of errors that doom a transaction is small and well-defined, and they doom the transaction for a reason.
Essentially there are two such types of errors. One is resource errors, like running out of memory for in-memory tables, which results in an error with severity 17. I will not show an example of this, but leave that as an exercise to the reader. (Just a warning: first set up a resource pool to limit the amount of memory available to in-memory tables, so that your table does not take up all memory in the instance. Been there, done that.)
The other type is concurrency errors. In-Memory OLTP is a lock- and latch-free architecture which relies on optimistic concurrency. Thus, deadlocks cannot occur, but two processes can still clash with each other. Here is an example. First in one window run this:
BEGIN TRANSACTION
UPDATE Tinseltown WITH (SNAPSHOT) SET Band = '19181111' WHERE Rebellion = 1
UPDATE Tinseltown WITH (SNAPSHOT) SET Band = '19450508' WHERE Rebellion = 3
Then in a second window run:
CREATE PROCEDURE BamboozledByLove WITH NATIVE_COMPILATION, SCHEMABINDING AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = 'us_english')
   DELETE dbo.Tinseltown WHERE Rebellion = 3
END
go
SET XACT_ABORT OFF
EXEC BamboozledByLove
go
SELECT @@trancount AS trancount, * FROM Tinseltown WITH (SNAPSHOT)
IF @@trancount > 0 ROLLBACK TRANSACTION
The output in the second window is:
Msg 41302, Level 16, State 111, Procedure BamboozledByLove, Line 4
The current transaction attempted to update a record that has been updated since this transaction started. The transaction was aborted.
trancount Rebellion Band
----------- ----------- -----------------------
0 1 2009-01-01 00:00:00.000
0 3 2012-12-20 00:00:00.000
0 4 2014-04-14 14:14:14.000
So how can we tell that this error doomed the transaction? Well, there is this sentence at the end of the message, The transaction was aborted, but otherwise, no, you cannot really tell from this example. However, let's see what happens if we try to silence the error entirely:
CREATE PROCEDURE UnderneathTheLawn WITH NATIVE_COMPILATION, SCHEMABINDING AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = 'us_english')
   BEGIN TRY
      DELETE dbo.Tinseltown WHERE Rebellion = 3
   END TRY
   BEGIN CATCH
   END CATCH
   SELECT 'Still alive'
   INSERT dbo.Tinseltown(Rebellion, Band) VALUES (9, '19810824')
END
go
SET XACT_ABORT OFF
EXEC UnderneathTheLawn
go
SELECT @@trancount AS trancount, * FROM Tinseltown WITH (SNAPSHOT)
IF @@trancount > 0 ROLLBACK TRANSACTION
This time the output is:
-----------
Still alive
Msg 41300, Level 16, State 109, Procedure UnderneathTheLawn, Line 8
The current transaction cannot be committed and cannot support read or write operations. Roll back the transaction.
trancount Rebellion Band
----------- ----------- -----------------------
0 1 2009-01-01 00:00:00.000
0 3 2012-12-20 00:00:00.000
0 4 2014-04-14 14:14:14.000
We got a different error message this time, and the gist of the message is that the transaction is doomed. Furthermore, the row we tried to insert at the end of the procedure is not present in the Tinseltown table, as it would have been if the error had been a regular error like a primary-key violation. In passing, we can note that the procedure continues its execution as far as the SELECT statement. However, at the INSERT there is a roadblock because of the doomed transaction. (If you put the SELECT after the INSERT, Still alive is not printed.)
The moral of this example is that it is a very bad idea to have an empty CATCH block. All we know here is that the transaction is doomed, but not why. Therefore, you should always explicitly check for the errors you anticipate and want to ignore for some reason; never give carte blanche to all errors, but reraise the ones you do not recognise.
If you want to check explicitly for a doomed transaction, this is not entirely straightforward, because the function xact_state() is not available in natively compiled stored procedures. Instead you need to check whether error_severity() is ≥ 17 (resource error) as well as whether error_number() is any of the four concurrency errors listed in the topic Guidelines for Retry Logic for Transactions on Memory-Optimized Tables in Books Online.
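To make this concrete, here is a sketch of what such a check in a CATCH block could look like. I am assuming that the four concurrency errors are 41301, 41302, 41305 and 41325; verify the list against the Books Online topic before you rely on it:

BEGIN CATCH
   -- Resource errors have severity >= 17; the four numbers below are assumed
   -- to be the concurrency errors listed in Books Online.
   IF error_severity() >= 17 OR
      error_number() = 41301 OR error_number() = 41302 OR
      error_number() = 41305 OR error_number() = 41325
   BEGIN
      ;THROW   -- The transaction is doomed; all we can do is reraise.
   END
   -- Else the transaction is alive, and we are free to ignore the error
   -- or take some alternate action.
END CATCH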
There is little to argue with in Microsoft's decision to make these errors dooming. A resource error is a fairly extreme situation and there is little point in trying to recover from it. As for the concurrency errors, there certainly is a reason to retry in this case, but you must start over from the beginning, since these errors are due to changes that occurred after your transaction started running. (Or, as in this example, a transaction that is left uncommitted.) Exactly when this can happen depends on the isolation level, which is beyond the scope of this article.
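Since you must start over from the beginning, a natural pattern is a retry loop in a traditional T‑SQL wrapper around the native procedure. Here is a sketch of mine, under the same assumption about the error numbers as above; the retry count is arbitrary:

DECLARE @retries int = 5
WHILE @retries > 0
BEGIN
   BEGIN TRY
      EXEC dbo.BamboozledByLove
      SET @retries = 0        -- Success, so leave the loop.
   END TRY
   BEGIN CATCH
      IF error_number() IN (41301, 41302, 41305, 41325) AND @retries > 1
         SET @retries -= 1    -- A concurrency clash; go round and try again.
      ELSE
      BEGIN
         ;THROW               -- Any other error, or retries exhausted: reraise.
      END
   END CATCH
END

If you run this while the transaction in the other window is still open, all five attempts fail and the final error is reraised.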
Before you move on, run ROLLBACK TRANSACTION in the window with the open transaction.
Starting with SQL 2016, natively compiled procedures can call each other. (You cannot call procedures in traditional T‑SQL from Hekaton procedures.) This exhibits a very interesting and very nice behaviour. When the inner procedure is entered, this defines an implicit savepoint. In the previous chapter, I concluded that SAVE TRANSACTION and savepoints are useless. But this is all different with natively compiled procedures, since the norm is that errors do not doom the transaction. Thus, if the inner procedure performs some updates and later fails, the transaction is rolled back to the savepoint. And as long as the error is not one of the dooming ones, the outer procedure can decide to ignore the error, take an alternate action, or do whatever complies with the business logic.
Let's look at an example. First we have an inner procedure that inserts two rows into the Tinseltown table, and in between the INSERT statements it returns what is in Tinseltown so far. Create this procedure first; we will also use it in the next section.
CREATE PROCEDURE PickMeImClean @key int, @month char(6), @day char(2)
   WITH NATIVE_COMPILATION, SCHEMABINDING AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = 'us_english')
   INSERT dbo.Tinseltown (Rebellion, Band) VALUES (@key, @month + '01')
   SELECT 'PickMe', Rebellion, Band FROM dbo.Tinseltown ORDER BY Rebellion
   INSERT dbo.Tinseltown (Rebellion, Band) VALUES (@key + 1, @month + @day)
END
Next we have an outer procedure and a batch to call this procedure followed by a SELECT of @@trancount and the contents of Tinseltown. Finally there is some cleanup so that we can run the script again. Note that to create the procedure below you need to have SQL 2016 or later; it will not compile on SQL 2014.
CREATE PROCEDURE ForTheYoungSophisticate WITH NATIVE_COMPILATION, SCHEMABINDING AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = 'us_english')
   BEGIN TRY
      EXEC dbo.PickMeImClean 11, '201209', '09'
      INSERT dbo.Tinseltown(Rebellion, Band) VALUES (15, '20160815')
      EXEC dbo.PickMeImClean 21, '201211', '31'
      INSERT dbo.Tinseltown(Rebellion, Band) VALUES (25, '20160825')
   END TRY
   BEGIN CATCH
      SELECT error_message() AS "Ignored error"
      -- ;THROW
   END CATCH
   INSERT dbo.Tinseltown(Rebellion, Band) VALUES (29, '20160829')
END
go
SET XACT_ABORT ON
EXEC ForTheYoungSophisticate
go
SELECT @@trancount AS trancount, * FROM Tinseltown WITH (SNAPSHOT) ORDER BY Rebellion
DELETE Tinseltown WITH (SNAPSHOT) WHERE Rebellion > 10
DROP PROCEDURE ForTheYoungSophisticate
ForTheYoungSophisticate calls PickMeImClean and after each call it inserts rows directly into the Tinseltown table. With the given parameters, we expect the first call to PickMeImClean to succeed, while the second call will fail because 2012-11-31 is not a legal date. There is a CATCH block that merely returns the error message as a result set without reraising it. After the CATCH block, ForTheYoungSophisticate inserts one more row into Tinseltown. We run ForTheYoungSophisticate with XACT_ABORT ON.
This is the output:
Rebellion Band
-------- ----------- -----------------------
PickMe 1 2009-01-01 00:00:00.000
PickMe 3 2012-12-20 00:00:00.000
PickMe 4 2014-04-14 14:14:14.000
PickMe 11 2012-09-01 00:00:00.000
Rebellion Band
-------- ----------- -----------------------
PickMe 1 2009-01-01 00:00:00.000
PickMe 3 2012-12-20 00:00:00.000
PickMe 4 2014-04-14 14:14:14.000
PickMe 11 2012-09-01 00:00:00.000
PickMe 12 2012-09-09 00:00:00.000
PickMe 15 2016-08-15 00:00:00.000
PickMe 21 2012-11-01 00:00:00.000
Ignored error
---------------------------------------------------
The conversion of a char data type to a datetime data type resulted in an out-of-range value.
trancount Rebellion Band
----------- ----------- -----------------------
0 1 2009-01-01 00:00:00.000
0 3 2012-12-20 00:00:00.000
0 4 2014-04-14 14:14:14.000
0 11 2012-09-01 00:00:00.000
0 12 2012-09-09 00:00:00.000
0 15 2016-08-15 00:00:00.000
0 29 2016-08-29 00:00:00.000
The first two result sets come from inside of PickMeImClean, and we can note that on the second occasion, we have added rows with Rebellion = 11, 12, 15 and 21. In the final output, 11, 12 and 15 are still there, while 21 is not; on the other hand, 29 is.
That is, when the second call to PickMeImClean failed, the insertion of 21 was rolled back, but the rows inserted earlier in the transaction were not. ForTheYoungSophisticate discarded the error and was able to insert a final row. All this with XACT_ABORT ON, but XACT_ABORT has no effect in natively compiled stored procedures.
If you uncomment the row with the ;THROW statement and run again, you will find that the final output only has rows 1, 3, and 4. That is, the rows that were in Tinseltown when the batch started. This time ForTheYoungSophisticate failed since it reraised the error, and it too was rolled back.
Note: Beware that if you are on SQL 2016, there is a bug in this area. If you comment out the row that inserts Rebellion = 15 as well as the ;THROW, you would expect the same result as above modulo row 15. However, when the calls to PickMeImClean come in direct succession, rows 11 and 12 are also rolled back, and only row 29 is added. I have reported this bug to Microsoft, and it has been corrected in SQL 2017. However, when testing on SQL 2016 SP1 CU7 (the latest CU as of this writing), I find that the issue still exists in this build.
Starting with SQL 2016, you can also add natively compiled triggers on memory-optimised tables. If such a trigger fails, this rolls back whatever updates the trigger has performed, as well as the updates performed directly by the statement that fired the trigger ...unless the error was caught in a CATCH handler and ignored. I leave it as an exercise to the reader to explore this behaviour, but a skeleton of such a trigger follows below.
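The skeleton is entirely my own invention – the trigger name and the business rule are placeholders – and it is only meant to give a feel for the syntax:

CREATE TRIGGER tri_Tinseltown ON dbo.Tinseltown
   WITH NATIVE_COMPILATION, SCHEMABINDING
   FOR INSERT AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = 'us_english')
   DECLARE @bad int
   SELECT @bad = COUNT(*) FROM inserted WHERE Rebellion < 0
   IF @bad > 0
      THROW 50001, 'Negative values for Rebellion are not permitted.', 1
END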
So far we have looked at nesting natively compiled procedures. It is possible to call a natively compiled procedure from an outer transaction started in traditional T‑SQL, as long as this is not a distributed transaction (and no savepoints have been defined with SAVE TRANSACTION). The behaviour inside the natively compiled procedure is the same: the start of the atomic block creates a savepoint, and if the procedure fails, the transaction is rolled back to the savepoint, unless the error is one of the few that dooms a transaction. But what behaviour will we get on the outside in traditional T‑SQL?
Here is an example reminiscent of the one in the previous section:
CREATE TABLE TheBlue(Light varchar(40) NOT NULL)
go
CREATE PROCEDURE DontLetYourMeatLoaf AS
   BEGIN TRANSACTION
   INSERT TheBlue(Light) VALUES ('Hehehehe!')
   EXEC PickMeImClean 11, '201209', '09'
   EXEC PickMeImClean 21, '201209', 'AB'
   INSERT TheBlue(Light) VALUES ('Check out my band aid!')
   COMMIT TRANSACTION
go
SET XACT_ABORT OFF
EXEC DontLetYourMeatLoaf
go
SELECT @@trancount AS trancount, * FROM Tinseltown WITH (SNAPSHOT) ORDER BY Rebellion
SELECT Light FROM TheBlue
IF @@trancount > 0 ROLLBACK TRANSACTION
go
DROP PROCEDURE DontLetYourMeatLoaf
DROP TABLE TheBlue
This script first creates a plain disk table, TheBlue, which is followed by a traditional stored procedure, DontLetYourMeatLoaf, which first starts a transaction and inserts a value into TheBlue. Next, it calls PickMeImClean (introduced in the previous section) twice, the first time with good values and the second time with bad ones. At the end of the procedure, a second row is written to TheBlue, and the transaction is committed. The next batch runs DontLetYourMeatLoaf with XACT_ABORT OFF. The penultimate batch outputs the contents of the tables as well as @@trancount. There is a ROLLBACK in case the transaction is still open at this point. The last batch cleans up, so that we can start over.
Do you want to make a guess at what will happen in this case? This is the output:
Rebellion Band
------ ----------- -----------------------
PickMe 1 2009-01-01 00:00:00.000
PickMe 3 2012-12-20 00:00:00.000
PickMe 4 2014-04-14 14:14:14.000
PickMe 11 2012-09-01 00:00:00.000
Rebellion Band
------ ----------- -----------------------
PickMe 1 2009-01-01 00:00:00.000
PickMe 3 2012-12-20 00:00:00.000
PickMe 4 2014-04-14 14:14:14.000
PickMe 11 2012-09-01 00:00:00.000
PickMe 12 2012-09-09 00:00:00.000
PickMe 21 2012-09-01 00:00:00.000
Msg 241, Level 16, State 1, Procedure PickMeImClean, Line 6 [Batch Start Line 108]
Conversion failed when converting date and/or time from character string.
trancount Rebellion Band
----------- ----------- -----------------------
1 1 2009-01-01 00:00:00.000
1 3 2012-12-20 00:00:00.000
1 4 2014-04-14 14:14:14.000
1 11 2012-09-01 00:00:00.000
1 12 2012-09-09 00:00:00.000
(5 row(s) affected)
Light
----------------------------------------
Hehehehe!
As planned, the second call to PickMeImClean caused an error. Like in the previous section, just before the error, rows 11, 12 and 21 are in the table. In the final output, 21 is missing. However, the rows from the first, good, call are still there. Furthermore, we can see that @@trancount is still 1. Thus, what happened was this: when the error occurred, the work performed by the failing call to PickMeImClean was rolled back to the implicit savepoint taken when the procedure was entered, after which the batch was aborted with the outer transaction left open. This is why the second row never made it into TheBlue, and why the COMMIT was never reached.
You may recall that error 241 is batch- and transaction-aborting with traditional SQL. If you change the second call to read:
EXEC PickMeImClean 21, '201209', '31'
The error message changes to error 242, The conversion of a char data type to a datetime data type resulted in an out-of-range value, which is a statement-terminating error in traditional T‑SQL. But apart from the different error message, the output is exactly the same. To wit, when an error occurs in a Hekaton procedure, the behaviour towards the surrounding batch is always one and the same: the error aborts the current batch, but any open transaction is not rolled back when XACT_ABORT is OFF, with the exception of the few dooming errors we looked at previously.
While I don't like the batch-abort without rollback because of the risk of orphaned transactions, I very much appreciate the consistency. And there is a simple way to avoid the orphaned transactions: run with SET XACT_ABORT ON. In this case, all Hekaton errors will abort the batch and roll back the transaction.
In the example above it would have been possible to continue to work in the transaction and finally commit it, albeit not in the same batch. But say that in DontLetYourMeatLoaf we would like to ignore the error from PickMeImClean as we did in ForTheYoungSophisticate in the previous section. Can we do this? Yes, but it is a little complicated, if we keep in mind that as a matter of routine we should run with SET XACT_ABORT ON in stored procedures written in traditional T‑SQL.
To be able to ignore the error from PickMeImClean we need to wrap the call in TRY-CATCH, but that is not sufficient, since in traditional T‑SQL, the transaction is always doomed when XACT_ABORT is ON. So we need to lower our guard inside the TRY block. Furthermore, we need to keep in mind that there are dooming Hekaton errors, and these cannot be ignored in the CATCH block. Here is a modified version of DontLetYourMeatLoaf:
CREATE PROCEDURE DontLetYourMeatLoaf AS
   SET XACT_ABORT ON
   BEGIN TRANSACTION
   INSERT TheBlue(Light) VALUES ('Hehehehe!')
   EXEC PickMeImClean 11, '201209', '09'
   BEGIN TRY
      SET XACT_ABORT OFF
      EXEC PickMeImClean 21, '201209', 'AB'
      SET XACT_ABORT ON
   END TRY
   BEGIN CATCH
      SET XACT_ABORT ON
      IF xact_state() <> -1
         PRINT 'Ignored error: ' + error_message()
      ELSE
      BEGIN
         ROLLBACK TRANSACTION
         PRINT 'Transaction was doomed. Rolling back'
         ;THROW
      END
   END CATCH
   INSERT TheBlue(Light) VALUES ('Check out my band aid!')
   COMMIT TRANSACTION
Note here that I restore the setting for XACT_ABORT to ON, both in the TRY block and in the CATCH block, to minimise the window where an attention signal could cause the batch to be aborted with the outer transaction active. You may also note that in this particular example I use ;THROW to reraise the error. That is only a matter of laziness. Since you need to run the examples in this chapter in a dedicated database, I wanted to save you from having to create error_handler_sp for this single example.
Insert this version into the script above and run it all. This is the output (ignoring the result sets from PickMeImClean this time):
Ignored error: Conversion failed when converting date and/or time from character string.
(1 row(s) affected)
trancount Rebellion Band
----------- ----------- -----------------------
0 1 2009-01-01 00:00:00.000
0 3 2012-12-20 00:00:00.000
0 4 2014-04-14 14:14:14.000
0 11 2012-09-01 00:00:00.000
0 12 2012-09-09 00:00:00.000
(5 row(s) affected)
Light
----------------------------------------
Hehehehe!
Check out my band aid!
This time the transaction ran to the end, and the last row in DontLetYourMeatLoaf was added to TheBlue. We can see that the error in PickMeImClean occurred but was ignored. No rows in the 20-range were inserted.
We have now looked at how error handling works with atomic blocks in natively compiled stored procedures and we have seen that error handling for these procedures has exactly the consistency that is lacking for traditional T‑SQL. I'm not suggesting that you should start using natively compiled stored procedures and memory-optimised tables only to be able to enjoy the improved error handling. No, you should only consider using In-Memory OLTP if you think you can gain performance from it and you are willing to accept all the restrictions that In-Memory OLTP imposes.
But I found it interesting to explore the behaviour, since it is so much better than the behaviour in traditional T‑SQL. Rather than writing my own chapter on how I would like T‑SQL to work, I wrote this one.
This leads to the question: will we see ATOMIC blocks in traditional T‑SQL? If Microsoft would add atomic blocks to traditional stored procedures, they could do away with all the inconsistent behaviour of today, as there would be no backwards-compatibility issues for procedures with atomic blocks. Particularly, errors would only have to be dooming if there is a good reason for it. While it is an exciting idea, one has to realise that there would be a lot of things that Microsoft would need to figure out. For instance, what if a new procedure with an atomic block calls an old procedure which uses BEGIN, COMMIT and ROLLBACK TRANSACTION? Another issue is that today savepoints are not permitted in distributed transactions; what implication does that have for atomic blocks?
No, I don't have any Uservoice item asking for atomic blocks in traditional SQL. But so much is clear, if atomic blocks were added to traditional T‑SQL, this would mean a revolution at least as big as the introduction of TRY-CATCH was, and I would have to rework this suite of articles from start to end.
In this section I cover various anomalies that pertain to specific situations.
If you have a table variable with a column that has a default that refers to a non-existing sequence, the behaviour is ugly. Consider:
CREATE PROCEDURE DriveInRestaurant AS
   PRINT 'In Hollywood'
   DECLARE @Bongo TABLE (Fury int DEFAULT NEXT VALUE FOR no_such_sequence)
All you get when you run this is Command(s) completed successfully. However, if you try to run the procedure, you see this:
Msg 2812, Level 16, State 62, Line 1
Could not find stored procedure 'DriveInRestaurant'.
Note: Sequences were added in SQL 2012, so this issue does not apply to SQL 2005 and SQL 2008.
Obviously this is bad. Say that the procedure is deployed to production, but for some reason the sequence is not there. There is no error, but in the middle of the night a batch-processing job dies because of the missing procedure.
There is a second way the error can strike: the sequence is dropped after the procedure has been created. In this case, the error occurs when the procedure is compiled on first execution. Again, no error is raised, and execution continues with the next statement:
SET XACT_ABORT OFF
go
CREATE SEQUENCE dropped_sequence
go
CREATE PROCEDURE DriveInRestaurant AS
   PRINT 'In Hollywood'
   DECLARE @Bongo TABLE (Fury int DEFAULT NEXT VALUE FOR dropped_sequence)
go
DROP SEQUENCE dropped_sequence
go
CREATE PROCEDURE DebraKadabra AS
   PRINT 'Calling DriveInRestaurant'
   DECLARE @ret int = 4711
   EXEC @ret = DriveInRestaurant
   SELECT @ret as "@ret", @@error as "@@error"
   PRINT 'Call to DriveInRestaurant completed'
go
EXEC DebraKadabra
The output is:
Calling DriveInRestaurant
@ret @@error
----------- -----------
4711 0
(1 row(s) affected)
Call to DriveInRestaurant completed
The one way you can see that something went wrong is that the return value from the stored procedure is 4711. That is, the EXEC statement never assigned a value to @ret.
As repeatedly pointed out in this article, best practice is to run with SET XACT_ABORT ON. If you call DebraKadabra with this setting in force, you do get an error message, but not any one you would expect:
Calling DriveInRestaurant
Msg 0, Level 11, State 0, Line 0
A severe error occurred on the current command. The results, if any, should be discarded.
This error message comes from SSMS (or more precisely SqlClient) and it indicates an internal error in SQL Server or SqlClient. Since the severity is 11 and there is no stack dump in the SQL Server error log, the accident occurs in the client API. However, there is all reason to suspect that SQL Server is the culprit by spewing out something on the output buffer which is not legal TDS. When I call DebraKadabra through a client that uses OLE DB, I get a message saying Unspecified error. When I try a client that uses ODBC, all I get is Calling DriveInRestaurant but no error message.
Another best practice is TRY-CATCH, which has a similar effect:
CREATE PROCEDURE MuffinMan AS
   SET XACT_ABORT OFF
   BEGIN TRY
      PRINT 'Calling DriveInRestaurant'
      EXEC DriveInRestaurant
      PRINT 'Call to DriveInRestaurant completed'
   END TRY
   BEGIN CATCH
      PRINT 'Caught error in MuffinMan.'
   END CATCH
   PRINT 'Exiting MuffinMan'
go
EXEC MuffinMan
This produces the same output as above. That is, with SSMS/SqlClient I get A severe error..., with OLE DB I get Unspecified error and with ODBC all I get is Calling DriveInRestaurant. So the funny effect is that while the CATCH block is not entered, the TRY block still serves to terminate execution, even when XACT_ABORT is OFF. (However, if you have an open transaction when this happens, it is only rolled back when XACT_ABORT is ON.)
I should point out that this only occurs when the non-existing sequence is referred to from a table variable; it does not happen if you have a temp table, or if NEXT VALUE FOR is used in an expression. The root of the evil is deferred name resolution, which applies to sequences as well. That is, if you have:
CREATE PROCEDURE Sam AS
   PRINT 'BasketCase'
   SELECT NEXT VALUE FOR no_such_sequence
go
EXEC Sam
go
DROP PROCEDURE Sam
You get the output:
BasketCase
Msg 208, Level 16, State 1, Procedure Sam, Line 3
Invalid object name 'no_such_sequence'.
That is, the procedure starts executing and fails when the procedure reaches the bad statement. The declaration of a table variable is static, and therefore no component of it should be subject to deferred name resolution. As we learnt in the section on BULK INSERT, the way that deferred name resolution works under the covers is that the exception is generated, but then suppressed. Apparently, the error with the missing sequence in the table variable is suppressed in the same way, but this time that is incorrect.
This issue was originally discovered by wBob, who was kind enough to share his experience with me. wBob reported the issue on the now retired Connect site, but Microsoft closed it as Won't Fix with this comment: Hello, We investigated the problem and unfortunately the fix is not an easy one based on the current implementation. Given that the scenario is not common enough, I am closing the bug as "won't fix". I don't know about you, but the scenario that I make typos in my code is very common. The behaviour is the same in SQL 2017, so I guess we will have to live with this weirdness.
We learnt earlier that we cannot catch compilation errors like a missing table in the scope they occur. However, there is an odd exception that my fellow MVP Ola Hallengren discovered. Consider this script:
SET XACT_ABORT OFF
go
BEGIN TRY
   BEGIN TRANSACTION
   SELECT col FROM nosuchtable WHERE othercol = 12
   COMMIT TRANSACTION
END TRY
BEGIN CATCH
   PRINT 'Caught error: ' + error_message()
   ROLLBACK TRANSACTION
END CATCH
go
SELECT @@trancount AS trancount
IF @@trancount > 0 ROLLBACK TRANSACTION
The output when you run this the first time is:
Msg 208, Level 16, State 1, Line 5
Invalid object name 'nosuchtable'.
trancount
-----------
1
Which is in line with what we have seen before. But run it a second time, and you are likely to see this:
Caught error: Invalid object name 'nosuchtable'.
trancount
-----------
0
This time the error is caught! It appears that despite the error, some sort of execution plan is put into cache, and this plan includes an entry for the SELECT statement despite the non-existence of nosuchtable. For this reason, this is not a compilation error, but a true run-time error. If you run DBCC FREEPROCCACHE to clear the cache and then run the above again, you will get the original output.
Here is an amusing variation. If you clear the cache again and change the script to start with SET XACT_ABORT ON, the error is not caught when you run it a second time. No matter how many times you run it, the CATCH handler is never invoked. But change it back to SET XACT_ABORT OFF without clearing the cache. On the first attempt, the error is still not caught. But on the second time, it is. And if you move back to SET XACT_ABORT ON, the script still catches the error.
I have only been able to reproduce this behaviour with an independent script; I don't see this behaviour with stored procedures. Also, I found when testing that I need to have the WHERE clause in place to see this behaviour. This may be because SQL Server does not cache plans for very trivial batches.
A final observation is that while TRY-CATCH was introduced in SQL 2005, I don't see this behaviour on that version, but the oldest version where I have been able to repro this is SQL 2008.
This issue is a plain bug that has since been corrected, but it is the kind of bug that gives me nervous shivers. With a proper infrastructure for error handling, I like to think a bug like this one would be impossible.
Paul Bradshaw had a loop where he ran sp_refreshview over all his views to verify that they were correct and he used TRY-CATCH to trap the faulty ones. He had this loop working on SQL 2005, but he was puzzled to find that on SQL 2008 not all errors were caught. Binding errors due to things like dropped columns were caught. But he had views that had been created in SQL 2000 and that used the old-style outer-join operators *= and =*, and these views were not syntactically correct any longer. If he ran sp_refreshview on such a view without TRY-CATCH, he got an error message as expected. But when he wrapped the call to sp_refreshview in TRY-CATCH, there was nothing. No error message, and nothing caught in the CATCH handler.
To add insult to injury, he found that @@error as well as the return value from sp_refreshview were 0. When he asked about the issue in an SQL Server forum, I investigated the case, and I found that it was not one bug – but two. Without the source code to the engine, I could not figure out why TRY-CATCH did not work. However, I was able to spot why the return value was 0. The error message reported that the syntax error occurred in sp_refreshsqlmodule_internal. This procedure is in the resource database. This database is hidden for all users, as it is akin to a DLL. However, I stopped SQL Server and copied the files for this database to a different directory and attached this copy as a regular database. When I looked at the source code for sp_refreshsqlmodule_internal I found this piece of code:
-- REFRESH MODULE DEFINITION --
EXEC %%Module(ID = @objid).Refresh(NameSpaceClass = @ns)
if @@error <> 0
begin
    COMMIT TRANSACTION
    return @@error
end
Don't bother about the funny syntax in the EXEC statement. That is something special which only works in system procedures. But look at the return @@error statement: by the time it executes, the COMMIT TRANSACTION on the line before has already reset @@error to 0, so the procedure returns 0 despite the error. Quite shameful for Microsoft. Repeat after me: @@error is set after every statement.
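To see the pitfall in miniature, here is a three-line demonstration of my own:

SELECT 1/0                               -- Raises a division-by-zero error.
PRINT 'hello'                            -- This statement succeeds...
IF @@error <> 0 PRINT 'Never printed'    -- ...so @@error is 0 again here.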
I reported this as a bug and Microsoft corrected it in SQL 2012. The bug exists in both SQL 2005 and SQL 2008. However, in SQL 2005, the issue exists only with sp_refreshsqlmodule. (sp_refreshview was rewritten to use sp_refreshsqlmodule_internal in SQL 2008; on SQL 2005 it is said to be "server internal".)
Here is a repro to demonstrate the issue. Because of the syntax and the compatibility level used, the repro runs only on SQL 2008 and SQL 2012. After creating the database and the view, there are two batches that run sp_refreshview. In the first we look at the return value, and in the second we use TRY-CATCH:
SET XACT_ABORT OFF
go
CREATE DATABASE PurpleLagoon
ALTER DATABASE PurpleLagoon SET COMPATIBILITY_LEVEL = 90
go
USE PurpleLagoon
go
CREATE VIEW Approximate AS SELECT 1 AS Merge
go
ALTER DATABASE PurpleLagoon SET COMPATIBILITY_LEVEL = 100
go
DECLARE @ret int = 99
PRINT 'Trying return value'
EXEC @ret = sp_refreshview Approximate
PRINT 'The return value is ' + ltrim(str(@ret))
go
PRINT 'Using TRY-CATCH'
BEGIN TRY
   EXEC sp_refreshview Approximate
END TRY
BEGIN CATCH
   PRINT 'Caught: ' + error_message()
END CATCH
go
USE tempdb
go
DROP DATABASE PurpleLagoon
The output on SQL 2008 is:
Trying return value
Msg 156, Level 15, State 1, Procedure sp_refreshsqlmodule_internal, Line 75
Incorrect syntax near the keyword 'Merge'.
The return value is 0
Using TRY-CATCH
The return value is 0, and the error is not caught but swallowed entirely. It's better on SQL 2012:
Trying return value
Msg 156, Level 15, State 1, Procedure sp_refreshsqlmodule_internal, Line 71
Incorrect syntax near the keyword 'Merge'.
Using TRY-CATCH
Caught: Incorrect syntax near the keyword 'Merge'.
There is no trace of the return value, because not only did they make the error catchable, they also made it batch-aborting. We can also see that the CATCH handler was invoked.
This is an issue that Jorge Inverso ran into. He had this script:
DECLARE @SearchString varchar(1000)
SELECT @SearchString = '"Introduction" OR "Repair and Service'
BEGIN TRY
   SELECT dfp.keyword, LEFT (dfp.special_term, 200)
   FROM   sys.dm_fts_parser (@SearchString, 1033, NULL, 0) dfp
   PRINT 'This should not print'
END TRY
BEGIN CATCH
   PRINT error_message()
END CATCH
The output from this script is This should not print. But if you remove the TRY-CATCH block and run the SELECT statement, you get this error:
Msg 7630, Level 15, State 2, Line 6
Syntax error near '"' in the full-text search condition '"Introduction" OR "Repair and Service'.
Which is to be expected, since the search string is incorrect. That is, when you have sys.dm_fts_parser in TRY-CATCH, errors are dropped on the floor and not raised. Interestingly enough, in SQL 2008 RTM, the CATCH section is invoked. This incorrect behaviour first appeared in SQL 2008 SP1, and it still exists in SQL 2012 SP2 and SQL 2014 RTM. (sys.dm_fts_parser is a new DMV in SQL 2008, so this bug does not apply to SQL 2005.)
Jorge reported this bug on the old Connect site. Microsoft replied in 2010 that they would fix it in SQL 2012, but as mentioned this has not happened, and the bug was closed as Won't Fix and not brought over to the new feedback site on Uservoice.
This is an issue I was made aware of by someone posting as cscode on the MSDN forums. The original thread is here. Consider the script below. Note that since it creates a full-text catalogue, it does not run in tempdb.
SET XACT_ABORT OFF
go
CREATE TABLE Billy (The int NOT NULL, Mountain varchar(32) NULL)
CREATE UNIQUE INDEX Ethell ON Billy(The);
CREATE FULLTEXT CATALOG GrowingOffOfHisShoulder AS DEFAULT;
CREATE FULLTEXT INDEX ON Billy(Mountain) KEY INDEX Ethell;
go
CREATE PROCEDURE StudebakerHoch AS
   DECLARE @x varchar (20)
   SELECT @x = '''super hero'''
   BEGIN TRY
      PRINT 'Hello!'
      -- SELECT The FROM Billy WHERE CONTAINS(Mountain, @x)
      SELECT The FROM Billy WHERE CONTAINS(Mountain, 'beef pies')
      PRINT 'This should not print'
   END TRY
   BEGIN CATCH
      PRINT 'Caught: ' + error_message()
   END CATCH
go
EXEC StudebakerHoch
go
DROP PROCEDURE StudebakerHoch
DROP TABLE Billy
DROP FULLTEXT CATALOG GrowingOffOfHisShoulder
The output on SQL 2005 is:
Hello!
The
-----------
(0 row(s) affected)
Caught: Syntax error near 'pies' in the full-text search condition 'beef pies'.
Warning: The fulltext catalog 'GrowingOffOfHisShoulder' is being dropped and is currently set as default.
This is a fairly reasonable output from the script. There is a syntax error in a constant in the script, but it is not detected until run-time. Fair enough. In SQL 2008, Microsoft made major changes to full-text and made it a first-class citizen in the engine. Largely for the better, but the output of the script on SQL 2008 or later is:
Msg 7630, Level 15, State 3, Procedure StudebakerHoch, Line 7
Syntax error near 'pies' in the full-text search condition 'beef pies'.
Warning: The fulltext catalog 'GrowingOffOfHisShoulder' is being dropped and is currently set as default.
That is, as in SQL 2005, the procedure is created. But when the optimiser builds a plan for the procedure, the syntax error is detected. We can tell this from the fact that Hello! is never printed. Obviously, you cannot catch this error within the procedure, since it never starts executing. As this seems like a funny point to raise the error – either SQL Server should not permit me to create the procedure, or it should not bomb until execution reaches the statement – I once filed a bug for this, but Microsoft closed it as Won't Fix.
There is more to it. Move the comment characters so that the section reads:
SELECT The FROM Billy WHERE CONTAINS(Mountain, @x)
-- SELECT The FROM Billy WHERE CONTAINS(Mountain, 'beef pies')
Now the argument to CONTAINS() is not set until run-time. On SQL 2008, the output is:
Hello!
Msg 7630, Level 15, State 3, Procedure StudebakerHoch, Line 6
Syntax error near 'hero'' in the full-text search condition ''super hero''.
Warning: The fulltext catalog 'GrowingOffOfHisShoulder' is being dropped and is currently set as default.
This time we enter the procedure and the TRY block. Nevertheless, the error is not caught, but execution of the procedure is terminated directly. If we put EXEC StudebakerHoch in a TRY block the error is caught, so this condition is handled like a compilation error. Which is quite crazy, since it depends on a run-time value. For this behaviour, I submitted a separate bug and lo and behold: Microsoft fixed this bug in SQL 2012.
I wanted to create an endpoint for Service Broker programmatically. The TCP port conventionally used for Service Broker is 4022, but this port could be in use, for instance by another SQL Server instance on the same machine. I found when testing on SQL 2012 that if I created an endpoint on a port already in use, I got error 9692, The Service Broker endpoint cannot listen on port 4022 because it is in use by another process. My plan was to trap this error with TRY-CATCH, and then try port 4023 and so on. However, it proved not to be that simple, because error 9692 is one of these uncatchable errors.
Here is a repro. To run it, you need an instance which does not already have a Service Broker endpoint (sys.service_broker_endpoints should be empty). You need to find a port that is already in use on your machine. If you have two instances, you can create a Service Broker endpoint on the other instance for port 4022 and use the script as-is. Else, find the ports in use by running netstat -a from the command line.
CREATE ENDPOINT SSBEndpoint
   STATE = STARTED
   AS TCP (LISTENER_PORT = 4022, LISTENER_IP = ALL)
   FOR SERVICE_BROKER (AUTHENTICATION = WINDOWS)
PRINT 'The error is ' + ltrim(str(@@error)) + '.'
go
DROP ENDPOINT SSBEndpoint
go
BEGIN TRY
   CREATE ENDPOINT SSBEndpoint
      STATE = STARTED
      AS TCP (LISTENER_PORT = 4022, LISTENER_IP = ALL)
      FOR SERVICE_BROKER (AUTHENTICATION = WINDOWS)
   PRINT 'The error is ' + ltrim(str(@@error)) + '.'
END TRY
BEGIN CATCH
   PRINT 'We do not come here'
END CATCH
go
DROP ENDPOINT SSBEndpoint
The output on SQL 2012 and SQL 2014 is:
Msg 9692, Level 16, State 1, Line 1
The Service Broker endpoint cannot listen on port 4022 because it is in use by another process.
The error is 9692.
The error is 0.
In the first batch, we try to create the endpoint without TRY-CATCH, and we can see that @@error is set to 9692. But in the second batch, where the endpoint creation is inside TRY-CATCH, the error is suppressed entirely: 1) it is not caught. 2) it is not sent to the client. 3) @@error is set to 0.
I was able to achieve what I wanted, because I found a workaround for the last point. I found that I could wrap the CREATE ENDPOINT statement in dynamic SQL and copy the value of @@error in the dynamic SQL to an output parameter. (And I needed to use dynamic SQL anyway to be able to try different port numbers.)
DECLARE @sql nvarchar(MAX), @err int
BEGIN TRY
   SELECT @sql = 'CREATE ENDPOINT SSBEndpoint
                     STATE = STARTED
                     AS TCP (LISTENER_PORT = 4022, LISTENER_IP = ALL)
                     FOR SERVICE_BROKER (AUTHENTICATION = WINDOWS)
                  SELECT @err = @@error'
   EXEC sp_executesql @sql, N'@err int OUTPUT', @err OUTPUT
   PRINT '@@error=' + ltrim(str(@@error)) + ', @err=' + ltrim(str(@err)) + '.'
END TRY
BEGIN CATCH
   PRINT 'We do not come here'
END CATCH
go
DROP ENDPOINT SSBEndpoint
The output is:
@@error=0, @err=9692.
I still have TRY-CATCH here, since if someone meddles with the code, there could one day be a syntax error in the dynamic SQL.
This applies to SQL 2012 and later. On SQL 2005 and SQL 2008, the situation is even bleaker. On these versions you see no error message at all, and @@error is always set to 0. The only place where you can see that it did not work out is in the SQL Server error log. On SQL 2012 and 2014, the error is only raised for Service Broker endpoints. It is not raised if you create a TSQL endpoint on a port that is in use.
While this may make you feel uncomfortable, there is actually some logic here. The endpoint is after all created in the system catalogue, whereupon a background thread attempts to listen to this port. This may fail, but since this depends on which ports are in use right now, it could work tomorrow. And, maybe more importantly, just because SQL Server was able to listen to the port today, that does not mean there could not be a clash further on. So you could say that as long as the port is entered in the system catalogue, the command has succeeded.
Furthermore, endpoints are nothing you create very often, so you could accept that there is no error raised, but that you need to check the SQL Server error log. And this is OK, as long as you do it manually, which typically would be the case with a TSQL endpoint. However, if you are developing an install script for an application that uses Service Broker, you may want to create the Service Broker endpoint in that script and trap that the chosen port is in use in order to try another one. Apparently, I was not the first one to do this, and someone before me had yelled high enough for Microsoft to make sure that CREATE ENDPOINT raises an error if the port is in use. But I see little reason why they would use a non-standard mechanism, and therefore I submitted a bug, which Microsoft promptly closed as By Design. As if I did not already know that Microsoft wants error handling to be difficult.
SQL Server still ships with quite a few extended stored procedures, of which some are documented and others are only intended for the tools that ship with SQL Server. With regards to error handling, all bets are off with these guys. There is no consistent behaviour, and just about every XP has its own twist. Here is a brief exposé.
The most popular XP is certainly xp_cmdshell. If an error occurs when running the operating-system command, this does not raise an error in T‑SQL, but you can check the return code from xp_cmdshell or trap the output in a one-column table with INSERT-EXEC.
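Here is a minimal sketch of both techniques combined; the command is a placeholder of my own that is likely to fail:

DECLARE @output TABLE (line nvarchar(4000) NULL)
DECLARE @ret    int

INSERT @output (line)
   EXEC @ret = xp_cmdshell 'dir nosuchfolder'

IF @ret <> 0
   SELECT line FROM @output    -- Inspect what the command printed.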
Some of them can raise an error, but the error is not catchable. For instance:
BEGIN TRY
   EXEC xp_regread 'SSDDFG'
   PRINT 'We are here'
END TRY
BEGIN CATCH
   SELECT error_message()
END CATCH
The output is
Msg 22001, Level 15, State 0, Line 23
Error executing Read extended stored procedure: Invalid Parameter
We are here
That is, the CATCH handler was ignored. Another one that behaves the same is xp_delete_file.
Some just print an informational message. Here are two examples:
EXEC xp_loginconfig 'maudit level'
DECLARE @filename varchar (20), @message varchar (20);
EXEC xp_sscanf 'sync -b -fproducts10.tmp -rrandom', 'sync -b -f%d -r%s',
     @filename OUTPUT, @message OUTPUT;
The output is:
Error executing extended stored procedure: Invalid parameter value.
Error executing extended stored procedure: Invalid Parameter.
Msg 50002, Level 1, State 0
That is, xp_loginconfig produced a level 0 message, whereas xp_sscanf raises the stakes somewhat by using severity level 1.
Here is one which actually works properly:
BEGIN TRY
   EXEC xp_revokelogin 'nosuchuser'
   PRINT 'Not here'
END TRY
BEGIN CATCH
   PRINT 'Error was trapped'
END CATCH
And here is one where things go really wrong:
EXEC xp_enumgroups 'gibberish'
I get this output in SSMS 16.5:
'gibberish' is not a valid account name.
Msg 0, Level 11, State 0, Line 31
A severe error occurred on the current command. The results, if any, should be discarded.
There is no dump in the SQL Server error log, so supposedly some error occurs on the SSMS side, which may be due to bad TDS from SQL Server. Interestingly enough, I don't seem to get any problems with SQLCMD.
As for what may happen with user-written extended stored procedures, this is nothing I have investigated. Writing your own extended stored procedures is deprecated and definitely nothing you should engage in.
Here is a situation which is just too amusing not to be shared, although there is no real cause for concern. Consider this batch:
DECLARE @p1 int, @p2 int, @p7 int
EXEC sp_cursorprepexec @p1 OUTPUT, @p2 OUTPUT, NULL,
     N'PRINT ''Magdalena''', 4112, 8193, @p7 OUTPUT
The output is:
Executing SQL directly; no cursor.
Magdalena
Everything seems to go just fine. But look what happens if we wrap it in TRY-CATCH:
BEGIN TRY
   DECLARE @p1 int, @p2 int, @p7 int
   EXEC sp_cursorprepexec @p1 OUTPUT, @p2 OUTPUT, NULL,
        N'PRINT ''Magdalena''', 4112, 8193, @p7 OUTPUT
END TRY
BEGIN CATCH
   ;THROW
END CATCH
Now we see this:
Msg 156, Level 15, State 1, Line 1
Incorrect syntax near the keyword 'PRINT'.
All of a sudden there is an error message! Isn't that amazing?
What happens here is that sp_cursorprepexec attempts to slap a DECLARE CURSOR statement around the T‑SQL batch it was fed. When this generates a syntax error, it falls back to running the batch directly (whence the message Executing SQL directly; no cursor.) You can see the syntax error if you trace execution with Profiler and include the event Error:Exception. The error is suppressed, but for whatever reason it floats to the surface when you wrap the call in TRY-CATCH.
While this certainly is crazy, it is more amusing than a real issue. The procedure sp_cursorprepexec exists for the benefit of various client APIs, which always call it through RPC. You would never call this procedure in your own SQL code, so you would not wrap it in TRY-CATCH for real.
This is the end of part Two of this series. I encourage you to continue with Part Three, although you have my understanding if you feel exhausted at this point. But there is a day tomorrow too. Or next week. :-) In Part Three you will learn about the implementation of SqlEventLog, and you will see several examples of how to implement error handling.
If you have questions, comments or suggestions specific to this article, please feel free to contact me at [email protected]. This includes small things like spelling errors, bad grammar, errors in code samples etc. Since I don't have a publisher, I need to trust my readership to be my tech editors and proof-readers. :-) If you have questions specific to a problem you are working with, I recommend that you ask in a public forum, as this is more likely to give you a quick response.
For a list of acknowledgements, please see the end of Part Three. Below follows a revision history for Part Two.
Nick Reilingh made me aware that some errors in the category Terminates Nothing at All doom the transaction even when XACT_ABORT is OFF. I have updated this section to reflect this.
Because the old Connect site has been retired, I have removed all links to Connect. For items transferred to the new Uservoice site, I have replaced the link. (As I understand it, only items that were active when Connect closed down have been moved.)
I have also updated the status for the bug mentioned in the section Nesting Natively Compiled Stored Procedures; this bug has been fixed in SQL 2017.