Is a single SQL Server statement atomic and consistent?

Solution 1:

I've been operating under the assumption that a single statement in SQL Server is consistent

That assumption is wrong. The following two transactions have identical locking semantics:

STATEMENT

BEGIN TRAN; STATEMENT; COMMIT

No difference at all. Single statements and auto-commits do not change anything.

So merging all logic into one statement does not help (if it does, it was by accident because the plan changed).

Let's fix the problem at hand: SERIALIZABLE will fix the inconsistency you are seeing because it guarantees that your transactions behave as if they executed serially. Equivalently, they behave as if they executed instantly.

You will get deadlocks. If you are OK with a retry loop, you're done at this point.
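For example, here is a minimal sketch of such a retry loop, assuming the work is a single UPDATE (the statement is a placeholder; substitute your own) and that error 1205, the deadlock-victim error, is the only thing worth retrying. THROW requires SQL Server 2012 or later.

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
DECLARE @retries int = 0;

WHILE 1 = 1
BEGIN
  BEGIN TRY
    BEGIN TRAN;
      UPDATE Gifts SET GivenAway = 1 WHERE GiftID = 42;  -- placeholder for the real work
    COMMIT TRAN;
    BREAK;  -- success
  END TRY
  BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK;
    IF ERROR_NUMBER() <> 1205 OR @retries >= 5
      THROW;  -- not a deadlock, or retried too often: re-raise
    SET @retries += 1;  -- deadlock victim: loop and try again
  END CATCH
END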

If you want to invest more time, apply locking hints to force exclusive access to the relevant data:

UPDATE Gifts  -- U-locked anyway
SET GivenAway = 1
WHERE GiftID = (
   SELECT TOP 1 g.GiftID
   FROM Gifts g WITH (UPDLOCK, HOLDLOCK) --this normally just S-locks
   WHERE g.GivenAway = 0
    AND (SELECT COUNT(*) FROM Gifts g2 WITH (UPDLOCK, HOLDLOCK) WHERE g2.GivenAway = 1) < 5
   ORDER BY g.GiftValue DESC
)
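
Here UPDLOCK makes the read take update locks instead of shared locks, so two concurrent executions cannot both read the same candidate row and then fight over converting their locks to exclusive ones; HOLDLOCK holds those locks (with serializable-style range locking) until the end of the transaction.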

You will now see reduced concurrency. That might be totally fine depending on your load.

The very nature of your problem makes achieving concurrency hard. If you require a solution for that, we'd need to apply more invasive techniques.

You can simplify the UPDATE a bit:

WITH g AS (
   SELECT TOP 1 Gifts.*
   FROM Gifts WITH (UPDLOCK, HOLDLOCK)
   WHERE Gifts.GivenAway = 0
    AND (SELECT COUNT(*) FROM Gifts g2 WITH (UPDLOCK, HOLDLOCK) WHERE g2.GivenAway = 1) < 5
   ORDER BY Gifts.GiftValue DESC
)
UPDATE g  -- U-locked anyway
SET GivenAway = 1

This gets rid of one unnecessary join.

Solution 2:

Below is an example of an UPDATE statement that increments a counter value atomically:

-- Do this once for test setup
CREATE TABLE CountingNumbers (Value int PRIMARY KEY NOT NULL)
INSERT INTO CountingNumbers VALUES(1) 

-- Run this in parallel: start it in two tabs on SQL Server Management Studio
-- You will see each connection generating new numbers without duplicates and without timeouts
while (1=1)
BEGIN
  declare @nextNumber int
  -- Taking the Update lock is only relevant in case this statement is part of a larger transaction
  -- to prevent deadlock
  -- When executing without a transaction, the statement will itself be atomic
  UPDATE CountingNumbers WITH (UPDLOCK, ROWLOCK) SET @nextNumber=Value=Value+1
  print @nextNumber
END
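
As the comments note, the UPDLOCK hint mainly matters when the statement is part of a larger transaction. A minimal sketch of that case (the surrounding work is just a placeholder):

DECLARE @nextNumber int

BEGIN TRAN
  -- UPDLOCK keeps two transactions from reading the same counter value before either writes it back
  UPDATE CountingNumbers WITH (UPDLOCK, ROWLOCK) SET @nextNumber = Value = Value + 1
  -- ... the rest of the larger transaction would use @nextNumber here ...
  PRINT @nextNumber
COMMIT TRAN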

Solution 3:

A SELECT does not lock exclusively; even under SERIALIZABLE it only takes shared locks, and shared locks do not stop other connections from reading the same rows. Once the SELECT is over, its locks are released; only then does the UPDATE take its locks, now that the SELECT has told it which rows to lock. In the meantime, anyone else can SELECT (and then try to update) the very same rows!

The only sure way to safely read and lock a row is:

declare @key int = 1, @d1 int, @d2 int  --example values; int columns assumed for illustration

begin transaction

--lock what I need to read (a no-op write that takes an exclusive lock on the row)
update mytable set col1=col1 where mykey=@key

--now read what I need
select @d1=col1, @d2=col2 from mytable where mykey=@key

--now do whatever calculations/checks are needed on the row I read to decide my update
if @d1<@d2 set @d1=@d2 else set @d1=@d2 * 2 --just an example calc

--now do the actual update based on what I read and the logic
update mytable set col1=@d1, col2=@d2 where mykey=@key

commit transaction

This way, any other connection running the same statements for the same data will wait at the first (fake) update until the previous transaction is done. When that lock is released, only one connection is granted the lock for its update, and that connection will read committed, finalized data for its calculations and can then decide if, and with what values, to do the second 'real' update.

In other words, when you need to select information to decide if/how to update, you need a begin/commit transaction block, and you need to start with a fake update of what you are about to select - before you select it (an update with an OUTPUT clause will also do).
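
For reference, here is a minimal sketch of that OUTPUT variant, using the same hypothetical mytable: the locking 'fake' update and the read become a single statement.

declare @key int = 1, @d1 int, @d2 int  --example values
declare @r table (col1 int, col2 int)

begin transaction

--no-op write that locks the row and returns its current values in one statement
update mytable set col1=col1
output inserted.col1, inserted.col2 into @r(col1, col2)
where mykey=@key

select @d1=col1, @d2=col2 from @r

--calculations and the real update as before
if @d1<@d2 set @d1=@d2 else set @d1=@d2 * 2
update mytable set col1=@d1, col2=@d2 where mykey=@key

commit transaction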