Select unlocked row in PostgreSQL
Is there a way to select rows in PostgreSQL that aren't locked? I have a multi-threaded app that will do:
Select... order by id desc limit 1 for update
on a table.
If multiple threads run this query, they all try to pull back the same row. One gets the row lock; the others block and then fail after the first one updates the row. What I'd really like is for the second thread to get the first row that matches the WHERE clause and isn't already locked.
To clarify, I want each thread to immediately update the first available row after doing the select.
So if there are rows with IDs 1, 2, 3, 4, the first thread would come in, select the row with ID=4 and immediately update it. If during that transaction a second thread comes in, I'd like it to get the row with ID=3 and immediately update that row.
FOR SHARE won't accomplish this, and neither will NOWAIT, since the WHERE clause will still match the locked row (ID=4 in my example). Basically what I'd like is something like "AND NOT LOCKED" in the WHERE clause.
Users
-----------------------------------------
ID | Name | flags
-----------------------------------------
1 | bob | 0
2 | fred | 1
3 | tom | 0
4 | ed | 0
If the query is "Select ID from users where flags = 0 order by ID desc limit 1" and, when a row is returned, the next thing is "Update Users set flags = 1 where ID = 4" (using the returned ID), then I'd like the first thread in to grab the row with ID 4 and the next one in to grab the row with ID 3.
If I append "For Update" to the select, then the first thread gets the row while the second one blocks and then returns nothing, because once the first transaction commits the WHERE clause is no longer satisfied.
If I don't use "For Update" then I need a WHERE clause on the subsequent update (WHERE flags = 0) so only one thread can update the row. The second thread will select the same row as the first, but the second thread's update will fail.
Either way the second thread fails to get a row and update it, because I can't get the database to give row 4 to the first thread and row 3 to the second while the transactions overlap.
Solution 1:
This feature, SELECT ... SKIP LOCKED, was implemented in PostgreSQL 9.5. http://www.depesz.com/2014/10/10/waiting-for-9-5-implement-skip-locked-for-row-level-locks/
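With it, each thread simply skips over rows that another transaction has locked and takes the next match instead of blocking. A minimal sketch against the Users table from the question:
select id
from users
where flags = 0
order by id desc
limit 1
for update skip locked;
The first thread gets the row with ID 4; a concurrent second thread gets the row with ID 3 instead of blocking or returning nothing.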
Solution 2:
No No NOOO :-)
I know what the author means. I have a similar situation and I came up with a nice solution. First I will describe my situation. I have a table in which I store messages that have to be sent at a specific time. PG doesn't support scheduled execution of functions, so we have to use daemons (or cron). I use a custom-written script that opens several parallel processes. Every process selects a set of messages that have to be sent with a precision of +1 sec / -1 sec. The table itself is dynamically updated with new messages.
So every process needs to download a set of rows. This set of rows cannot be downloaded by another process, because that would make a mess (some people would receive a couple of messages when they should receive only one). That is why we need to lock the rows. The query to download a set of messages with the lock:
FOR messages IN
    SELECT * FROM public.messages
    WHERE sendTime >= CURRENT_TIMESTAMP - '1 SECOND'::INTERVAL
      AND sendTime <= CURRENT_TIMESTAMP + '1 SECOND'::INTERVAL
      AND sent IS FALSE
    FOR UPDATE
LOOP
    -- DO SMTH
END LOOP;
A process with this query is started every 0.5 sec, so the next query waits for the first one to release the locked rows. This approach creates enormous delays. Even when we use NOWAIT the query raises an exception, which we don't want, because there might be new messages in the table that have to be sent. If we simply use FOR SHARE the query executes properly, but it still takes a lot of time, creating huge delays.
In order to make it work we do a little magic:
- changing the query:
FOR messages IN
    SELECT * FROM public.messages
    WHERE sendTime >= CURRENT_TIMESTAMP - '1 SECOND'::INTERVAL
      AND sendTime <= CURRENT_TIMESTAMP + '1 SECOND'::INTERVAL
      AND sent IS FALSE
      AND is_locked(msg_id) IS FALSE
    FOR SHARE
LOOP
    -- DO SMTH
END LOOP;
- the mysterious function is_locked(msg_id) looks like this:
CREATE OR REPLACE FUNCTION is_locked(integer) RETURNS BOOLEAN AS $$
DECLARE
    checkout_id integer;
    id integer;
    is_it boolean;
BEGIN
    checkout_id := $1;
    is_it := FALSE;
    BEGIN
        -- we use FOR UPDATE to attempt the lock and NOWAIT to get the error immediately
        SELECT msg_id INTO id
        FROM public.messages
        WHERE msg_id = checkout_id
        FOR UPDATE NOWAIT;
    EXCEPTION WHEN lock_not_available THEN
        is_it := TRUE;
    END;
    RETURN is_it;
END;
$$ LANGUAGE plpgsql VOLATILE COST 100;
Of course we can customize this function to work on any table in your database. In my opinion it is better to create one check function per table; adding more things to this function only makes it slower. The clause takes longer to check as it is, so there is no need to make it slower still. For me this is the complete solution and it works perfectly.
Now when I have my 50 processes running in parallel, every process has a unique set of fresh messages to send. Once they are sent I just update the row with sent = TRUE and never go back to it again.
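That update is a one-liner inside the loop (a sketch; messages is the loop record from the query above):
UPDATE public.messages SET sent = TRUE WHERE msg_id = messages.msg_id;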
I hope this solution will also work for you (author). If you have any questions just let me know :-)
Oh, and let me know if this worked for you as well.
Solution 3:
I use something like this:
select *
into l_sms
from sms
where prefix_id = l_prefix_id
and invoice_id is null
and pg_try_advisory_lock(sms_id)
order by suffix
limit 1;
and don't forget to call pg_advisory_unlock when you're done with the row.
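For example, once the row has been processed (a sketch, assuming this runs inside a PL/pgSQL function and l_sms is the record selected above):
-- release the advisory lock taken by pg_try_advisory_lock in the query
PERFORM pg_advisory_unlock(l_sms.sms_id);
Otherwise the advisory lock is held until the end of the session.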
Solution 4:
If you are trying to implement a queue, take a look at PGQ, which has solved this and other problems already. http://wiki.postgresql.org/wiki/PGQ_Tutorial
Solution 5:
It appears that you are trying to grab the highest-priority item in a queue that is not already being handled by another process.
A likely solution is to add a where clause limiting it to unhandled requests:
select * from queue where flag=0 order by id desc for update;
update queue set flag=1 where id=:id;
--if you really want the lock:
select * from queue where id=:id for update;
...
Hopefully, the second transaction will block while the update to the flag happens, then it will be able to continue, but the flag will limit it to the next in line.
It is also likely that using the serializable isolation level, you can get the result you want without all of this insanity.
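For instance, something like this (a sketch; with serializable isolation, the transaction that loses the race fails with a serialization error and simply has to be retried):
begin isolation level serializable;
select id from queue where flag=0 order by id desc limit 1;
update queue set flag=1 where id=:id;
commit;
-- on a serialization_failure (SQLSTATE 40001), retry the whole transaction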
Depending on the nature of your application, there may be better ways of implementing this than in the database, such as a FIFO or LIFO pipe. Additionally, it may be possible to reverse the order that you need them in, and use a sequence to ensure that they are processed sequentially.