How to give priority to privileged thread in mutex locking?
Solution 1:
I can think of three methods using only threading primitives:
Triple mutex
Three mutexes would work here:
- data mutex ('M')
- next-to-access mutex ('N'), and
- low-priority access mutex ('L')
Access patterns are:
- Low-priority threads: lock L, lock N, lock M, unlock N, { do stuff }, unlock M, unlock L
- High-priority thread: lock N, lock M, unlock N, { do stuff }, unlock M
That way access to the data is protected, and the high-priority thread can get ahead of the low-priority threads in accessing it.
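A minimal C++ sketch of this pattern, assuming std::mutex and RAII lock guards (the names data_m, next_m and low_m are mine, not from the original):

```cpp
#include <mutex>

std::mutex data_m;  // M: protects the data
std::mutex next_m;  // N: next-to-access
std::mutex low_m;   // L: low-priority gate

void low_priority_access() {
    std::lock_guard<std::mutex> l(low_m);                     // lock L
    std::unique_lock<std::mutex> m(data_m, std::defer_lock);
    {
        std::lock_guard<std::mutex> n(next_m);                // lock N
        m.lock();                                             // lock M
    }                                                         // unlock N
    // ... do stuff with the data ...
}                                                             // unlock M, then unlock L

void high_priority_access() {
    std::unique_lock<std::mutex> m(data_m, std::defer_lock);
    {
        std::lock_guard<std::mutex> n(next_m);                // lock N
        m.lock();                                             // lock M
    }                                                         // unlock N
    // ... do stuff with the data ...
}                                                             // unlock M
```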
Mutex, condition variable, atomic flag
The primitive way to do this is with a condition variable and an atomic:
- Mutex M;
- Condvar C;
- atomic bool hpt_waiting;
Data access patterns (a C++ sketch follows the list):
- Low-priority thread: lock M, while (hpt_waiting) wait C on M, { do stuff }, broadcast C, unlock M
- High-priority thread: hpt_waiting := true, lock M, hpt_waiting := false, { do stuff }, broadcast C, unlock M
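The same pattern with C++ standard-library types (std::atomic<bool>, std::condition_variable); the function names are illustrative:

```cpp
#include <atomic>
#include <condition_variable>
#include <mutex>

std::mutex m;                          // M
std::condition_variable c;             // C
std::atomic<bool> hpt_waiting{false};

void low_priority_access() {
    std::unique_lock<std::mutex> lk(m);                  // lock M
    c.wait(lk, [] { return !hpt_waiting.load(); });      // while (hpt_waiting) wait C on M
    // ... do stuff with the data ...
    c.notify_all();                                      // broadcast C
}                                                        // unlock M

void high_priority_access() {
    hpt_waiting = true;                  // announce intent before blocking on M
    std::unique_lock<std::mutex> lk(m);  // lock M
    hpt_waiting = false;
    // ... do stuff with the data ...
    c.notify_all();                      // broadcast C
}                                        // unlock M
```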
Mutex, condition variable, two non-atomic flags
Alternatively, you can use two non-atomic bools with a condvar; in this technique the mutex/condvar protects the flags, and the data is protected not by a mutex but by a flag (a C++ sketch follows the list):
- Mutex M;
- Condvar C;
- bool data_held, hpt_waiting;
- Low-priority thread: lock M, while (hpt_waiting or data_held) wait C on M, data_held := true, unlock M, { do stuff }, lock M, data_held := false, broadcast C, unlock M
- High-priority thread: lock M, hpt_waiting := true, while (data_held) wait C on M, data_held := true, unlock M, { do stuff }, lock M, data_held := false, hpt_waiting := false, broadcast C, unlock M
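A C++ sketch of this two-flag variant, following the access patterns above literally; note the work on the data happens outside the mutex, since the data is protected by data_held rather than by M:

```cpp
#include <condition_variable>
#include <mutex>

std::mutex m;                 // protects the two flags, not the data
std::condition_variable c;
bool data_held = false;       // true while some thread owns the data
bool hpt_waiting = false;     // true while the high-priority thread wants in

void low_priority_access() {
    {
        std::unique_lock<std::mutex> lk(m);
        c.wait(lk, [] { return !hpt_waiting && !data_held; });
        data_held = true;                  // claim the data via the flag
    }                                      // unlock M
    // ... do stuff with the data, outside the mutex ...
    {
        std::lock_guard<std::mutex> lk(m);
        data_held = false;
        c.notify_all();                    // broadcast C
    }                                      // unlock M
}

void high_priority_access() {
    {
        std::unique_lock<std::mutex> lk(m);
        hpt_waiting = true;
        c.wait(lk, [] { return !data_held; });
        data_held = true;
    }
    // ... do stuff with the data, outside the mutex ...
    {
        std::lock_guard<std::mutex> lk(m);
        data_held = false;
        hpt_waiting = false;
        c.notify_all();
    }
}
```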
Solution 2:
Put requesting threads on a 'priority queue'. The privileged thread can get first go at the data when it's free.
One way to do this would be with an array of ConcurrentQueues[privilegeLevel], a lock and some events.
Any thread that wants access to the data enters the lock. If the data is free (a boolean flag), it takes the data object and exits the lock. If the data is in use by another thread, the requesting thread pushes an event onto one of the concurrent queues, according to its privilege level, exits the lock, and waits on the event.
When a thread wants to release its ownership of the data object, it gets the lock and iterates the array of ConcurrentQueues from the highest-privilege end down, looking for an event (i.e. queue count > 0). If it finds one, it signals it and exits the lock; if not, it sets the 'dataFree' boolean and exits the lock.
When a thread waiting on an event for access to the data is made ready, it may access the data object.
I think that should work. Please, other developers, check this design and see if you can think of any races etc. I'm still suffering somewhat from 'hospitality overload' after a trip to CZ.
Edit - probably don't even need concurrent queues because of the explicit lock across them all. Any old queue would do.
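For what it's worth, here is how this queue-per-privilege-level design might look in C++. The original wording (ConcurrentQueue, events) suggests .NET, so the class and member names below are purely illustrative; and, per the edit above, plain std::deque is used since the single lock already covers all the queues:

```cpp
#include <condition_variable>
#include <cstddef>
#include <deque>
#include <memory>
#include <mutex>
#include <vector>

class PriorityGate {
    struct Event {                       // one per waiting thread
        std::condition_variable cv;
        bool signaled = false;
    };

    std::mutex lock_;
    bool data_free_ = true;
    std::vector<std::deque<std::shared_ptr<Event>>> queues_;  // index = privilege level

public:
    explicit PriorityGate(std::size_t levels) : queues_(levels) {}

    void acquire(std::size_t privilege_level) {
        std::unique_lock<std::mutex> lk(lock_);
        if (data_free_) {                // data is free: take it and go
            data_free_ = false;
            return;
        }
        auto ev = std::make_shared<Event>();
        queues_[privilege_level].push_back(ev);          // queue up at our level
        ev->cv.wait(lk, [&] { return ev->signaled; });   // wait to be handed the data
    }

    void release() {
        std::lock_guard<std::mutex> lk(lock_);
        // Scan from the highest-privilege queue down for a waiter.
        for (auto q = queues_.rbegin(); q != queues_.rend(); ++q) {
            if (!q->empty()) {
                auto ev = q->front();
                q->pop_front();
                ev->signaled = true;     // hand ownership directly to that waiter
                ev->cv.notify_one();
                return;                  // data stays "not free"
            }
        }
        data_free_ = true;               // nobody waiting
    }
};
```

A thread at privilege level p would call acquire(p), use the data, then call release(); release() wakes exactly one waiter, highest privilege first, so the privileged thread always gets the next turn.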