Explain "Leader/Follower" Pattern
Solution 1:
As you might have read, the pattern consists of 4 components: ThreadPool, HandleSet, Handle, ConcreteEventHandler (implements the EventHandler interface).
You can think of it as a taxi station at night, where all the drivers are asleep except for one, the leader. The ThreadPool is the station, managing many threads (the cabs).
The leader is waiting for an IO event on the HandleSet, like how a driver waits for a client.
When a client arrives (in the form of a Handle identifying the IO event), the leader driver wakes up another driver to be the next leader and serves his own passenger's request.
While he is taking the client to the given address (calling the ConcreteEventHandler and handing the Handle over to it), the next leader can concurrently serve another passenger.
When a driver finishes, he takes his taxi back to the station and falls asleep if the station is not empty (i.e. another driver is already waiting as leader). Otherwise he becomes the leader.
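To make the roles concrete, here is a minimal sketch in Java (the names HandleSet, EventHandler and LeaderFollowerPool are illustrative, not from any library; the HandleSet is assumed to wrap something like select()/poll()). Note that electing the next leader still needs a small lock; what the pattern avoids is handing the request data from one thread to another:

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical event source, e.g. a wrapper around select()/poll().
interface HandleSet {
    int waitForEvent();             // blocks until an IO event arrives, returns its Handle
}

// Plays the role of ConcreteEventHandler.
interface EventHandler {
    void handleEvent(int handle);   // processes the event identified by 'handle'
}

class LeaderFollowerPool {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition noLeader = lock.newCondition();
    private boolean leaderExists = false;
    private final HandleSet handleSet;
    private final EventHandler handler;

    LeaderFollowerPool(HandleSet handleSet, EventHandler handler, int threads) {
        this.handleSet = handleSet;
        this.handler = handler;
        for (int i = 0; i < threads; i++) {
            new Thread(this::workerLoop, "driver-" + i).start();
        }
    }

    private void workerLoop() {
        while (true) {
            becomeLeaderWhenFree();                // sleep at the station until leadership is free
            int handle = handleSet.waitForEvent(); // the leader waits for a client
            resignLeadership();                    // wake the next driver to become the leader
            handler.handleEvent(handle);           // drive the client: process the event in this thread
        }
    }

    private void becomeLeaderWhenFree() {
        lock.lock();
        try {
            while (leaderExists) {
                noLeader.awaitUninterruptibly();
            }
            leaderExists = true;
        } finally {
            lock.unlock();
        }
    }

    private void resignLeadership() {
        lock.lock();
        try {
            leaderExists = false;
            noLeader.signal();
        } finally {
            lock.unlock();
        }
    }
}
```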
The pros for this pattern are:
- no hand-off of the request between threads: the thread that detects an event also processes it, so no request data has to be passed through shared buffers or queues, which keeps synchronization (locks, mutexes) to a minimum
- more ConcreteEventHandlers can be added without affecting any other EventHandler
- minimizes latency, since an event is processed by the same thread that detected it instead of being dispatched to another thread
The cons are:
- complex to implement correctly (leader election and the hand-over of leadership)
- network IO can become a bottleneck, because only one thread (the leader) waits on the HandleSet at a time
Solution 2:
I want to add to Jake's answer by linking another PDF from the same author that details a use case where they chose the Leader/Follower pattern over other alternatives: http://www.dre.vanderbilt.edu/~schmidt/PDF/OM-01.pdf
Solution 3:
Most people are familiar with the classic pattern of using one thread per request: the server maintains a thread pool, and every incoming request is assigned a thread that is responsible for processing it.
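For contrast, here is a minimal sketch of that approach, assuming a simple blocking TCP server (the port, the pool size and the handleConnection placeholder are arbitrary):

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// One-thread-per-request: a single acceptor hands every new connection
// to a pool thread that is dedicated to it for the whole request.
public class ThreadPerRequestServer {
    public static void main(String[] args) throws IOException {
        ExecutorService pool = Executors.newFixedThreadPool(100);
        try (ServerSocket server = new ServerSocket(8080)) {
            while (true) {
                Socket client = server.accept();             // blocks until a connection arrives
                pool.submit(() -> handleConnection(client)); // a pool thread owns this connection
            }
        }
    }

    private static void handleConnection(Socket client) {
        // placeholder: read the request, write a response, then close the socket
        try { client.close(); } catch (IOException ignored) {}
    }
}
```

Every connection here ties up a dedicated thread for its whole lifetime, which is exactly the cost the leader/follower pattern tries to avoid.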
In the leader/follower pattern, one of these threads is designated as the current leader. This thread is responsible for listening to multiple connections. When a request comes in:
- first, the thread notifies the next (follower) thread in the queue, which becomes the new leader and starts listening for new requests
- then, the thread proceeds with processing the request it has just detected (see the sketch after this list).
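Here is one way that loop can look in Java NIO terms. This is an illustrative sketch, assuming all client channels are registered for OP_READ on a shared Selector and that a one-permit Semaphore represents leadership; a production implementation would need more care around key cancellation and selector wake-ups:

```java
import java.io.IOException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.util.Iterator;
import java.util.concurrent.Semaphore;

// Core worker loop of the leader/follower pattern. Only the thread holding the
// single 'leadership' permit calls select(), so the shared Selector is never
// used by two threads at the same time.
class LeaderFollowerWorker implements Runnable {
    private final Selector selector;    // watches many connections at once
    private final Semaphore leadership; // one permit: held by the current leader

    LeaderFollowerWorker(Selector selector, Semaphore leadership) {
        this.selector = selector;
        this.leadership = leadership;
    }

    @Override
    public void run() {
        while (true) {
            leadership.acquireUninterruptibly();    // followers sleep here until promoted
            SelectionKey ready = waitForReadyKey(); // the leader listens on all connections
            ready.interestOps(0);                   // stop watching this connection while handling it
            leadership.release();                   // step 1: the next follower becomes the leader
            process(ready);                         // step 2: process the request we just detected
        }
    }

    private SelectionKey waitForReadyKey() {
        try {
            while (true) {
                selector.select();
                Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                if (it.hasNext()) {
                    SelectionKey key = it.next();
                    it.remove();
                    return key;
                }
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    private void process(SelectionKey key) {
        // placeholder: read the request from key.channel() and write a response, then
        // re-arm the key so the connection is watched again by whoever is leader now
        key.interestOps(SelectionKey.OP_READ);
        key.selector().wakeup();
    }
}
```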
Compared to the one-thread-per-request pattern, the leader/follower pattern makes more efficient use of resources, especially in scenarios with a large number of concurrent connections. One-thread-per-request needs a separate, relatively costly thread per connection, which can become infeasible when resources are limited or connection counts are very high.
The following paper contains a more thorough analysis of the pattern and advantages/disadvantages when compared with other techniques: http://www.kircher-schwanninger.de/michael/publications/lf.pdf