Is Kestrel using a single thread for processing requests like Node.js?
Both Kestrel and Node.js are based on libuv.
While Node.js explicitly states that it uses an event loop, I can't find whether the same is true for Kestrel, or whether it uses thread pooling / a request queue like IIS does.
[Diagram: Kestrel behind a web server]

Node.js event loop:
   ┌───────────────────────┐
┌─>│        timers         │
│  └──────────┬────────────┘
│  ┌──────────┴────────────┐
│  │     I/O callbacks     │
│  └──────────┬────────────┘
│  ┌──────────┴────────────┐
│  │     idle, prepare     │
│  └──────────┬────────────┘      ┌───────────────┐
│  ┌──────────┴────────────┐      │   incoming:   │
│  │         poll          │<─────┤  connections, │
│  └──────────┬────────────┘      │   data, etc.  │
│  ┌──────────┴────────────┐      └───────────────┘
│  │         check         │
│  └──────────┬────────────┘
│  ┌──────────┴────────────┐
└──┤    close callbacks    │
   └───────────────────────┘
Updated for ASP.NET Core 2.0. As pointed out by poke, the server has been split into hosting and transport layers, with libuv belonging to the transport layer. The libuv ThreadCount has moved to its own LibuvTransportOptions class, and those options are set separately on your web host builder with the UseLibuv() extension method:
- If you check the LibuvTransportOptions class on GitHub, you will see a ThreadCount option:

```csharp
/// <summary>
/// The number of libuv I/O threads used to process requests.
/// </summary>
/// <remarks>
/// Defaults to half of <see cref="Environment.ProcessorCount" /> rounded down and clamped between 1 and 16.
/// </remarks>
public int ThreadCount { get; set; } = ProcessorThreadCount;
```
- The option can be set in the call to UseLibuv in your web host builder. For example:

```csharp
public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseLibuv(opts => opts.ThreadCount = 4)
        .UseStartup<Startup>()
        .Build();
```
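As a side note, the default mentioned in the `<remarks>` above can be reproduced with a small sketch; the class and variable names below are mine, not taken from the Kestrel source:

```csharp
using System;

class ThreadCountDefaultSketch
{
    static void Main()
    {
        // Sketch of the documented default: half of Environment.ProcessorCount,
        // rounded down and clamped between 1 and 16.
        int defaultThreadCount = Math.Min(Math.Max(Environment.ProcessorCount / 2, 1), 16);
        Console.WriteLine($"Default libuv ThreadCount on this machine: {defaultThreadCount}");
    }
}
```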
In ASP.NET Core 1.x, by contrast, the libuv configuration was part of the Kestrel server options:
- If you check the KestrelServerOptions class in its GitHub repo, you will see there is a ThreadCount option:

```csharp
/// <summary>
/// The number of libuv I/O threads used to process requests.
/// </summary>
/// <remarks>
/// Defaults to half of <see cref="Environment.ProcessorCount" /> rounded down and clamped between 1 and 16.
/// </remarks>
public int ThreadCount { get; set; } = ProcessorThreadCount;
```
- The option can be set in the call to UseKestrel, for example in a new ASP.NET Core app:

```csharp
public static void Main(string[] args)
{
    var host = new WebHostBuilder()
        .UseKestrel(opts => opts.ThreadCount = 4)
        .UseContentRoot(Directory.GetCurrentDirectory())
        .UseIISIntegration()
        .UseStartup<Startup>()
        .Build();

    host.Run();
}
```
Digging through the source code:
- You can see the libuv listener threads (or KestrelThreads) being created in the KestrelEngine.
- Some places call the ThreadPool methods so they can run code on the CLR thread pool instead of on the libuv threads (using ThreadPool.QueueUserWorkItem). The pool seems to default to a maximum of 32K threads, which can be modified via configuration. (A simplified sketch of this dispatch pattern follows the list.)
- The Frame<TContext> delegates to the actual application (like an ASP.NET Core application) for handling the request.
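To make the second bullet more concrete, here is a simplified, hypothetical sketch (not Kestrel's actual code) of the pattern described there: work arriving on an IO thread is queued to the CLR thread pool with ThreadPool.QueueUserWorkItem so the event loop is never blocked.

```csharp
using System;
using System.Threading;

class IoToThreadPoolSketch
{
    // Hypothetical stand-in for what happens when bytes arrive on a libuv IO thread:
    // the non-IO work (parsing, application code) is queued to the CLR thread pool
    // so the event loop thread can go back to doing IO.
    static void OnBytesReceived(byte[] buffer)
    {
        ThreadPool.QueueUserWorkItem(state =>
        {
            var data = (byte[])state;
            // Parsing / application work runs here, on a thread pool thread,
            // not on the libuv event loop thread.
            Console.WriteLine(
                $"Processing {data.Length} bytes on thread pool thread " +
                $"{Thread.CurrentThread.ManagedThreadId}");
        }, buffer);
    }

    static void Main()
    {
        OnBytesReceived(new byte[] { 1, 2, 3 });
        Thread.Sleep(100); // give the queued work item time to run in this demo
    }
}
```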
So we could say Kestrel uses multiple libuv event loops for IO. The actual work is done in managed code on standard worker threads, using the CLR thread pool.
I would love to find more authoritative documentation about this (the official docs don't give much detail). The best source I have found is Damian Edwards talking about Kestrel on Channel 9. Around minute 12 he explains that:
- libuv uses a single-threaded event loop model
- Kestrel supports multiple event loops
- Kestrel does only IO work on the libuv event loops
- All non-IO work (including anything HTTP-related, like parsing, framing, etc.) is done in managed code on standard .NET worker threads.
Additionally, a quick search has returned:
- David Fowler talking about thread pooling in Kestrel here. It also confirms that a request might still jump between threads in ASP.NET Core (as it did in previous versions).
- This blog post looking at Kestrel when it came out.
- This question about how threads are managed in ASP.NET Core.
Threading is transport-specific. With the libuv transport (the default in 2.0), as stated in Daniel J.G.'s answer, there is a number of event loops based on the number of logical processors on the machine, and that can be overridden by setting the value on the options. By default each connection is bound to a particular thread, and all IO operations take place on that thread. User code is executed on thread pool threads because we don't trust that users won't block IO threads. When you make IO calls on these thread pool threads (e.g. HttpResponse.WriteAsync), Kestrel does the work to marshal them back to the appropriate IO thread the socket was bound to. A typical request flow looks like this:
[ read from network ]
  -> dispatch to thread pool
[ http parsing ], [ execute middleware pipeline ]
  -> call to write: enqueue user work to the IO thread
[ write to network ]
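As an illustration of that flow from the application's point of view, here is a hedged sketch (my own example, not from the answer) of a minimal Startup whose middleware checks whether it is running on a thread pool thread before calling HttpResponse.WriteAsync; the write itself is the part Kestrel marshals back to the connection's IO thread:

```csharp
using System.Threading;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

public class Startup
{
    public void Configure(IApplicationBuilder app)
    {
        app.Run(async context =>
        {
            // User/middleware code runs on a CLR thread pool thread...
            bool onPoolThread = Thread.CurrentThread.IsThreadPoolThread;

            // ...while the actual socket write triggered by WriteAsync is marshaled
            // by Kestrel back to the IO thread the connection is bound to.
            await context.Response.WriteAsync(
                $"Running on a thread pool thread: {onPoolThread}");
        });
    }
}
```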
Of course, you can always tell Kestrel that you are a pro and will never block the IO thread, and run your code on it. But I wouldn't, unless I knew what I was doing (and I don't :D).