.NET sockets vs C++ sockets at high performance

Would anybody know a good article/reference explaining how .NET sockets use I/O completion ports under the hood?

I suspect the only reference would be the implementation (i.e. Reflector or another assembly decompiler). With that you will find that all asynchronous IO goes through an I/O completion port, with callbacks being processed on the I/O thread pool (which is separate from the normal thread pool).
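You can see this for yourself with a few lines of code. A minimal sketch (assuming something is listening on localhost port 9000; the endpoint and class name are hypothetical): the completion callback runs on an I/O thread-pool thread, not on the thread that started the receive.

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Threading;

class IocpCallbackDemo
{
    static void Main()
    {
        var socket = new Socket(AddressFamily.InterNetwork,
                                SocketType.Stream, ProtocolType.Tcp);
        socket.Connect(new IPEndPoint(IPAddress.Loopback, 9000)); // hypothetical endpoint

        var buffer = new byte[4096];
        // The kernel posts the completion to an I/O completion port; the
        // runtime then invokes this callback on an I/O thread-pool thread.
        socket.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None, ar =>
        {
            int read = socket.EndReceive(ar);
            Console.WriteLine("Read {0} bytes on thread {1} (thread-pool thread: {2})",
                read,
                Thread.CurrentThread.ManagedThreadId,
                Thread.CurrentThread.IsThreadPoolThread);
        }, null);

        Console.ReadLine(); // keep the process alive until the callback fires
    }
}
```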

"use 5 completion ports"

I would expect to use a single completion port processing all the IO, serviced by a single pool of threads with roughly one thread per core handling completions (assuming you are doing any other IO, including disk, asynchronously as well).

Multiple completion ports would make sense if you have some form of prioritisation going on.

My question is this: should I expect the same performance using C++ sockets and C# sockets?

Yes and no, depending on how narrowly you define the "using ... sockets" part. From the start of the asynchronous operation until the completion is posted to the completion port, I would expect no significant difference (all of that processing happens in the Win32 API or the Windows kernel).

However, the safety that the .NET runtime provides adds some overhead: buffer lengths are checked, delegates are validated, and so on. If the application is CPU-bound this is likely to make a difference, and at the extreme those small differences can easily add up.

The .NET version will also occasionally pause for GC (.NET 4.5 adds background collection, so this should improve). There are techniques to minimise the garbage that accumulates: e.g. pool and reuse objects rather than allocating new ones, and use structs while avoiding boxing.
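As a hypothetical illustration of the boxing point (the `Header` struct is made up for the example): a non-generic collection boxes every struct onto the heap, while a generic one stores them inline, so the GC never sees them.

```csharp
using System.Collections;
using System.Collections.Generic;

struct Header          // value type: stored inline, no per-instance heap object
{
    public int Length;
    public short Flags;
}

class BoxingDemo
{
    static void Main()
    {
        // ArrayList stores object references, so each Add boxes the struct:
        // one heap allocation per item for the GC to collect later.
        var boxed = new ArrayList();
        boxed.Add(new Header { Length = 128, Flags = 1 });

        // List<Header> stores the structs inline in its backing array:
        // no boxing, and the capacity can be pre-sized to avoid regrowth.
        var unboxed = new List<Header>(500);
        unboxed.Add(new Header { Length = 128, Flags = 1 });
    }
}
```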

In the end, if the C++ version works and is meeting your performance needs, why port?


You can't do a straight port of the code from C++ to C# and expect the same performance. .NET does a lot more than C++ when it comes to memory management (GC) and making sure that your code is safe (bounds checks etc.).

I would allocate one large buffer for all IO operations (for instance 65535 x 500 = 32,767,500 bytes) and then assign a chunk of it to each SocketAsyncEventArgs (for receive and send operations alike). Memory is cheaper than CPU. Use a buffer manager / factory to provide chunks for all connections and IO operations (the Flyweight pattern). Microsoft does this in their Async example; a sketch follows below.
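A simplified sketch of what such a buffer manager might look like (the class below is illustrative, not Microsoft's actual code): one big allocation up front, handed out in fixed-size chunks via SetBuffer.

```csharp
using System.Collections.Generic;
using System.Net.Sockets;

// Flyweight-style buffer manager: one big allocation, handed out in fixed chunks,
// so per-connection buffers never churn the GC or fragment the heap.
class BufferManager
{
    private readonly byte[] _buffer;              // the single large allocation
    private readonly int _chunkSize;
    private readonly Stack<int> _freeOffsets = new Stack<int>();

    public BufferManager(int chunkSize, int chunkCount)   // e.g. 65535 x 500
    {
        _chunkSize = chunkSize;
        _buffer = new byte[chunkSize * chunkCount];
        for (int i = chunkCount - 1; i >= 0; i--)
            _freeOffsets.Push(i * chunkSize);
    }

    // Point the SocketAsyncEventArgs at its own chunk of the shared buffer.
    public bool AssignBuffer(SocketAsyncEventArgs args)
    {
        if (_freeOffsets.Count == 0)
            return false;                         // pool exhausted
        args.SetBuffer(_buffer, _freeOffsets.Pop(), _chunkSize);
        return true;
    }

    // Return the chunk to the pool when the connection closes.
    public void ReleaseBuffer(SocketAsyncEventArgs args)
    {
        _freeOffsets.Push(args.Offset);
        args.SetBuffer(null, 0, 0);
    }
}
```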

Both the Begin/End and the Async methods use IO completion ports in the background. The latter don't need to allocate an object (the IAsyncResult) for each operation, which boosts performance.
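For example, a single SocketAsyncEventArgs can be allocated once per connection and reused for every receive. A sketch under those assumptions (the class is illustrative); note that ReceiveAsync returns false when the operation completed synchronously, in which case the Completed event does not fire, so the loop below handles that case without recursing:

```csharp
using System.Net.Sockets;

class ReceiveLoop
{
    private readonly Socket _socket;
    private readonly SocketAsyncEventArgs _args;  // allocated once, reused for every receive

    public ReceiveLoop(Socket socket, byte[] buffer, int offset, int size)
    {
        _socket = socket;
        _args = new SocketAsyncEventArgs();
        _args.SetBuffer(buffer, offset, size);    // e.g. a chunk from the buffer manager
        _args.Completed += OnCompleted;
    }

    public void Start()
    {
        // ReceiveAsync returns false when the operation completed synchronously;
        // the Completed event does NOT fire then, so loop here instead of recursing.
        while (!_socket.ReceiveAsync(_args))
        {
            if (!Process(_args))
                return;
        }
    }

    private void OnCompleted(object sender, SocketAsyncEventArgs e)
    {
        if (Process(e))
            Start();   // post the next receive, reusing the same args object
    }

    private bool Process(SocketAsyncEventArgs e)
    {
        if (e.SocketError != SocketError.Success || e.BytesTransferred == 0)
            return false;  // connection failed or closed by the peer
        // ... consume e.Buffer from e.Offset, length e.BytesTransferred ...
        return true;
    }
}
```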