Is there a technical reason no 10GbE USB 3.1 Gen 2 adapters exist yet?

For example, it is practically impossible to create FireWire adapters over USB because FireWire needs DMA, which USB doesn't provide. Is there a similar blocking technical reason why no 10 gigabit Ethernet adapters have appeared using the USB-C port (but not Thunderbolt)? GigE over USB 3.1 Gen 1 (and even USB 2.0) is everywhere; is there something in the 802.3an standard that blocks this?

I understand the economic situation here: the more expensive laptops with USB-C are also Thunderbolt 3 capable, and it's somewhat unlikely that owners of cheap ones will want a 10GbE adapter. This question, however, is about the technical reasons (the economic argument can be countered with USB-C being much cheaper than Thunderbolt 3, but let's leave that argument for different sites).


Solution 1:

I will have another shot at answering, this time being explicit rather than implicit.

For example, it is practically impossible to create FireWire adapters over USB because FireWire needs DMA, which USB doesn't provide. Is there a similar blocking technical reason why no 10 gigabit Ethernet adapters have appeared using the USB-C port (but not Thunderbolt)?

There should not be any reason why a 10GbE adapter cannot exist. The 2.5GbE and 5GbE USB controllers by Aquantia and Realtek are proof that >1GbE adapters can and do exist. 10GbE over USB could come, but after protocol overhead, 10Gb/s USB delivers more like 6Gb/s. There is also the matter that the vast majority of USB controllers are bottlenecked by PCIe, because vendors do not wish to sacrifice four PCIe 3.0 lanes for two ports, making things even worse. I have seen plenty of dual-port USB 3.1 Gen 2 controllers (20Gb/s total) backed by PCIe 3.0 x1 (8Gb/s). Thunderbolt 3 controllers are the most common USB 3.1 implementation that is not bottlenecked and can provide the full 10Gb/s of USB, but as we know, if you have Thunderbolt 3, there is no point.

As somebody mentioned, a bottleneck never stopped manufacturers from making 1GbE USB 2.0 adapters, but those were made because they were still a speed improvement over 100Mb/s adapters. Because 10GbE over USB would deliver more like 6Gb/s, manufacturers are more likely to make 5GbE adapters (as they did). The only advantage of a bottlenecked 10GbE adapter over a full-speed 5GbE adapter would be connecting to 10GbE switches without NBASE-T support at >1GbE speeds. This will go away with USB 3.2 (20Gb/s over USB-C), but the issue of the USB controllers themselves being bottlenecked will only get worse.
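As a rough sanity check, the bandwidth arithmetic above can be sketched in a few lines of Python. The ~40% overhead figure is an assumption back-derived from the "10Gb/s looks more like 6Gb/s" estimate, not a measured value:

```python
# Back-of-the-envelope: usable throughput after protocol overhead,
# then capped by the PCIe link behind the USB controller.
# The overhead fraction is an assumption, not a measured figure.

def effective_throughput(link_gbps, overhead_fraction, pcie_gbps=None):
    """Usable payload rate after overhead, capped by the PCIe uplink if given."""
    usable = link_gbps * (1 - overhead_fraction)
    if pcie_gbps is not None:
        usable = min(usable, pcie_gbps)
    return usable

# 10Gb/s USB 3.1 Gen 2 with ~40% assumed overhead -> roughly 6Gb/s usable
print(effective_throughput(10, 0.40))

# Dual-port Gen 2 controller (20Gb/s total) behind PCIe 3.0 x1 (~8Gb/s):
# even before protocol overhead matters, the PCIe link caps it at 8Gb/s
print(effective_throughput(20, 0.40, pcie_gbps=8))
```

This is why a bottlenecked 10GbE adapter would land between a full-speed 5GbE adapter and the real thing, rather than anywhere near line rate.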

USB has poor latency. That might not be as critical for Ethernet as it is for graphics processing units, but at 10Gb/s it can still make a difference. Even with my Thunderbolt SANLink3 N1, Windows struggles to push 10Gb/s down the pipe (Linux does it without flinching, so that may be a driver issue). I have seen people compare the Apple Thunderbolt to Gigabit Ethernet adapter with USB ones and conclude that the Thunderbolt PCIe one is more power efficient and uses less CPU. This would be far more noticeable with 10GbE. When the 5GbE USB adapters finally hit the shelves, I am sure somebody will test and compare the overheads, CPU usage and power efficiency against Thunderbolt and/or pure PCIe NBASE-T cards. Then we will know for sure.

Power consumption is an issue, largely because of the long cable runs 10GBASE-T has to support. I am fairly sure that over short cables, 10GbE does not consume nearly as much power. My SANLink3 N1 uses about 9W for the Ethernet controller and 2W for the Thunderbolt subsystem, according to the manufacturer (Promise Technology). Thunderbolt 3 ports guarantee 15W for bus-powered devices, but USB comes with no such guarantee. The manufacturer would probably receive a lot of complaints about problems caused by inadequate power from some (or many) ports. They would need to provide a supplementary power input, which would make the product less desirable because of decreased convenience and portability. People would not take to it well, because every USB Ethernet adapter so far has been bus-powered.
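To put those figures in context, here is a small power-budget sketch. The 9W/2W device figures are Promise's numbers quoted above; the per-port budgets are nominal spec values; and treating a hypothetical USB bridge chip as drawing about the same as the Thunderbolt subsystem is purely an assumption:

```python
# Power budget sketch. Device figures are from Promise Technology's numbers
# for the SANLink3 N1; assuming a USB bridge would draw roughly what the
# Thunderbolt subsystem does (an assumption, not a datasheet value).
eth_controller_w = 9.0   # 10GbE controller
bridge_w = 2.0           # host-interface subsystem
device_w = eth_controller_w + bridge_w   # ~11 W total

# Nominal per-port budgets from the USB and Thunderbolt specs.
port_budgets_w = {
    "USB 3.x Type-A (900 mA @ 5 V)": 4.5,
    "USB-C at 1.5 A": 7.5,
    "USB-C at 3 A": 15.0,
    "Thunderbolt 3 (guaranteed)": 15.0,
}
for port, budget in port_budgets_w.items():
    verdict = "OK" if budget >= device_w else "needs external power"
    print(f"{port}: {verdict}")
```

Only ports that happen to supply 3A (or Thunderbolt 3, where 15W is guaranteed) would cover such a device, which is exactly the "works on some ports, not others" support nightmare described above.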

Things that are PCIe-only, like GPUs, usually have a critical dependency on DMA, not only for performance but baked into how their drivers are written and how their physical interface is built. The only USB GPUs are DisplayLink chips, and they are not really GPUs: they just take a framebuffer from the CPU/OS with a kernel-mode driver and turn it into a video signal. USB Ethernet, on the other hand, has existed for a very long time and has finally scaled to 5Gb/s, which suggests it can scale further in the future.

GigE over USB 3.1 Gen 1 (and even USB 2.0) is everywhere; is there something in the 802.3an standard that blocks this?

The reasons I discussed are not technical in the sense of being prohibited by the standards or being almost impossible to work around. There is no technical reason barring the possibility of a true 10GbE USB adapter, but there are too many practical problems for manufacturers to be interested.

I understand the economic situation here: the more expensive laptops with USB-C are also Thunderbolt 3 capable, and it's somewhat unlikely that owners of cheap ones will want a 10GbE adapter. This question, however, is about the technical reasons (the economic argument can be countered with USB-C being much cheaper than Thunderbolt 3, but let's leave that argument for different sites).

The technical reasons that do exist are not insurmountable, but without mainstream adoption or demand, manufacturers likely believe the cons outweigh the pros. USB 3.2 (20Gb/s) should tip the scales somewhat by allowing 10GbE without a bottleneck (assuming the controller is backed by enough PCIe lanes), but it could take a long while for a USB 3.2 adapter to be released, as adoption will be slow. I do not know whether you consider practical and economic limitations to be the same thing (that is, not technical limitations).

If this answer is not adequate, then you will need to clarify exactly what you are asking and what form the answer should take.

Solution 2:

Here are the official announcements for USB to NBASE-T PHYs:

Aquantia - AQC111U and AQC112U - https://www.aquantia.com/products/aqtion/chips/aqtion-aqc111u-aqc112u/

Realtek - RTL8156 - http://www.realtek.com/press/newsViewOne.aspx?NewsID=454&Langid=1&PNid=0&PFid=1&Level=1

As far as I can tell, products based on these ICs have yet to become available on the shelves, but it should not be too much longer (I hope).

It is worth noting that Aquantia's AQC111U will be bottlenecked by USB 3.1 Gen 1. Although it pairs 5Gb/s USB with 5Gb/s Ethernet, there will be overheads. I presume the overheads will be similar to those when using USB SSDs for file transfer: about 280Mb/s usable of 480Mb/s USB 2.0, which is roughly 58% usable or 42% overhead. This comes from observing a maximum of about 35MB/s on USB 2.0. Somebody can correct me if this is a bad assumption.
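That estimate works out as follows. This is a back-of-the-envelope extrapolation of the observed USB 2.0 overhead onto USB 3.1 Gen 1, which may well not hold in practice (USB 3.x uses a different encoding and protocol):

```python
# Extrapolating overhead from the ~35 MB/s figure observed on USB 2.0.
# Assumes USB 3.1 Gen 1 has a similar overhead fraction, which is a guess.
observed_mbytes_per_s = 35
usable_mbps = observed_mbytes_per_s * 8       # ~280 Mb/s of payload
usable_fraction = usable_mbps / 480           # fraction of the 480 Mb/s link

print(f"usable: {usable_fraction:.0%}")       # roughly 58% (so ~42% overhead)

# If 5 Gb/s USB 3.1 Gen 1 carried similar overhead, the AQC111U would manage:
print(f"{5 * usable_fraction:.2f} Gb/s")      # roughly 2.9 of 5 Gb/s
```

If the overhead really is in that ballpark, the AQC111U would top out well short of its 5GbE line rate, which is the point being made above.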

Either way, they should have used USB 3.1 Gen 2 (10Gb/s) for the AQC111U, and USB 3.2 (20Gb/s) for a hypothetical future 10GbE USB adapter.

The AQC112U and RTL8156 are unaffected, as they are only 2.5GbE adapters and 5Gb/s USB should not bottleneck them.

As usual, it is worth noting for readers who happen to have Thunderbolt 3 ports that there are already a good number of 10GbE adapters for Thunderbolt 3, some of which are portable and bus-powered. Thunderbolt 2 will not work with bus-powered devices, as the Thunderbolt 2 to Thunderbolt 3 adapters do not deliver power; the only way would be to daisy-chain through a powered Thunderbolt 3 dock.

If Thunderbolt is not an option for you, then I guess you are still interested in this topic. It is just worth mentioning, as not everybody is aware of Thunderbolt and might not have realised it is an option for them.