What is the clock frequency inside 10Gb and 100Gb Ethernet cards?
Solution 1:
You are correct that frequencies that high would be completely unmanageable. Sending one bit per clock cycle would also cause problems for various types of radio transmission. So we have modulation techniques which allow more than one bit to be sent per symbol.
A touch of terminology: baud (a term most people will remember from the days of telephone modems) is the symbol rate at which a communications medium operates. A symbol can carry more than one bit, so sending multi-bit symbols allows higher throughput at lower frequencies.
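To make the baud-vs-bit-rate relationship concrete, here is a rough back-of-the-envelope sketch in Python (illustrative only, not taken from any spec): the raw bit rate is just the symbol rate times the bits each symbol can carry, times the number of parallel pairs or lanes.

```python
import math

def bit_rate(baud, levels, pairs=1):
    # Rough upper bound: symbols/second * bits per symbol * parallel pairs.
    # Real PHYs give some of this back to line coding and error correction.
    bits_per_symbol = math.log2(levels)   # e.g. 4 levels -> 2 bits/symbol
    return baud * bits_per_symbol * pairs

# A 4-level scheme at 125 Mbaud on one pair carries up to 250 Mb/s raw:
print(bit_rate(125e6, 4) / 1e6, "Mb/s")   # 250.0 Mb/s
```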
10MbE (10Base-T) used a very simple inverted Manchester encoding, 20 Mbaud (Manchester uses two symbols per bit), and a single -2.5v/2.5v differential pair for communication in each direction.
100MbE (100Base-TX) used 4B/5B encoding, 125 Mbaud, and a single -1.0/1.0v differential pair for communication in each direction. So 4/5 * 125 Mbaud = 100 Mb/s in each direction.
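Spelling that arithmetic out (illustrative Python, nothing more):

```python
baud = 125e6               # 125 Mbaud line rate
coding_efficiency = 4 / 5  # 4B/5B: 4 data bits carried per 5 line bits
print(baud * coding_efficiency / 1e6, "Mb/s per direction")   # 100.0
```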
-
1GbE (1000Base-T) uses PAM-5 TCM, the same 125 Mbaud as 100MbE, and all four -1.0/1.0v differential pairs for communication in both directions at the same time. The PAM-5 coding allows for 5 signal levels, but the trellis coding spends the extra redundancy on error correction, so each symbol carries 2 data bits. Thus 125 Mbaud * 4 * 2b = 1 Gb/s.
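The same calculation for 1000Base-T, again purely as an illustration:

```python
baud = 125e6          # same symbol rate as 100Base-TX
pairs = 4             # all four pairs, used in both directions at once
bits_per_symbol = 2   # PAM-5 with trellis coding nets 2 data bits per symbol
print(baud * pairs * bits_per_symbol / 1e9, "Gb/s")   # 1.0
```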
Side notes: 1GbE uses only a single pair to negotiate the initial connection. If a cable has only this pair working, it can lead to an unresponsive NIC that nevertheless seems to connect. Also, almost all new NICs can negotiate on any of the 4 pairs, thus enabling auto MDI/MDI-X (though this is not a requirement of the spec). 1000Base-T requires Cat5e cabling. 1000Base-TX simplified NIC design but required Cat6 cable; it never got off the ground for various reasons.
-
10GbE uses PAM-16 DSQ128 coding, 833 Mbaud, and 4 pairs as before. The PAM-16 DSQ128 coding with LDPC error correction is sufficiently complicated that I will not try to explain how it works here, other than to say it effectively sends 3 bits of information per symbol, even over cabling rated for only 500 MHz (or less in some circumstances). Thus 833.3 Mbaud * 4 * 3b = 10 Gb/s.
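And for 10GBase-T, using the numbers above (illustrative only):

```python
baud = 833.3e6        # ~833 Mbaud per pair
pairs = 4
bits_per_symbol = 3   # PAM-16/DSQ128 + LDPC nets ~3 data bits per symbol
print(round(baud * pairs * bits_per_symbol / 1e9, 1), "Gb/s")   # 10.0
```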
Side notes: 10GbE requires Cat6a cabling for 100m operation, Cat6 for 55m, and may work over Cat5e for very short runs. Cabling other than Cat6a should be discouraged because it falls short of the standard 100m reach. Also, older NICs didn't have the gain necessary to drive 10GbE over 100m distances and were limited to shorter cables - check with the manufacturer for details if you have a first-generation 10GbE NIC.
-
40GbE and 100GbE have no finalized copper standards at this time. There are two 40GBase-T proposals. The first uses the same techniques as 10GBase-T, but 4x faster, and requires cabling certified for ~1600 MHz. The second uses PAM-32 DSQ-512 and requires cabling rated for ~1200 MHz (the higher complexity would mean relatively expensive NICs). Both are likely to use LDPC to allow the use of slightly underrated cabling.
Connectors: Neither 40 nor 100GbE will use the 8P8C (colloquially RJ-45) connector, but likely a variation of it called GG45, with the 4 pairs at the 4 corners of the connector. There is also an intermediate connector, the ARJ45-HD, with pins for both 10MbE-10GbE (RJ-45) and 40GbE-100GbE (GG45). TERA is a competing connector rated for 1000 MHz, but it seems unlikely to become the new standard.
Cabling: Cat7 and Cat7a are cabling standards rated for 600 MHz and 1200 MHz; in the ISO standards they are designated Class F and Class FA. Cat8.1 and Cat8.2 have been proposed with ratings of 1600 and 2000 MHz.
There is some debate as to whether there will be a 100GBase-T standard at all since, with current technology, Cat7a, Cat8.1, and Cat8.2 would only carry such connections 10m, 30m, and 50m respectively. Cat7a and up are already dramatically different cables from Cat6a and below, requiring shielding around both the individual pairs and the cable as a whole. The testing that suggests these connections are possible does not demonstrate a commercially viable implementation either. There is reasonable speculation that more advanced/sensitive circuitry could carry 100GbE at some point in the future, but it's only speculation.
Worth mentioning: 10GBase-R, 40GBase-R, and 100GBase-R are a family of fiber specifications for 10, 40, and 100GbE which have all been standardized. These are all available in Short (-SR, 400m), Long (-LR, 10km), Extended (-ER, 40km), Proprietary (-ZR, 80km), and EPON/x (-PR/x, 20km) ranges. They all use a common 64b/66b encoding, 10.3125 GBaud, and simply use more "lanes" for additional capacity (1, 4, and 10 respectively) - a lane being a separate wavelength of light on the same fiber (or a separate parallel fiber, depending on the variant). A 200GBase proprietary implementation is working its way toward standardization, though with modulated DWDM frequencies and ranges up to 2 Mm (2,000 km).
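As a sanity check on those fiber numbers (again, just illustrative Python), the per-lane data rate is the 10.3125 GBaud symbol rate times the 64b/66b coding efficiency, multiplied by the number of lanes:

```python
baud = 10.3125e9       # per-lane symbol rate
efficiency = 64 / 66   # 64b/66b line coding
for name, lanes in [("10GBase-R", 1), ("40GBase-R", 4), ("100GBase-R", 10)]:
    print(name, baud * efficiency * lanes / 1e9, "Gb/s")   # 10, 40, 100
```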
Solution 2:
Chris S already gave the correct answer: bauds, not bps.
Besides, 5 GHz is not "awfully high for transistors to support". There are terahertz transistors commercially available.
Of course, a GHz signal on a transmission line would be incredibly hard to shield from noise for more than a few millimeters. Optical signals, on the other hand....