What is a PCI-Express Lane?

I was reading an article bemoaning that the i7-5820K will only have 28 PCI-Express lanes, while its sibling processors have 40.

Isn't 28 lanes already plenty? How many lanes would a normal home PC actually need, and for what purposes?

I don't know which of the following would be connected to PCIe, but do they even add up to 28?

2 hard drives, 1 SSD, 1 CD/DVD/Blu-ray drive, a card reader, a printer, Wi-Fi or wired LAN (but seldom both), a joystick, a keyboard, a mouse, and graphics.

What other devices would need direct access to PCIe in a home/office PC, or even in a server?


Many devices use more than 1 lane.

For example, gaming graphics cards use 16 lanes. Some powerful gaming computers have two graphics cards; that's 32 PCIe lanes (two x16 ports).

The Intel i7-5820K can't handle two graphics cards at x16. For some gaming enthusiasts or engineers, that may be a serious problem: they may have to choose a different CPU (maybe some Xeon) if they need more than 4 cores and two x16 graphics cards.

PCIe SSDs use multiple lanes too (typically x4 or x8).

Many gigabit network adapters use PCIe x4, and there are also 10-gigabit server adapters that use PCIe x8.
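
To sanity-check those widths, here is a rough back-of-the-envelope sketch (my own, ignoring protocol overhead; it assumes per-lane rates of 250 MB/s for PCIe 1.x and 500 MB/s for PCIe 2.x, which are also tabulated in the next answer) of how many lanes a network adapter's line rate calls for:

    import math

    def lanes_needed(line_rate_gbps, per_lane_mbps):
        """Rough lane count for a given network line rate:
        convert Gb/s to MB/s (divide by 8), then divide by the
        per-lane bandwidth, ignoring protocol overhead."""
        required_mbps = line_rate_gbps * 1000 / 8
        return math.ceil(required_mbps / per_lane_mbps)

    print(lanes_needed(1, 250))   # 1 -> gigabit fits in a single PCIe 1.x lane
    print(lanes_needed(10, 500))  # 3 -> 10-gigabit needs several PCIe 2.x lanes, hence x8 slots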

28 lanes is not that many. If a motherboard manufacturer provides one x16 slot, one x8 slot, and one x4 slot (28 lanes total), you can use only 3 devices there, and... that's all.
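
To make the lane arithmetic concrete, here is a minimal Python sketch (the helper name is my own invention) that checks a set of slot widths against the i7-5820K's 28-lane budget:

    # Hypothetical helper: check whether a set of PCIe slot widths
    # fits within a CPU's total lane budget.
    def fits_lane_budget(slot_widths, cpu_lanes):
        used = sum(slot_widths)
        return used, used <= cpu_lanes

    # One x16 + one x8 + one x4 slot against the i7-5820K's 28 lanes:
    used, ok = fits_lane_budget([16, 8, 4], cpu_lanes=28)
    print(used, ok)   # 28 True -> exactly fills the budget

    # Two x16 graphics cards alone would already exceed it:
    used, ok = fits_lane_budget([16, 16], cpu_lanes=28)
    print(used, ok)   # 32 False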

Here is an image from the Wikipedia PCIe article; I have added lane counts for these PCIe slots.

[Image: PCIe slots of different widths on a motherboard, annotated with their lane counts]

You can read more in another answer written by reirab.


A PCIe 'lane' consists of 2 differential pairs of signals: one pair is used for sending and the other for receiving, which allows simultaneous bidirectional communication. Each lane is point-to-point; that is, each lane directly attaches a single host to a single device. PCIe switches can, however, be used when a host lane needs to be shared among multiple devices. Per Wikipedia, the bandwidth of a single PCIe lane (in each direction) is as follows:

  • PCIe 1.x: 250 MB/s
  • PCIe 2.x: 500 MB/s
  • PCIe 3.0: 985 MB/s
  • PCIe 4.0: 1969 MB/s
  • PCIe 5.0: 3938 MB/s
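
Since total link bandwidth is simply the per-lane rate multiplied by the lane count, a small sketch (using the table values above; the function name is my own) can compute it for any generation and width:

    # Per-lane bandwidth in MB/s, per direction (the table above)
    PER_LANE_MBPS = {"1.x": 250, "2.x": 500, "3.0": 985, "4.0": 1969, "5.0": 3938}

    def link_bandwidth_mbps(generation, lanes):
        """Total one-direction bandwidth of a PCIe link:
        per-lane rate multiplied by the number of lanes."""
        return PER_LANE_MBPS[generation] * lanes

    print(link_bandwidth_mbps("3.0", 16))  # 15760 -> ~15.8 GB/s for a PCIe 3.0 x16 link
    print(link_bandwidth_mbps("2.x", 4))   # 2000  -> ~2 GB/s for a PCIe 2.x x4 SSD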

As Kamil said, most PCIe devices use multiple lanes. Some devices, such as NICs, sound cards, and other relatively low-bandwidth devices, use just 1 lane. SSDs, RAID controllers, and other medium-bandwidth devices typically use 4 or 8 lanes. Graphics cards and other high-bandwidth devices (FPGAs, for instance) typically use 16 lanes. At system boot, the host and device negotiate the number of lanes that will be used for a particular connection. Typically, this is the smaller of the number of lanes the card is wired for and the number of lanes the slot it's installed in is wired for (i.e. the maximum physically possible), though it may be less in cases where so many PCIe devices are installed that the host does not have enough lanes to give each of them its maximum. The physical slots are designed so that cards with connectors for fewer lanes fit and function properly in larger slots (e.g. a PCIe x4 card will fit in a PCIe x16 slot and will negotiate to run with 4 lanes).
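
The negotiation described above can be modeled, in simplified form, as taking the minimum of the widths involved (a sketch of the idea, not of any real firmware; the optional host cap is my own addition to illustrate the lane-exhaustion case):

    def negotiated_lanes(card_lanes, slot_lanes, host_available=None):
        """Simplified model of PCIe lane negotiation at boot:
        the link runs at the smaller of the card's and the slot's widths,
        capped further if the host has too few lanes left to give."""
        width = min(card_lanes, slot_lanes)
        if host_available is not None:
            width = min(width, host_available)
        return width

    print(negotiated_lanes(card_lanes=4, slot_lanes=16))  # 4 -> an x4 card in an x16 slot
    print(negotiated_lanes(16, 16, host_available=8))     # 8 -> the host can only spare 8 lanes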

Also, some chipsets use some of the PCIe lanes to attach the Southbridge. This is how the Intel X58 chipset worked (the chipset for the Bloomfield chips, the high end of the first-generation Core i7 processors). It used 4 lanes to attach the Southbridge, leaving 36 lanes for everything else. These were typically divided up as 2 16-lane links for graphics cards and 4 lanes for any other devices. Boards that supported 3 or 4 graphics cards had to drop some or all of them to 8 lanes when that many cards were installed.
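
Here is a toy model of that kind of split (the 32 graphics-dedicated lanes come from the 2 x16 links mentioned above; the all-or-nothing fallback rule is my own simplification, since real boards may mix x16 and x8):

    def x58_style_split(num_cards, gpu_lanes=32):
        """Toy model: give every card 16 lanes if the budget allows,
        otherwise drop them all to 8 lanes (real boards may mix widths)."""
        per_card = 16 if num_cards * 16 <= gpu_lanes else 8
        return [per_card] * num_cards

    print(x58_style_split(2))  # [16, 16] -> both cards at full width
    print(x58_style_split(4))  # [8, 8, 8, 8] -> all cards fall back to x8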

Having 2 graphics cards is very common in gaming systems, and many gaming systems actually have 3 or 4. Even in a 2-card setup, at least one card will have to fall back to x8 mode in a system with only 28 lanes available. Additionally, systems that use graphics cards as computational accelerators often have 2-4 of them installed. For these situations, having only 28 lanes is a problem, as it greatly limits the host-to-device (and device-to-host) bandwidth available to each card. CUDA in particular has gained widespread popularity over the last several years, especially in the high-performance computing community. The PCIe bus can very easily become the bottleneck in GPGPU (General-Purpose computing on Graphics Processing Units) applications, so having as many lanes per card as possible is highly desirable in GPGPU systems.
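
To see why this matters for GPGPU workloads, here is a back-of-the-envelope sketch (PCIe 3.0 per-lane figure from the table above, ignoring protocol overhead) of how long an idealized 1 GB host-to-device copy takes at x16 versus x8:

    PCIE3_PER_LANE_MBPS = 985  # PCIe 3.0, per lane, per direction

    def transfer_seconds(megabytes, lanes, per_lane_mbps=PCIE3_PER_LANE_MBPS):
        """Idealized host-to-device copy time, ignoring protocol overhead."""
        return megabytes / (lanes * per_lane_mbps)

    print(f"x16: {transfer_seconds(1024, 16):.3f} s")  # ~0.065 s
    print(f"x8:  {transfer_seconds(1024, 8):.3f} s")   # ~0.130 s

Halving the lane count doubles the transfer time, which is exactly the kind of penalty a 28-lane CPU imposes on a multi-GPU setup.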