Slowest wireless client dictates the connection quality of all others?

Not sure if this is the place to ask this, but I couldn't find a more appropriate StackExchange site. I heard that the quality of a wireless connection follows the law of the lowest common denominator - meaning that if 10 users connect to an AP at 50 Mbit and one at 5 Mbit, everyone gets stuck with 5.


Can anyone say, with certainty, whether this is true or not? I'm asking because we have 8-10 WRT54GLs on DD-WRT powering our company network, and wired speeds through those APs are in the 50-90 Mbit range, while wireless can't seem to go above 9 Mbit.


Solution 1:

While a slow client is transmitting data, CSMA/CA prevents any other client from transmitting, and a slow client takes significantly longer to transmit its packet of data than a fast client does.

Similarly, while the AP is talking to a slow client, all other wireless devices on that channel have to wait their turn. The slower the device, the longer the channel stays occupied, for both transmitted and received packets.

Many APs have a configurable minimum connection speed. Raising it may help speed up the fast clients, but older or marginal clients will then be unable to connect.

This shows up most clearly with a client on a bad connection, where the combination of a low data rate and, probably more importantly, retries consumes most of the channel's capacity and effectively blocks the other clients.
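
To see why, here is a rough airtime model (the client mix and rates below are illustrative assumptions, not measurements): under CSMA/CA the clients roughly take turns sending frames, so the shared resource is airtime, and a frame from a 5 Mbit client occupies the channel ten times longer than the same frame from a 50 Mbit client.

```python
# Crude airtime-sharing sketch: every associated client sends one
# 1500-byte frame per "round", which stands in for CSMA/CA turn-taking.
# MAC overhead, preambles and retries are ignored.

FRAME_BITS = 1500 * 8

def aggregate_throughput_mbps(rates_mbps):
    """Total goodput when each client gets one frame per round."""
    # The round lasts as long as the sum of every client's frame airtime.
    round_airtime_s = sum(FRAME_BITS / (r * 1e6) for r in rates_mbps)
    total_bits = FRAME_BITS * len(rates_mbps)
    return total_bits / round_airtime_s / 1e6

fast_only = [50] * 10          # ten clients linked at 50 Mbit
with_slow = [50] * 10 + [5]    # the same ten plus one 5 Mbit client

print(f"10 fast clients:         {aggregate_throughput_mbps(fast_only):.1f} Mbit total")
print(f"10 fast + 1 slow client: {aggregate_throughput_mbps(with_slow):.1f} Mbit total")
```

In this toy model the slow client does not drag everyone down to 5 Mbit, but its single frame per round costs as much airtime as the ten fast clients combined, so aggregate throughput roughly halves - and retries on a bad link make it worse.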

Solution 2:

Yes. Generally speaking, a G-only network is about three times faster than a mixed B/G network. Please see the following:

What do I need to transform my network from Ethernet to WiFi?

From the Cisco white paper Capacity Coverage & Deployment Considerations for IEEE 802.11g:

"When 802.11b clients are associated to an 802.11g access point, the access point will turn on a protection mechanism called Request to Send/Clear to Send (RTS/CTS). Originally a mechanism for addressing the "hidden node problem" (a condition where two clients can maintain a link to an access point but, due to distance cannot hear each other), RTS/CTS adds a degree of determinism to the otherwise multiple access network. When RTS/CTS is invoked, clients must first request access to the medium from the access point with an RTS message. Until the access point replies to the client with a CTS message, the client will refrain from accessing the medium and transmitting its data packets. When received by clients other than the one that sent the original RTS, the CTS command is interpreted as a "do not send" command, causing them to refrain from accessing the medium. One can see that this mechanism will preclude 802.11b clients from transmitting simultaneously with an 802.11g client, thereby avoiding collisions that decrease throughput due to retries. One can see that this additional RTS/CTS process adds a significant amount of protocol overhead that also results in a decrease in network throughput."

"In addition to RTS/CTS, the 802.11g standard adds one other significant requirement to allow for 802.11b compatibility. In the event that a collision occurs due to simultaneous transmissions (the likelihood of which is greatly reduced due to RTS/CTS), client devices "back off" the network for a random period of time before attempting to access the medium again. The client arrives at this random period of time by selecting from a number of slots, each of which has a fixed duration. For 802.11b, there are 31 slots, each of which are 20 microseconds long. For 802.11a, there are 15 slots, each of which are nine microseconds long. 802.11a generally provides shorter backoff times than does 802.11b, which provides for better performance than 802.11a, particularly as the number of clients in a cell increases. When operating in mixed mode (operating with 802.11b clients associated) the 802.11g network will adopt 802.11b backoff times. When operating without 802.11b clients associated, the 802.11g network will adopt the higher-performance 802.11a backoff times."

Solution 3:

Based purely on observation, I dispute that theory. I frequently have machines connecting to the same access point at different speeds, and none are affected when another connects at a slower speed (other than by virtue of them all sharing the same feed bandwidth).