Why are internet speeds variable and not fixed numbers?

Assume you have a 10 Mbps connection. Why is your speed not fixed at 10 Mbps when you download something from the internet, but instead varies between 2 and 9 Mbps while you're downloading? Why do internet speeds work like this? For example, hardware components like the GPU and CPU run at fixed speeds under full load, so why don't network speeds work the same way?


Solution 1:

The connection between you and your local ISP does mostly run at a fixed speed. The main problem is that you are competing with other people on the internet for access to resources.

Your Ethernet connection on a local network will run at a fixed 100 Mbps or 1 Gbps, and transfers to or from another machine on your local network will be at that speed. If the speed drops, it is most likely because one or both machines are trying to do something else at the same time, either seeking elsewhere on the disk or spending CPU time on other work. On a mostly idle machine you will get nearly full speed for bulk transfers. Transfers of many small files, however, hit latency limits: small chunks of data can be sent faster than they can be processed (seek, read, write, etc.) at either side.
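
As a rough illustration of that last point, here is a minimal sketch of how a fixed per-file overhead eats into throughput as files get smaller. All of the numbers (link speed, 10 ms of per-file bookkeeping) are assumptions for the example, not measurements.

```python
# Effective throughput when each file transfer pays a fixed per-file
# overhead (handshake, metadata, disk seek) on top of the raw link speed.
# The constants below are made-up illustration values.

LINK_MBPS = 1000            # nominal 1 Gbps link
PER_FILE_OVERHEAD_S = 0.01  # assumed 10 ms of per-file bookkeeping

def effective_mbps(file_size_mb: float) -> float:
    wire_time = (file_size_mb * 8) / LINK_MBPS      # seconds spent on the wire
    return (file_size_mb * 8) / (wire_time + PER_FILE_OVERHEAD_S)

for size_mb in (1000, 100, 10, 1, 0.1):
    print(f"{size_mb:>7} MB file -> ~{effective_mbps(size_mb):7.1f} Mbps effective")
```

A 1000 MB file still sees nearly the full gigabit, while a 0.1 MB file drops to well under 100 Mbps, even though the link itself never changed.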

The internet has similar problems, but there you are also competing with other users. They all make demands on servers, and they all use the same pipes for different amounts of time to transfer different amounts of data.

Your ISP may well have a different path to a server than your neighbour's ISP, and that path across the internet may be faster or more efficient; your path to somewhere else may be better than theirs. The paths may be constantly changing. The internet is a live, changing network that detects bottlenecks, works around outages and drop-outs, and seeks out the current best route; it only promises that data will get to its endpoint, not how it will get there.

Speed varies as demand, routes and environment vary. You have no control over the data once it has left your router or modem.

Wi-Fi is also subject to a lot of environmental noise from baby monitors, headsets, other Wi-Fi networks and so on; its claimed speed might be high, but the actual speed can be unpredictable from one moment to the next.

Solution 2:

Your local connection has a max speed of 10 Mb/s.

But what about the source of the data on the other end of the connection? That can be faster or SLOWER than your connection.
If it is slower, there is no way it can deliver the data to you at 10 Mb/s.

(Bear in mind that world-wide many types of Internet connection are asymmetrical. Download is a lot faster than upload for most users.)

In addition to that, there are also bottlenecks in the internet at large, outside your (or your ISP's) control.

So your ISP can offer you a max speed, but your actual speed depends on a lot more factors than that and will in general be lower than your theoretical maximum.

I happen to have a relatively fast DOCSIS cable connection with a 500 Mb/s downlink and a 40 Mb/s uplink. You can tell from that already that although I can in theory download at 500 Mb/s, I can only upload at 40 Mb/s. So if I send something to my neighbor (also on such a connection with the same ISP, so minimal interference from anything else) he will only receive it at 40 Mb/s, because I can't deliver it any faster than that.
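
Put another way, the end-to-end rate is capped by the slowest link in the chain. A minimal sketch using the DOCSIS numbers above:

```python
# End-to-end throughput is bounded by the slowest hop along the path.
# Rates taken from the 500/40 Mb/s DOCSIS example above.

sender_upload_mbps = 40        # my uplink
receiver_download_mbps = 500   # my neighbor's downlink

effective_mbps = min(sender_upload_mbps, receiver_download_mbps)
print(f"Neighbor receives at ~{effective_mbps} Mb/s")   # -> 40
```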

In fact, even though I can in theory download at 500 Mb/s, I rarely manage to get more than 300 Mb/s, and I have to work hard for that, using multiple computers in parallel all downloading various things at the same time. That is mainly because the various services on the internet that provide the downloads have their own upload speed limits.
Some of these limits are hardware/ISP dependent on their end. Others are software controlled on their end, because they don't want a single customer with a fast download connection hogging all the available upload bandwidth and leaving nothing for other customers. So they cap the upload available to each customer to a reasonable maximum.
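
Such per-customer caps are often implemented with something like a token bucket. Below is a minimal sketch under that assumption; the rate and bucket size are made-up values, and a real service would more likely rely on its web server or kernel traffic shaping than on application code like this.

```python
import time

class TokenBucket:
    """Minimal token-bucket limiter: allows a short burst up to `capacity`
    bytes, then sustains roughly `rate_bytes_per_s` on average."""

    def __init__(self, rate_bytes_per_s: float, capacity: float):
        self.rate = rate_bytes_per_s
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def wait_for(self, nbytes: int) -> None:
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= nbytes:
                self.tokens -= nbytes
                return
            time.sleep((nbytes - self.tokens) / self.rate)

# Cap one client at ~5 MB/s (an assumed figure), no matter how fast the link is.
bucket = TokenBucket(rate_bytes_per_s=5_000_000, capacity=256_000)
for _ in range(10):
    bucket.wait_for(64 * 1024)   # pay for a 64 KiB chunk...
    # ...then a real server would call socket.sendall(chunk) here
```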

Solution 3:

It depends on several factors. In some cases you will see fairly static speeds, e.g. I had a very stable (if miserable) 8 Mbps download over ADSL at home, and when I copy files at work I typically see nearly flat graphs at ~980 Mbps.

  1. Some connection types have a fixed rate, e.g. an Ethernet connection negotiates 1 Gbps once and sticks with it. However, other connection types – such as those running over radio, or power lines, or other "less than reliable" media – automatically adjust their link rate depending on the environment, e.g. the signal strength, packet loss, and/or the number of corrupted packets.

    So if you're using Wi-Fi, the link rate can rapidly drop as people walk around and absorb your signal; and even in stable conditions it still won't stay static, as your device occasionally probes higher rates, decides they're not good, and drops back down. (See the "Minstrel" algorithm for a widely used example; a deliberately simplified rate-adaptation loop is sketched after this list.) The same applies to LTE and other "wireless ISP" connections.

  2. Many connections are oversubscribed. For example, in an office, even if you have a 1 Gbps Ethernet port personally, it might go into a switch which then shares just a 1 Gbps uplink for the entire office. So if your neighbour also starts a large download, this will cause your download rate to suddenly halve as the two of you have to share the single gigabit link. Similarly, in FTTH, it could be that you have 50 neighbours all downloading games or watching 4K Netflix over an oversubscribed uplink – each of them getting a proportion of the total available speed. As their usage changes (e.g. video stream stops), the proportion available to everyone else also changes.

    The same can occur at any point – it could be that the server is trying to squeeze 200 downloads through its uplink, or that the connection between two ISPs is getting congested at this time of day. So if hundreds of customers are downloading the same thing over the same 10 Gbps connection, they will all see varying speeds as connections come and go and the proportion of the link that each user gets keeps changing. (The fair-share arithmetic is sketched after this list.)

  3. Downloads over TCP use a congestion control algorithm to make sure the sender doesn't just flood the network with data, but sends it at a rate which the network and the receiver can handle. Most of the commonly used algorithms reduce the transmission rate upon seeing packet loss, then slowly ramp it up again (a toy version of this back-off loop is sketched after this list). Some servers could be using an outdated or mistuned algorithm which overreacts and reduces the transmission speed much more than it needs to.

    (Sometimes the opposite happens and the congestion control algorithm doesn't react correctly, e.g. "BBR[v1] does not back off if packet loss is detected. But in this case the packet loss is caused by congestion. Since BBR[v1] has no means to distinguish congestion related from non-congestion related loss, point (B) is actually crossed, which can lead to massive amounts of packet loss")
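
For point 1, here is a deliberately simplified rate-adaptation loop. It is not the real Minstrel algorithm; the rate table, loss model and thresholds are all made up purely to show the shape of the behaviour: fall back when frames fail, occasionally probe upward.

```python
import random

# Simplified Wi-Fi-style rate adaptation (NOT Minstrel): drop to a lower
# link rate when too many frames are lost, occasionally probe a higher one.
# All rates, thresholds and the loss model are illustration values.

RATES_MBPS = [6, 12, 24, 54, 150, 300]
current = 3   # start at 54 Mbps

def frame_loss_at(rate_index: int) -> float:
    """Pretend higher rates are more fragile; purely illustrative."""
    return min(0.9, 0.05 * rate_index + random.uniform(0.0, 0.2))

for step in range(20):
    probing = step % 5 == 0 and current < len(RATES_MBPS) - 1
    idx = current + 1 if probing else current
    loss = frame_loss_at(idx)
    if loss > 0.3 and not probing:
        current = max(0, current - 1)   # fall back to a safer rate
    elif loss <= 0.3 and probing:
        current = idx                   # the higher rate held up, keep it
    print(f"step {step:2d}: link rate {RATES_MBPS[current]:3d} Mbps (loss {loss:.0%})")
```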
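
For point 2, the fair-share arithmetic on an oversubscribed uplink is just division, as in this minimal sketch (link capacity and user counts are made-up values):

```python
# N active users sharing one oversubscribed uplink. Assumes a roughly
# equal split, which real schedulers only approximate.

UPLINK_MBPS = 1000   # shared 1 Gbps uplink for the whole office or street
PORT_MBPS = 1000     # your own port speed

for active_users in (1, 2, 5, 10, 50):
    your_rate = min(PORT_MBPS, UPLINK_MBPS / active_users)
    print(f"{active_users:>3} active users -> ~{your_rate:6.1f} Mbps each")
```

As users start and stop downloads, the number of active users effectively changes from second to second, which is exactly why the graph wobbles.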
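
And for point 3, a toy model of the "additive increase, multiplicative decrease" behaviour that most loss-based TCP congestion control algorithms share. This is a deliberately simplified illustration, not any real kernel implementation, and the loss probability is an arbitrary value.

```python
import random

# Toy AIMD loop: grow the congestion window by one packet per round trip,
# halve it whenever a loss is seen. The sending rate is roughly cwnd / RTT,
# so it sawtooths up and down the way a download speed graph does.

cwnd = 10.0               # congestion window, in packets
LOSS_PROBABILITY = 0.05   # arbitrary illustration value

for rtt in range(30):
    if random.random() < LOSS_PROBABILITY:
        cwnd = max(1.0, cwnd / 2)   # multiplicative decrease on loss
    else:
        cwnd += 1.0                 # additive increase
    print(f"RTT {rtt:2d}: cwnd ~ {cwnd:5.1f} packets")
```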

Solution 4:

As the late Senator Ted Stevens said, the Internet is a series of tubes.

"Ten movies streaming across that, that Internet, and what happens to your own personal Internet? I just the other day got... an Internet was sent by my staff at 10 o'clock in the morning on Friday. I got it yesterday [Tuesday]. Why? Because it got tangled up with all these things going on the Internet commercially. [...] They want to deliver vast amounts of information over the Internet. And again, the Internet is not something that you just dump something on. It's not a big truck. It's a series of tubes. And if you don't understand, those tubes can be filled and if they are filled, when you put your message in, it gets in line and it's going to be delayed by anyone that puts into that tube enormous amounts of material, enormous amounts of material." https://en.wikipedia.org/wiki/Series_of_tubes

While this speech was mocked relentlessly at the time, it's not a bad metaphor.
If you consider the old myth about municipal sewer systems being overwhelmed during Super Bowl halftime, it even helps.

You are competing for a scarce resource, and if everyone else is competing at the same time, latencies build up. If more data is being transferred than the tube can handle, it has to back up.
In a very simple example, we have a router and a cable. If you are the only person accessing that network, the router will route your packets as fast as possible -- utilizing the entire bandwidth of the cable. But when your roommate logs on to do his "research", you're now sharing that cable, and the router will only give you the cable for 50% of the time, allocating the rest to your roommate. Your apparent speed is cut in half.
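
A minimal sketch of that "backing up": when data arrives at a link faster than the link can drain it, the queue (and therefore the extra latency) grows steadily. The rates below are made-up values.

```python
# Toy queue: offered load exceeds link capacity, so the backlog grows
# and every packet waits longer. All rates are illustration values.

LINK_PKTS_PER_S = 100       # what the tube can carry
ARRIVAL_PKTS_PER_S = 120    # what everyone together is pushing into it

backlog = 0
for second in range(1, 6):
    backlog += ARRIVAL_PKTS_PER_S - LINK_PKTS_PER_S   # 20 extra packets pile up each second
    queueing_delay = backlog / LINK_PKTS_PER_S        # time needed to drain the backlog
    print(f"t={second}s  backlog={backlog:3d} pkts  extra delay ~ {queueing_delay:.2f}s")
```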