Can Internet speed decrease the further away a certain (server in a) country is from you?

Data taking longer to arrive the further away it comes from is a real phenomenon, but not to the extent that you are seeing.

Assuming a direct line of sight to a target 600 kilometres away, light would take approximately 2 milliseconds to reach its destination. Similarly, if the distance were larger, say from Moscow to Tokyo at approximately 7500 kilometres, it would take 25 milliseconds to reach its destination. That's 12.5 times longer. According to Physics.SE (How fast does light travel through a fibre optic cable?) and ExtremeTech, the speed of light in a fibre optic cable is approximately 30% slower than in a vacuum.
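As a back-of-the-envelope check of those numbers (a minimal sketch; it uses only the straight-line distance and the rough 30% fibre penalty quoted above, so real paths will be slower):

    # One-way propagation delay over a straight line, using the rough figures
    # quoted above (speed of light ~300,000 km/s, fibre ~30% slower).
    C_VACUUM_KM_S = 300_000
    FIBRE_FACTOR = 0.70

    def one_way_delay_ms(distance_km, speed_km_s):
        return distance_km / speed_km_s * 1000

    for name, km in [("600 km", 600), ("Moscow-Tokyo, ~7500 km", 7500)]:
        vac = one_way_delay_ms(km, C_VACUUM_KM_S)
        fib = one_way_delay_ms(km, C_VACUUM_KM_S * FIBRE_FACTOR)
        print(f"{name}: {vac:.1f} ms in vacuum, {fib:.1f} ms in fibre")
    # 600 km: 2.0 ms in vacuum, 2.9 ms in fibre
    # Moscow-Tokyo, ~7500 km: 25.0 ms in vacuum, 35.7 ms in fibre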

That doesn't translate to a direct reduction in bandwidth though, as packets can be requested ahead of time, queued up and sent out back to back.

The problem is that you cannot get a direct line of sight to any place on the earth, and even fibre optic cables have a maximum length over which they can usefully carry a signal. You need repeaters, routers, firewalls, packet monitors and medium converters (microwave, fibre and copper) to cover large distances. These things all create choke points and limit bandwidth between places.

It is entirely possible that your country and your destination country have a limited-bandwidth link between them. Many countries have multiple links to their neighbours, so a link through one neighbour could conceivably be faster than a link through another. Depending on the routing setup, it is entirely possible to see the behaviour you mention.

You can have multiple links out to multiple countries, and in theory traffic will be routed by the "best" path. Depending on choices made by every router along the way, the "best" path may not be the highest-bandwidth link for you personally; it could just happen to be the fewest hops, or the lowest-latency connection. You have no power to choose your route, which limits what you can do to improve matters. There could be higher-latency links that have better bandwidth, but you have no means by which to advertise your preference for that link.
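A toy illustration of that mismatch (the neighbours, hop counts and bandwidths are entirely invented; real routing decisions involve many more attributes):

    # Routers typically pick the "best" path by metrics like hop count, not by
    # the bandwidth you personally would prefer. Paths below are invented.
    paths = [
        {"via": "neighbour A", "hops": 4, "bandwidth_mbit": 100},
        {"via": "neighbour B", "hops": 6, "bandwidth_mbit": 1000},
    ]
    chosen = min(paths, key=lambda p: p["hops"])                # fewest hops wins
    best_for_you = max(paths, key=lambda p: p["bandwidth_mbit"])
    print(f"routing picks {chosen['via']} ({chosen['bandwidth_mbit']} Mbit/s), "
          f"although {best_for_you['via']} would give you {best_for_you['bandwidth_mbit']} Mbit/s")
    # routing picks neighbour A (100 Mbit/s), although neighbour B would give you 1000 Mbit/s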

Test connections to other countries; if they are all similarly limited then you may have cause to worry, but even that is not a guarantee.

The Great Firewall of China can be inferred from more than simply its bandwidth limiting; it has a number of active filtering effects on the traffic that passes through it. Sites are blocked and content is filtered.

One way to check would be to test links to all the countries you can, find the best neighbour and then get a VPN service hosted in that country. If your link is fast through that VPN then there may be filtering in effect in your home country, or it could still just be poor network routing.


Obligatory internet history: The case of the 500-mile email

The amount of data "in flight" at any one time is limited by the TCP window established between the two systems. In some cases window effects can cause slowdowns: https://www.snellman.net/blog/archive/2017-08-19-slow-ps4-downloads/
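As a rough illustration of that limit (the window size and RTT values below are made-up round numbers, not figures from the linked article): with a fixed window, throughput can never exceed window / round-trip time.

    # Maximum TCP throughput for a fixed receive window is window / RTT.
    # Window size and RTTs are illustrative round numbers.
    def max_throughput_mbit(window_bytes, rtt_ms):
        return window_bytes * 8 / (rtt_ms / 1000) / 1_000_000

    window = 64 * 1024  # a 64 KiB receive window
    for rtt_ms in (10, 50, 150, 300):
        print(f"RTT {rtt_ms:>3} ms -> at most {max_throughput_mbit(window, rtt_ms):6.2f} Mbit/s")
    # RTT  10 ms -> at most  52.43 Mbit/s
    # RTT  50 ms -> at most  10.49 Mbit/s
    # RTT 150 ms -> at most   3.50 Mbit/s
    # RTT 300 ms -> at most   1.75 Mbit/s

That is why a small window that is harmless on a LAN can become the limiting factor on a long-distance path.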

Plus there's the special considerations for really long distances (TCP in space): http://www.ipnsig.org/reports/TCP_IP.pdf

I would say there are three effects involved.

1) The amount of data "in flight" between the two systems is limited by the TCP window and the round trip time for an ACK. Increased RTT for same window = slower maximum speed.

2) Every router along the way adds some delay. This is more related to how many networks you have to traverse rather than the geographical distance.

3) Finally, national-level firewalls will add another layer of slowdown. Quite a lot of countries have something in place here even if it's only filtering child porn and The Pirate Bay. Russia appears to have one: https://www.theguardian.com/world/2016/nov/29/putin-china-internet-great-firewall-russia-cybersecurity-pact


Well, "the great Russian firewall" can be in place too, degrading the speed. Then it would depend on how much information it collects (just established connection information, full connection content for analysis, etc.). But I live outside Russia and FSB isn't advertising used technologies, so take it just as speculation...

But the more probable reason is your provider. Your provider may have excellent bandwidth for domestic connections, but its international connectivity is definitely more limited. So if they buy a 1 Gbit/s international link, your share also depends on the total aggregation and the time of day (late at night there will be fewer people on the net, so you can get more of the total bandwidth of your provider's international link than at 7 PM, when everybody is at home and the children are on YouTube).
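A hedged sketch of that aggregation effect, assuming (unrealistically) a perfectly fair split of the international link among users who are active at the same time; the user counts are invented purely for illustration:

    # Share of a 1 Gbit/s international link, assuming a perfectly fair split
    # among simultaneously active users (user counts are invented).
    uplink_mbit = 1000
    for active_users in (50, 200, 1000):
        print(f"{active_users:>4} active users -> ~{uplink_mbit / active_users:.0f} Mbit/s each")
    #   50 active users -> ~20 Mbit/s each
    #  200 active users -> ~5 Mbit/s each
    # 1000 active users -> ~1 Mbit/s each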

Also, the speed to the USA or Japan will likely be slower than, for example, to Finland or Germany, because more people must share the same cables with limited total bandwidth.


Yes (and no, it is not internet speed, and it is not speed per se).

Speed

Speed is a very imprecise word which intermingles two different things that are largely independent but interact with each other: latency and bandwidth.
Also, the speed that you observe is not internet speed. It is a very complex mixture of many things that happen on your end (your computer), on the other end (the server) and at several points in between. It may also be a totally different story with the next server that you access, even if that one is just as far away (or farther).

Bandwidth

Bandwidth is the amount of data you can -- in theory -- push onto the wire per unit of time. There are usually hard and soft limits on that. The hard limit would be what the line is able to take, and then there's what you pay for and what the provider will allow you (usually less!). Often, transfers are not uniform: they start faster and then throttle down very soon.
For example, I have a 96 Mbit/s uplink with a physical line capacity of 112 Mbit/s. That is because, for enhanced stability, less of the bandwidth is used than would actually be possible. However, I only pay for 50 Mbit/s (which is more than enough for my needs, and 10€ per month cheaper), despite actually getting 96 Mbit/s. Wait... how does that work? Why would anyone pay more money then? Well, I transmit everything at 96 Mbit/s, but the provider will, after a very short time (less than 0.1 seconds), covertly block me, and only allow more data to be sent/received once enough time has passed that I'm within the quota that I paid for. Thus, on average, I have my 50 Mbit/s. Very similar things happen at several locations within the internet where your traffic passes through, too (without you ever knowing). Traffic is being "shaped" according to importance, sometimes with unknown metrics, and (while controversial and disputed, see "net neutrality") according to who owns the cable and what people pay.
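A minimal sketch of that kind of shaping, assuming a simple token-bucket model: the 96/50 Mbit/s figures mirror the example above, but the bucket size and the exact policing mechanism are my assumptions; real provider equipment is more elaborate.

    # Token-bucket style shaping: short bursts go out at line rate until the
    # credit is used up, after that the long-run average stays at the paid rate.
    line_rate  = 96.0   # Mbit/s, what the line physically carries
    paid_rate  = 50.0   # Mbit/s, what the contract allows on average
    bucket_max = 5.0    # Mbit of burst credit (invented size, for illustration)
    bucket     = bucket_max

    sent, seconds = 0.0, 10
    for _ in range(seconds * 1000):                          # simulate in 1 ms steps
        bucket = min(bucket + paid_rate / 1000, bucket_max)  # credit refills at the paid rate
        step = min(line_rate / 1000, bucket)                 # send at line rate while credit lasts
        bucket -= step
        sent += step
    print(f"average over {seconds} s: {sent / seconds:.1f} Mbit/s")
    # average over 10 s: 50.5 Mbit/s (bursts at 96, long-run average near the paid 50)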

Bandwidth on the internet is, for the most part, so huge that -- except during multi-nation-wide DDoS attacks -- it is not a limiting factor in any way. Well, in theory, and in most parts of the world, that is.

There are however bottlenecks: one is at your end, the next obvious one is at the server's end, and there is a very real chance that if you interact with a server in a different geographical location, especially a third-world country, the total bandwidth there will be significantly worse than either of the two. Some countries in south-east Asia have international uplinks not much bigger than what a handful of individual home users have in other countries (or even in the same country). I don't know if this is still the case (things change ever so fast in this world), but, for example, in Thailand, accessing a server within the same country used to be 4 times faster than accessing a server in another country, for just that reason. The same would hold in reverse if you tried to access a server inside their country from abroad.

Even though bandwidth within your location may be high, it is the slowest connection in the chain that limits how much data you can push through (just like in a water pipe). Longer distance generally means more opportunity to encounter a slow (or congested) link.
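In code, that "water pipe" argument is just a minimum over the links on the path (the link speeds below are invented for illustration):

    # End-to-end capacity is bounded by the slowest link on the path.
    # Link speeds in Mbit/s, invented purely for illustration.
    path = {
        "your uplink":              100,
        "provider backbone":      10000,
        "international link":        40,   # the congested hop
        "destination data center": 1000,
    }
    bottleneck = min(path, key=path.get)
    print(f"usable bandwidth: at most {path[bottleneck]} Mbit/s, limited by the {bottleneck}")
    # usable bandwidth: at most 40 Mbit/s, limited by the international link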

Latency

Latency is the time it takes a signal to arrive at your location (or any particular location) from some point.

First, there is the speed of light, which is (not) constant and, being a hard physical limit, cannot be worked around. Why am I saying "(not) constant"? Well, because reality is even worse than theory. The speed of light is really an upper bound, measured in vacuum. In a copper cable, or even more so in a fiber optic cable, the measurable speed of light is easily something like 30% slower than in vacuum, plus the actual distance is longer. That's not only because the cable is not laid in a perfectly straight line, but also because the light travels along the fiber in a zig-zag, bouncing off the walls (total internal reflection). It is a tough challenge (read: impossible) to make the speed of light in the cable significantly faster. Not that you couldn't do that by using a different medium, but a medium with a higher speed of light means changing the index of refraction, so you reduce, and eventually lose, total internal reflection. That means that unless the signal goes in a perfectly straight line, it no longer arrives at the other end at all!
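As a small sketch of that, assuming a typical fiber core refractive index of about 1.47 (an assumed figure, not something stated above), the effective speed is roughly c/n:

    # Effective signal speed in fiber is roughly c / n, with n the refractive
    # index of the core (n ~ 1.47 is an assumed, typical value).
    C_KM_S = 299_792          # speed of light in vacuum, km/s
    n = 1.47
    v_fiber = C_KM_S / n      # ~204,000 km/s, i.e. roughly 32% slower

    distance_km = 3000        # "half a continent", as an example
    print(f"vacuum: {distance_km / C_KM_S * 1000:.1f} ms one way")
    print(f"fiber:  {distance_km / v_fiber * 1000:.1f} ms one way (before any routing delays)")
    # vacuum: 10.0 ms one way
    # fiber:  14.7 ms one way (before any routing delays)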

Thus, in summary, there is a more or less fixed delay which is unavoidable, and while not noticeable in local (LAN, or some few kilometers) transmissions, it becomes very noticeable as the signal goes across half a continent. In addition to this hard physical limit, there are delays introduced by intermediate routers, and possibly your local uplink (the infamous "last mile").

For example, on a typical ATM-based home internet connection, you have a delay of about 4 ms just for your datagrams being needlessly encapsulated in PPP and chunked up into 53-byte ATM cells, sent over to the DSLAM, routed within the provider's ATM network, and reassembled before entering an IP network again. The reason why this is done is historic. Once upon a time, ATM seemed like a good plan to enable low-latency, high-quality phone calls over long distances. Once upon a time, that was in the 1980s, but alas, telecom providers move slowly.
Even many installations that have "fiber" in their name use copper wire for the last dozen meters; the fiber not infrequently ends in the street (though real fiber to the basement does exist).
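A rough sketch of that chopping-up, which is the mechanism behind both the overhead and part of that delay; it uses the standard 53-byte cell (48 bytes payload, 5 bytes header) and the 8-byte AAL5 trailer, and ignores PPP overhead to keep things simple:

    import math

    # A 1500-byte IP packet chopped into 53-byte ATM cells (48 bytes payload,
    # 5 bytes header each), plus the 8-byte AAL5 trailer. PPP overhead ignored.
    ip_packet = 1500
    aal5_trailer = 8
    cells = math.ceil((ip_packet + aal5_trailer) / 48)
    wire_bytes = cells * 53
    print(f"{ip_packet} bytes of IP -> {cells} cells -> {wire_bytes} bytes on the wire "
          f"({(wire_bytes / ip_packet - 1) * 100:.0f}% overhead)")
    # 1500 bytes of IP -> 32 cells -> 1696 bytes on the wire (13% overhead)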

A typical internet router will add something in the range of 0.05 to 0.2 milliseconds to your delay, but depending on how busy it is (and maybe it's not top notch), this might very well be a full millisecond. That's not a lot, but consider that having 6-8 routers between you and the destination server is not at all unusual, and you may very well have 12-15 of them over a longer distance! You can try running tracert some.server.name to see for yourself.
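A trivial sketch of how those per-hop delays stack up, using the rough per-router figures from above:

    # Per-hop router delays stack up over the path; figures are the rough
    # ranges from above (0.05 ms for an idle router, ~1 ms for a busy one).
    for hops in (7, 14):
        best, worst = hops * 0.05, hops * 1.0
        print(f"{hops:>2} routers: {best:.2f} ms best case, up to {worst:.0f} ms if they are all busy")
    #  7 routers: 0.35 ms best case, up to 7 ms if they are all busy
    # 14 routers: 0.70 ms best case, up to 14 ms if they are all busy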

A line that has been cut and tapped by the NSA or the SVR (so basically every main line going from/to the Asian continent, or across the Red Sea, the Indian Ocean or the Atlantic Ocean) will have at least another two milliseconds or so of latency added for the espionage stuff that they're doing, possibly more. Some nations are known (or at least highly suspected) not only to observe content and block certain IP ranges, but even to do some extensive active filtering/blocking of politically/ideologically inappropriate content. This may introduce much longer delays.

Thus, even for "nearby" locations you can expect anything from 15 to 25 ms of delay, but for something in another country you should expect ~100 ms, on another continent 150-250 ms, and if you are unlucky 400-500 ms.

Now, despite all, it would seem like this doesn't make that much of a difference because this is only a one-time initial delay, which you hardly notice. Right?

Sadly, that is not entirely true. Most protocols that transmit significant amounts of data, such as TCP, use a form of acknowledgement-driven bandwidth throttling, so the amount of data that you can push onto the wire depends on the time it takes to do a full round trip (there and back again). This is not 100% accurate because TCP attempts to optimize throughput by using one of several rather complex windowing algorithms that send out a couple of datagrams before waiting for acknowledgement.
While this can mitigate the effect somewhat, the basic principle remains: what you can send (or receive) is ultimately bound by the time it takes for acknowledgements to come in. Some other protocols with more stringent realtime requirements and less important reliability requirements (think IP telephony) use a different strategy with different issues (which I will not elaborate on here).
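Put the other way around, the window a sender needs in order to keep a line busy is the bandwidth-delay product, bandwidth × RTT. The 50 Mbit/s line speed and the RTT values below are illustrative round numbers:

    # The window needed to keep a line fully busy is the bandwidth-delay
    # product: window >= bandwidth * RTT. Values are illustrative round numbers.
    def window_needed_kib(bandwidth_mbit, rtt_ms):
        return bandwidth_mbit * 1_000_000 / 8 * (rtt_ms / 1000) / 1024

    for rtt_ms in (1, 20, 150):
        print(f"50 Mbit/s at {rtt_ms:>3} ms RTT needs a ~{window_needed_kib(50, rtt_ms):.0f} KiB window")
    # 50 Mbit/s at   1 ms RTT needs a ~6 KiB window
    # 50 Mbit/s at  20 ms RTT needs a ~122 KiB window
    # 50 Mbit/s at 150 ms RTT needs a ~916 KiB window

An implementation that grows its window only reluctantly will therefore fall behind once the RTT moves from LAN territory into tens of milliseconds, which is part of what the comparison below illustrates.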

You can see what a big impact latency has if you compare a poor TCP implementation (Microsoft Windows) with a better one (Linux). While they both speak the same protocol and seemingly do the exact same thing, they do not cope with latency compensation equally well.
I own a desktop computer (6700K processor, 64GB RAM, Windows) and a Synology DiskStation (low-power ARMv8 chip, 1GB RAM, Linux). The desktop computer, connected to the same router, while being many times more powerful, cannot fully saturate the 50 Mbit/s line when downloading from national or within-EU servers (15-20 ms RTT), even with several concurrent downloads in flight. The meek DiskStation has no trouble completely saturating the line on a single download, getting 15-20% more throughput -- same cable, same everything.
On my local area network (where latency is well below a millisecond) there is no noticeable difference between the two. That's the effect of latency.

Speed... again

In summary, yes, you can expect "speed" to go down as distance increases, mostly because latency increases, and to some extent because you may have lower-bandwidth connections in between. For the most part, however, the effect should be tolerable.