What protocol is used for downloading files?

Say I download an executable like Pycharm from Jetbrains.com. HTTP was used to deliver contents of the website - is this also used when I download the file? I read that FTP was used but also saw it's been disabled for modern browsers - what is the recommended protocol?

Look at the URL shown in your downloads list – if it says http:// or https://, then yes, HTTP was used to download the file.

Nearly all file downloads from websites (and even most downloads not from websites, such as game updates) are nowadays done via HTTP.

There aren't many alternatives. Anonymous FTP used to be more common in the past, but several aspects of its design are problematic nowadays (FTP actually predates TCP/IP itself), such as its use of separate "data" connections, which causes firewall-related problems. Anonymous NFS (WebNFS) never became a thing, either.

Also, if there is a network disruption, sometimes I can resume the download without losing progress. Is this because a "session" was created and I can rejoin the session and continue the download?

No; the resumption mechanism is stateless, as is almost everything else about HTTP.

When you're requesting a static file (as opposed to a dynamically generated webpage), the browser can ask for a specific byte range instead of the whole file. For example, if your download stopped after 12300 bytes, you can resume at any time by including a Range: bytes=12300- header (byte offsets are zero-based, so byte 12300 is the first one you don't have yet).
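
For illustration, here is a minimal sketch of such a range request using Python's standard http.client. The host and path are placeholders; it assumes the server supports range requests, in which case it should answer with 206 Partial Content and a Content-Range header:

```python
# Minimal sketch of an HTTP range request (host and path are placeholders).
import http.client

conn = http.client.HTTPSConnection("example.com")
# Ask only for the bytes from offset 12300 onward (offsets are zero-based).
conn.request("GET", "/files/download.bin", headers={"Range": "bytes=12300-"})
resp = conn.getresponse()

# A server that honours the range replies 206 Partial Content and tells us
# which byte range it is actually sending back.
print(resp.status, resp.reason)            # e.g. "206 Partial Content"
print(resp.getheader("Content-Range"))     # e.g. "bytes 12300-999999/1000000"
conn.close()
```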

So as long as the file still exists, all you need to do is keep re-requesting the same URL with an appropriate Range header added. (Browsers additionally send a validator such as If-Range, carrying the file's ETag or modification date, to make sure the file hasn't changed in the meantime.)
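
As a sketch of what a resuming client does under the hood (assuming the third-party requests package is installed; the URL is a placeholder and the server must support range requests):

```python
# Sketch: resume a partial download using Range plus a validator (If-Range).
# Assumes the "requests" package is installed; the URL is a placeholder.
import os
import requests

url = "https://example.com/files/pycharm.tar.gz"    # placeholder URL
dest = "pycharm.tar.gz"

etag = None          # in a real resumer you would persist the ETag from the first attempt
start = os.path.getsize(dest) if os.path.exists(dest) else 0

headers = {}
if start > 0:
    headers["Range"] = f"bytes={start}-"             # ask only for what's missing
    if etag:
        headers["If-Range"] = etag                   # "only the range if the file is unchanged"

with requests.get(url, headers=headers, stream=True, timeout=30) as r:
    # 206 = server resumed; 200 = it sent the whole file again (e.g. the file changed).
    mode = "ab" if r.status_code == 206 else "wb"
    etag = r.headers.get("ETag")
    with open(dest, mode) as f:
        for chunk in r.iter_content(chunk_size=64 * 1024):
            f.write(chunk)
```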

There are websites which offer downloads limited to a specific session (either as a cookie, or a special token embedded in the URL). Those downloads are still resumed using the same range requests as before – while the web server may decide that your URL has expired and prevent continuing the download at all, it has nothing to do with the actual resumption mechanism.

(And, sure, a website could serve the download entirely through a dynamic script. In that case it's up to the programmer whether they handle range requests or not. For example, when downloading a zipped folder from Google Drive, the .zip file is generated on the fly; even its "total size" is unknown – in this case, the file likely won't be resumable at all.)
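
If you're curious whether a given download looks resumable, you can ask the server up front. A small sketch using only the standard library (the URL is a placeholder): a HEAD request usually reveals whether the server advertises range support and whether it even knows the total size.

```python
# Sketch: check whether a download looks resumable (URL is a placeholder).
import urllib.request

req = urllib.request.Request("https://example.com/files/download.bin", method="HEAD")
with urllib.request.urlopen(req) as resp:
    # "Accept-Ranges: bytes" means the server is willing to serve partial content.
    print("Accept-Ranges:", resp.headers.get("Accept-Ranges", "(not advertised)"))
    # A missing Content-Length is typical for downloads generated on the fly.
    print("Content-Length:", resp.headers.get("Content-Length", "(unknown)"))
```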


The short answer is yes, it is HTTP/HTTPS.

However, I'd like to take a moment to demonstrate why the longer answer matters, especially to people who are interested in technology.

HTTP is nothing but a file transfer protocol. It is not special. HTTP cannot handle things other than files.

Images - they're just files. JavaScript: just text files. Webpages: again, just text files. Videos are files. Even YouTube videos are just a bunch of files: a single YouTube video is split into hundreds of smaller segments, each around 10 seconds long, so that you can rewind and fast-forward. It is not a single file, and video downloaders automatically join the segments for you when saving.

The core of how HTTP works is really simple. Indeed, it is stupidly simple, and this simplicity (that all things are just files to download) is what made HTTP successful compared to other networked multimedia/interactive protocols. Files, especially text files, are something programmers understand.
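
To see just how simple, here is a bare-bones sketch that speaks HTTP over a plain socket, with no HTTP library at all (host and path are placeholders; plain port-80 HTTP is used only to keep TLS out of the picture):

```python
# Sketch: an HTTP/1.1 request is just a few lines of text over a TCP socket.
# Host and path are placeholders; plain HTTP (port 80) keeps TLS out of the way.
import socket

request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "Connection: close\r\n"
    "\r\n"
)

with socket.create_connection(("example.com", 80)) as sock:
    sock.sendall(request.encode("ascii"))
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

# The reply is headers (metadata), a blank line, then the "file" itself.
headers, _, body = response.partition(b"\r\n\r\n")
print(headers.decode("iso-8859-1"))
print(f"--- body: {len(body)} bytes ---")
```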

The complicated bits that were added to HTTP to make the internet what it is today are carried as metadata on those "files". Just as your files on disk have metadata such as file name, creation date and ownership, files served over HTTP have metadata (HTTP headers) such as cookies, authorization information, last-modified time and so on.

Knowing this, you should realize that there is nothing magical about the web, and especially about HTTP: it just allows your browser to download files. It is how your browser interprets those files that adds the magic. Still, an HTTP agent does not need to be a browser. You can write a program to download anything available via HTTP as long as you know how to craft the correct request; indeed, most people use curl or wget for this.
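
As a closing sketch, a few lines of Python do essentially what curl -O or wget would do for a direct download link (the URL below is a placeholder):

```python
# Sketch: a tiny "downloader", roughly what curl -O / wget do for a direct URL.
# The URL is a placeholder; swap in any direct download link.
import shutil
import urllib.request

url = "https://example.com/files/pycharm.tar.gz"
filename = url.rsplit("/", 1)[-1]

with urllib.request.urlopen(url) as resp, open(filename, "wb") as out:
    # The interesting metadata travels in the response headers...
    print(resp.headers.get("Content-Type"), resp.headers.get("Content-Length"))
    # ...and the "file" is just the response body, streamed to disk.
    shutil.copyfileobj(resp, out)

print("saved", filename)
```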