Does the meaning of the term WWW mean it has to be served by HTTP servers, by definition? [closed]

Solution 1:

In the early days of the web many web sites were served via FTP.

Individual internet connections were very rare, so if you had internet access it was likely through your employer or school. You might want to set up a web site, but you couldn't get the system administrator to run an HTTP server for you. But there was probably an anonymous FTP server, already set up to allow anonymous retrieval of files via FTP. You could put your HTML files in the public FTP area, and advertise the URL for your files—it looked like ftp://host/path/—and you could have a web site that way without asking the sysadmin for anything new. Publishing web sites this way was quite common around 1992–1994.
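Those ftp:// URLs had exactly the same shape as http:// ones, which is part of why publishing this way was so painless. As a small illustration (the host and path below are made up), Python's standard URL parser treats both schemes uniformly:

```python
from urllib.parse import urlparse

# A hypothetical FTP-hosted page, shaped like the URLs described above.
url = "ftp://ftp.example.edu/pub/www/index.html"

parts = urlparse(url)
print(parts.scheme)  # ftp
print(parts.netloc)  # ftp.example.edu
print(parts.path)    # /pub/www/index.html
```

The scheme is just one more component of the URL; nothing else about the address changes when a site moves from FTP to HTTP hosting.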

Here's an example web site I found for you. This dates to the fall of 1995, and was and still is served by FTP. (It's still there because nobody has ever bothered to remove it.) In those days the University of Pennsylvania computer science department had no HTTP server, but the department did have a web site, with links to pages for courses, contact information, and personal web sites of the department members who cared to build them, all served by FTP from the anonymous FTP directory.

Without this ability to bootstrap from FTP, the web might never have been able to get started.

[ Added later: Here's a better example ]

Solution 2:

The web actually can exist without HTTP - it simply depends on what you are trying to do. If you write your own client and server, you can surely develop and implement your own protocol, and it will (hopefully) work.
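To make that concrete, here is a minimal sketch of a homegrown request/response protocol over a plain TCP socket. Everything here is invented for illustration, including the one-line "GET name" request format; it just shows that a client, a server, and an agreed-upon exchange are all you strictly need.

```python
import socket
import threading

# A toy protocol: the client sends "GET <name>\n" and the server
# replies with the named resource body. Page name and content invented.
PAGES = {"home": "<html><body>hello</body></html>"}

def serve_once(server_sock):
    conn, _ = server_sock.accept()
    with conn:
        request = conn.makefile().readline().strip()
        verb, _, name = request.partition(" ")
        body = PAGES.get(name, "") if verb == "GET" else ""
        conn.sendall(body.encode())

server = socket.create_server(("127.0.0.1", 0))  # port 0: pick a free port
port = server.getsockname()[1]
threading.Thread(target=serve_once, args=(server,), daemon=True).start()

with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"GET home\n")
    reply = client.recv(4096).decode()

print(reply)  # <html><body>hello</body></html>
```

Real HTTP is essentially this idea plus headers, status codes, and many hard-won details, but the Web's architecture does not care which such protocol carries the bytes.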

However, Google is trying to make a worthy substitute :-)

Solution 3:

To answer your question immediately: No, the World Wide Web as we know it now does not depend on HTTP. It has never depended on HTTP. All it requires is a protocol over a reliable transport which a client can use to request a resource from a server. Anything with these minimal requirements will do. It uses HTTP now because that was the best protocol available when the Web was first becoming popular. When something better than HTTP comes along, as appears to be the case with SPDY, then HTTP too will fade into history as protocols before it have.

It does, however, depend on HTML and to a lesser extent the various technologies that have grown around it, such as CSS, JavaScript, etc. Even today's HTML5 would be recognizable as HTML to someone reading it 20 years ago, and mostly parseable by browsers of that time; a well-crafted web site of today will actually work in the oldest known browsers (just as a well-crafted web site of 20 years ago will still work in today's browsers).


The remainder of this answer is drawn from my experience and can be skipped, but...

So far the existing answers have mostly cited references from today, which is unfortunate, since today's Internet contains very little information about the times prior to the "dot-com boom" of the late 1990s. Some of these references don't match my experience: I was on the Internet years before it opened to the public, and I had a good view to watch the rise of the Web.

HTTP was designed to be a protocol which was efficient at transferring web pages and other files to and from servers. It addressed various shortcomings in FTP that make it a less than entirely practical choice for serving web pages. In particular, at the time FTP was mostly used in "active" mode, as firewalls and NAT were mostly nonexistent. FTP had a "passive" mode from 1985, but it wasn't really necessary until large parts of the Internet started moving behind their iron curtains. In either mode, having to open multiple connections to transfer files was inefficient at best; HTTP could dramatically outperform FTP (or even Gopher) which was important when virtually everyone's home connection was dialup, and very slow dialup at that.

And while a few web pages were served via Gopher, this was mostly because common web clients of the era supported several protocols: HTTP, FTP and Gopher. They had to, in order to gain mindshare. At this time, "searching the Internet" was done with a program called Archie, and that only told you about files on FTP sites. You then had to use Veronica or Jughead to search Gopherspace. There was also WAIS, perhaps the first significant full text search engine, but what Wikipedia won't tell you about that is that it was vastly overengineered crap, and you couldn't find anything unless you knew what site to look for it at to begin with.

I still recall, in 1995 or so, having conversations over several weeks with an AIDS researcher about the Web, and trying to persuade him that he should try out this Mosaic thing. What finally convinced him is that Johns Hopkins had just put up a medical database that he needed on the Web, via HTTP, and a web browser was the only way to get to it. I had many conversations with various people along similar lines.

Back then, in order to gain a foothold, web user agents would commonly support FTP and Gopher, so that people could use a single program to view or download any resource via any of those protocols. It worked and the Web took off, but even downloading a text-only web page was painfully slow at 2400 bps, and many people (myself included) still had nothing better when the Net was finally opened to the public. It was often faster to telnet into your Unix shell account and run lynx there, or to telnet to the public lynx that the University of Kansas ran. The university had plenty of bandwidth, and that way only a screenful of text had to be sent at a time (it took about four seconds to refresh an 80x24 terminal at 2400 bps).

So, from a single program, whether it was Lynx, Mosaic or the reference client that CERN wrote but nobody really ever used, you could access virtually anything on the Internet at the time, as these programs generally hid or de-emphasized the specific transport being used. (That is, nobody looked at their address bar even then. And Lynx wouldn't show the current URL unless you specifically asked for it.)

Since HTTP was faster and more flexible than other protocols, and HTML was clearly a more powerful language for representing a document than anything previously available, its taking off was pretty much inevitable. Gopher never had a chance; it existed in significant form for only a few years. And FTP remains useful, since it's slightly better at transferring large files or whole directory structures at once (assuming you have tar and gzip and know the secret incantations), and until recently it was better for uploading data.

The point I'm trying to drive home here is that the Web is transport-agnostic. It had to be in order to get started, and the fact that it is means that it almost certainly will continue to be in use for decades -- or even centuries -- to come.

Solution 4:

There are plenty of protocols that can deliver files of information, but none has the efficiency of HTTP.

Indeed, there were several ways of getting information over the internet before HTTP came along. Have a look at Gopher for example.

However, HTTP was specifically designed to deliver web pages efficiently. It has played as much a part of the Internet's success as HTML, CSS and JavaScript.

Solution 5:

I think three things were needed in order to allow the world wide web to form:

  • the Internet
  • the URI
  • the ability to link URIs within documents (HTML).

A URI can specify any scheme: ftp://, http://, etc. You can see a bunch of them on Wikipedia. Combine any URI scheme with a document format that can link to other documents (of which http/html is about the easiest combination) on the Internet and you have the World Wide Web.
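The "link to other documents" ingredient works uniformly across schemes: a relative link in a page resolves against whatever URI the page itself was fetched from. A small sketch, using an invented base URI:

```python
from urllib.parse import urljoin

# Hypothetical base document; the relative links below are invented.
base = "ftp://ftp.example.edu/pub/www/index.html"

print(urljoin(base, "courses.html"))
# ftp://ftp.example.edu/pub/www/courses.html
print(urljoin(base, "/other/dir/page.html"))
# ftp://ftp.example.edu/other/dir/page.html
print(urljoin(base, "http://example.org/page.html"))
# http://example.org/page.html
```

The same resolution rules apply whether the base is an ftp:// or an http:// URI, which is exactly what lets one hypertext mechanism span multiple transports.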

As others have shown, FTP can serve web pages, and that created the first basis for the web. However, I don't think FTP has any support for CGI, which was the next step toward interactive web sites.

Today, CGI has been replaced by frameworks that are integrated with HTTP servers, but the core interaction still has CGI at its heart, using the various HTTP verbs to send and receive form data. The WWW as we know it today would not work without HTTP, but the early WWW started out with FTP as a strong component.
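At the center of that form interaction, then and now, is decoding percent-encoded name=value pairs from a query string or POST body. A minimal sketch (the field names below are invented):

```python
from urllib.parse import parse_qs

# A GET query string or POST body arrives as percent-encoded pairs,
# the same format CGI scripts have decoded since the early Web.
raw = "name=Ada+Lovelace&topic=history"

form = parse_qs(raw)
print(form["name"][0])   # Ada Lovelace
print(form["topic"][0])  # history
```

Whether a 1990s CGI script or a modern framework does this decoding, the wire format and the GET/POST verbs carrying it are pure HTTP, which is why this part of the Web really does depend on it.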