Does a distributed filesystem have to consist of multiple filesystems located on different computers?
NFS is considered a distributed filesystem because it uses a network protocol to manage data access between the server and clients. However, it is very basic compared to modern distributed filesystems like Ceph or GlusterFS. NFS simply provides distributed access to a local filesystem located on the NFS server, while Ceph and GlusterFS provide access to a distributed data store, where the data itself is spread across multiple servers.
In the past, "distributed filesystem" meant what we today call a shared filesystem. Nowadays, a distributed filesystem is understood to be a filesystem whose data is distributed among multiple servers.
NFS is a form of Network Attached Storage, where a filesystem is exposed to multiple clients. Though the underlying filesystem may itself be distributed over multiple nodes, for example when a CephFS is exported via NFS, with NFS v2, v3, and v4.0 the clients access all data through a single NFS node. Thus exporting large distributed filesystems over NFS was not effective.
With NFSv4.1/pNFS, the data on an NFS server can be distributed over multiple so-called data servers. pNFS introduces the concepts of a metadata server (MDS) and data servers (DS). A client talks to the MDS for namespace operations and to the data servers for actual IO. Bandwidth and storage capacity grow with the number of data servers.
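From the client side, using pNFS requires nothing more than mounting with NFS version 4.1 or later; the layout negotiation with the MDS happens transparently. A minimal sketch (the server name and export path are placeholders):

```shell
# Mount an NFSv4.1 export. If the server supports pNFS, the client
# fetches file layouts from the MDS and then performs IO directly
# against the data servers.
# "nfs-mds.example.com:/export" is a placeholder for your own server.
mount -t nfs -o vers=4.1 nfs-mds.example.com:/export /mnt/data
```

Without `vers=4.1` (or higher) the client may negotiate an older protocol version and fall back to routing all IO through the single NFS node.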
There are several solutions that provide NFSv4.1/pNFS. For example, dCache (I am one of the developers), which exposes hundreds of petabytes distributed over dozens of data servers, or Hammerspace, which aggregates existing NFSv3 servers into a single distributed storage system.
pNFS support has been built into the Linux kernel since version 3.9.
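One way to verify that your kernel has the pNFS client and that a given mount actually uses layouts is to inspect the layout driver module and the per-mount statistics. A sketch, assuming a mainline kernel where the "files" layout driver is built as a module (verify the module name on your distribution):

```shell
# Check that the NFSv4.1 "files" layout driver is available:
modinfo nfs_layout_nfsv41_files

# On a live mount, non-zero LAYOUTGET counters indicate that the
# client has obtained pNFS layouts and is doing direct data-server IO:
grep LAYOUTGET /proc/self/mountstats
```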