Huge File System?
Two options that come to mind are GlusterFS and Hadoop HDFS.
IBM's GPFS can do this (note: not open-source).
With GPFS you create Network Shared Disks (NSDs) out of any kind of block storage (local disks, or LUNs presented via iSCSI or FC, for example). It would be entirely possible to build a single GPFS file system (device) out of NSDs spanning the 2TB drives in each of your 100 servers.
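Just as a rough sketch (the device paths, NSD names, server names, and failure-group numbers below are made up, and the exact stanza/descriptor syntax depends on your GPFS version), defining and creating the NSDs looks something like this:

```
# nsd.stanza -- one stanza per disk; everything here is illustrative.
# Each server contributes its local 2TB drive as one NSD.
%nsd: device=/dev/sdb
      nsd=nsd_node001
      servers=node001
      usage=dataAndMetadata
      failureGroup=1
      pool=system

%nsd: device=/dev/sdb
      nsd=nsd_node002
      servers=node002
      usage=dataAndMetadata
      failureGroup=2
      pool=system

# ...and so on for the rest of the 100 nodes. Then create the NSDs:
mmcrnsd -F nsd.stanza
```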
I won't pretend to recall all the crazy marketing numbers, but GPFS is among the most popular clustered file systems for supercomputers in the "Top 500" list because it supports extremely large volumes of data and incredibly high parallel I/O. Wikipedia has some numbers.
GPFS can replicate data and metadata blocks across the file system. When you create your NSDs you assign each one to a "failure group"; GPFS then writes block replicas into different failure groups, so you never end up with both copies of a block on the same failure group. You can also tier storage using their concept of "storage pools", which lets you define behavior like: files accessed in the last week live on my Fusion-io or SSD drives, and after that their blocks migrate to cheaper storage.
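Roughly, and again going from memory (treat the flags, pool names, and policy syntax as a sketch to check against the docs for your release), two-way replication plus an age-out policy maps to something like:

```
# Create the file system with 2 copies of data (-r) and metadata (-m),
# then attach an ILM policy to handle the tiering.
mmcrfs gpfs0 -F nsd.stanza -m 2 -r 2 -A yes -T /gpfs/gpfs0

# policy.rules -- illustrative placement/migration rules (pool names are
# examples only):
#   RULE 'place_new' SET POOL 'system'
#   RULE 'age_out'   MIGRATE FROM POOL 'system' TO POOL 'cheap'
#                    WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) > 7
mmchpolicy gpfs0 policy.rules
mmapplypolicy gpfs0    # typically run from cron to actually migrate blocks
```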
All the nodes in your cluster would have access to a device (like /dev/gpfs0) which they can mount and access as if the entire file system were local to each node. You mentioned NFS; in this model, though, it is not necessary to introduce that extra protocol unless you have systems outside the 100-node cluster that consume the data and you do not want to make them GPFS clients/NSD servers (by loading the GPFS kernel module). However, you can trivially export GPFS file systems via NFS and even leverage Clustered NFS (CNFS) if necessary.
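For completeness, mounting it everywhere and (optionally) re-exporting it over NFS is roughly this (the export path and network are placeholders):

```
# Mount the GPFS file system on every node in the cluster:
mmmount gpfs0 -a

# Optional: re-export it over plain NFS for machines outside the cluster.
# /etc/exports on whichever node(s) you pick as NFS servers:
#   /gpfs/gpfs0  10.0.0.0/24(rw,sync,no_root_squash)
exportfs -ra

# For an HA export you'd set up CNFS instead (mmchconfig cnfsSharedRoot=...,
# etc.), but that's beyond a quick sketch.
```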
- Disclosure: I do not work for IBM, but I have played with GPFS a bit and I liked it.