Distributed redundant storage solution?

Is there a way to use extra space on computers as a distributed backup system? I'm thinking that using the extra space on the systems in my network would add a lot of storage to the available pool, even if I use a lot of the nodes for redundancy.

---Edit---

I am thinking something along the lines of a program that can be installed as a background process, and can be configured to announce that it has X MB of space available to whatever the server is. So, you add a new node by installing and dropping in a config, and on the server end, it adds another redundant copy of some data and/or X amount of storage to the pool.


There are several ways to access the storage on machines on your network, from a simple FTP server to iSCSI, but none of those will handle the logic of adding redundancy or dealing with nodes that are down. If you only need the storage for a simple case, such as backups written once a day and accessed rarely, you could write that logic into your own backup scripts.
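As a rough illustration of what "write the logic into your own backup scripts" could mean, here is a minimal sketch: copy the backup to several nodes' spare space and skip any node that is down. For the sake of the demo the "nodes" are local directories; in a real setup they would be NFS/SMB mounts or scp targets (the paths and node names here are made up):

```shell
# Demo backup payload (stand-in for your real backup archive).
BACKUP=$(mktemp /tmp/backup.XXXXXX.tar.gz)
echo "demo payload" > "$BACKUP"

# Two reachable "nodes" and one that is down (its mount is missing).
mkdir -p /tmp/node1 /tmp/node2
TARGETS="/tmp/node1 /tmp/node2 /tmp/node3-down"

copies=0
for t in $TARGETS; do
    # A node that is down simply shows up as a failed copy; skip it.
    if cp "$BACKUP" "$t/" 2>/dev/null; then
        copies=$((copies + 1))
    fi
done
echo "stored $copies redundant copies"
# Warn if redundancy dropped below the desired level.
[ "$copies" -ge 2 ] || echo "WARNING: redundancy below 2 copies" >&2
```

This is obviously crude (no integrity checks, no catch-up when a node comes back), but for write-once-a-day backups that may be all the logic you need.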

If you need something more flexible, you would have to look at distributed file systems such as Gluster. However, these are meant for dedicated storage clusters, not for reclaiming spare space from other systems, possibly desktop PCs. It might be possible to set up, but the time to set up and maintain such a system would not stand up against a cheap NAS.
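To give a sense of the setup involved, a minimal two-node replicated Gluster volume looks roughly like this (hostnames and brick paths are placeholders, and this assumes glusterd is already installed and running on both hosts):

```shell
# On node1, after glusterd is running on both hosts:
gluster peer probe node2

# Create a volume that keeps a full replica of the data on each node.
gluster volume create backupvol replica 2 \
    node1:/data/brick node2:/data/brick
gluster volume start backupvol

# Clients then mount it like any network file system:
mount -t glusterfs node1:/backupvol /mnt/backupvol
```

Replication means the volume survives one node going down, but note that every node has to donate a dedicated brick directory, which is a different model from scavenging leftover space on desktops.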


What you need is called a distributed file system. There are many implementations (MS DFS, OpenAFS, OneFS, etc.) with very different semantics (some offer mirroring/replication, others striping for scale-out, etc.) and rationales/intentions. For a broad overview, have a look at http://en.wikipedia.org/wiki/Distributed_file_system.