What Makes Cloud Storage (Amazon AWS, Microsoft Azure, Google Apps) Different from Traditional Data Center Storage Networking (SAN and NAS)?

This answer has been edited after the question was clarified.

What are the other reasons that cause clouds to prefer DAS?

Where "DAS" means Direct Attached Storage, i.e. SATA or SAS harddisk drives.

Cloud vendors all use DAS because it offers order-of-magnitude improvements in price/performance. It is a case of scaling horizontally.

In short, SATA hard disk drives and SATA controllers are cheap commodities. They are mass-market products, and are priced very low. By building a large cluster of cheap PCs with cheap SATA drives, Google, Amazon and others obtain vast capacity at a very low price point. They then add their own software layer on top. Their software handles multi-server replication for performance and reliability, monitoring, re-balancing of replicas after hardware failure, and other things.
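
As a rough illustration of what that software layer does (a toy sketch, not any vendor's actual code), the core idea is to write each blob to several cheap nodes and re-replicate when a node dies. The node names and the in-memory stand-in for the disks below are hypothetical:

```python
import random

REPLICAS = 3  # hypothetical replication factor

# Stand-in for cheap DAS nodes: node name -> {key: blob}
nodes = {f"node{i}": {} for i in range(10)}

def put(key, blob):
    """Write the blob to REPLICAS randomly chosen nodes."""
    targets = random.sample(sorted(nodes), REPLICAS)
    for name in targets:
        nodes[name][key] = blob
    return targets

def get(key):
    """Read from any node that holds a copy."""
    for store in nodes.values():
        if key in store:
            return store[key]
    raise KeyError(key)

def handle_failure(dead_node):
    """Re-replicate everything the dead node held onto surviving nodes."""
    lost = nodes.pop(dead_node)
    for key, blob in lost.items():
        survivors = [n for n, s in nodes.items() if key not in s]
        if survivors:
            nodes[random.choice(survivors)][key] = blob

# Example: write, kill a node, and the data is still readable.
put("photo.jpg", b"...bytes...")
handle_failure("node0")
print(get("photo.jpg"))
```

Production systems layer smarter placement, checksumming and background re-balancing on top of this skeleton, but the shape of the problem is the same.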

You could take a look at MogileFS as a simpler representative of the kind of software that Google, Amazon and others use for storage. It's a different implementation, of course, but it shares many of the same design goals and solutions as the large-scale systems. If you want to learn more, GoogleFS (the Google File System) is a good jumping-off point.

As stated later in the paper, clouds should use SAN or NAS because DAS is not appropriate when a VM moves to another server

There are two reasons why SANs are not used.

1) Price. SANs are hugely expensive at large scale. While they may be the technically "best" solution, they are typically not used in very large installations because of the cost.

2) The CAP theorem. Eric Brewer's CAP theorem shows that at very large scale you cannot maintain strong consistency while also keeping availability and partition tolerance (and, in practice, acceptable performance). SANs are an attempt at implementing strong consistency in hardware. That may work nicely for a 5,000-server installation, but it has never been proven to work at the scale of Google's 250,000+ servers.
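
To make the trade-off concrete, here is a toy sketch using the usual N/W/R quorum notation (N replicas per object, W acknowledgements required per write, R replicas consulted per read). It illustrates the consistency-versus-availability tension, not how any particular SAN or cloud store is actually implemented:

```python
N = 3  # replicas kept for each object

def strongly_consistent(w, r):
    """Read and write quorums overlap, so a read always sees the latest write."""
    return w + r > N

def write_available(w, failed):
    """A write can only succeed if enough replicas are alive to acknowledge it."""
    return N - failed >= w

# Strong consistency (W=2, R=2): reads are always fresh,
# but with 2 of 3 replicas down every write blocks.
print(strongly_consistent(2, 2), write_available(2, failed=2))  # True False

# Eventual consistency (W=1, R=1): writes keep succeeding through failures,
# but a read may hit a replica that has not seen the write yet.
print(strongly_consistent(1, 1), write_available(1, failed=2))  # False True
```

Choosing W + R > N buys strongly consistent reads, but a couple of slow or dead replicas can then stall every write; dropping W to 1 keeps writes available at the price of stale reads until replication catches up.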

Result: So far, cloud computing vendors have chosen to push the complexity of maintaining server state onto the application developer. Current cloud offerings do not provide consistent state for each virtual machine. Application servers (virtual machines) may crash, and their local data may be lost, at any time.

Each vendor then has their own implementation of persistent storage, which you're supposed to use for important data. Amazon's offerings are nice examples: MySQL, SimpleDB, and the Simple Storage Service (S3). These offerings themselves reflect the CAP theorem: the MySQL instance has strong consistency but limited scalability, while SimpleDB and S3 scale fantastically but are only eventually consistent.
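
To show what "only eventually consistent" means for application code, here is a hedged sketch of the usual retry pattern: a read issued right after a write may not see the new object yet, so the client retries with a short back-off. It assumes the boto3 library, a hypothetical bucket name, and credentials already configured; the exact guarantees are whatever Amazon documents for the service at the time, so treat it as a pattern rather than a statement about current S3 behaviour.

```python
import time
import boto3

s3 = boto3.client("s3")
BUCKET = "my-app-data"  # hypothetical bucket name

def read_with_retry(key, attempts=5, delay=0.5):
    """Read an object, retrying if an eventually consistent read misses it."""
    for _ in range(attempts):
        try:
            return s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()
        except s3.exceptions.NoSuchKey:
            time.sleep(delay)  # give replication time to catch up
    raise RuntimeError(f"{key} still not visible after {attempts} reads")

# Write, then read back; on an eventually consistent store the first
# read after the write is not guaranteed to see it, hence the retry loop.
s3.put_object(Bucket=BUCKET, Key="session/42", Body=b"cart-state")
print(read_with_retry("session/42"))
```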


If you use DAS, then availability is your problem

If they use DAS, then availability is their problem. And if they're any good, they'll be using several layers of abstraction to ensure their problem doesn't become your problem. Rather than getting hung up on how they choose to mount their disks inside their datacentre, the real issue is whether or not the availability they guarantee in their SLA is adequate for your needs. Oh, and the real elephant in the room: what do you do if they go out of business (not likely for some providers, perhaps, but you should still consider it), and what do you do if you use this data locally and your interweb connection is unavailable? The latter is substantially more likely than their choice of DAS directly leading to an outage.