How to minimise storage consumption?

I have a network with Live, User Acceptance, Staging and Development servers (in this case mainly Windows Server 2012 R2, all Hyper-V guests). Each of these parts of the network has a frontend and a backend server. The backend servers contain proportionally large amounts of data. Across the User Acceptance, Staging and Development servers this data does not change (apart from the occasional refresh from Live) and it is rarely accessed outside the development cycle.

In this type of environment, how do you minimise storage consumption and avoid wasting storage space on static and rarely accessed data? The data consists of thousands of files such as PDFs, JSON, DWGs and PNGs.

Things I've considered:

Deleting servers while not in use - Not a great option, as sometimes the time to restore these servers outweighs the time developers are going to use them. Our backup solution is Microsoft Data Protection Manager.

Deleting data disks while not in use - Slightly better than the above, but again time is a factor.

Moving data disks between servers - As they are Hyper-V guests I could just attach the data disks as required; however, there are times when more than one environment is in use at the same time.
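
For reference, this is roughly what I had in mind for moving a data disk between guests, scripted from the Hyper-V host. The VM names and VHDX path are only illustrative, and it assumes the source VM is stopped (or the disk is offline in the guest) before detaching:

    # Illustrative sketch: detach a shared data VHDX from one guest
    # and attach it to another. Names and paths are placeholders.
    $vhdx     = 'D:\Hyper-V\Disks\StaticData.vhdx'
    $sourceVM = 'UAT-Backend'
    $targetVM = 'DEV-Backend'

    # Find where the disk is currently attached on the source VM and detach it.
    Get-VMHardDiskDrive -VMName $sourceVM |
        Where-Object { $_.Path -eq $vhdx } |
        Remove-VMHardDiskDrive

    # Attach the same VHDX to the target VM's SCSI controller.
    Add-VMHardDiskDrive -VMName $targetVM -ControllerType SCSI -Path $vhdx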


You might want to check out some hybrid file servers, i.e. ones that offload cold data to public cloud storage, where it is cheap (S3) or nearly free (Glacier); there is a rough sketch of that offload idea at the end of this answer. If you have an Enterprise Agreement with Azure, you might want to try StorSimple from Microsoft, which is available as both a physical and a virtual appliance.

https://azure.microsoft.com/en-us/services/storsimple/

Nasuni is also nice, but it doesn't have reliable SMB3 support so far.

https://www.nasuni.com
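
To make the cold-data offload idea concrete, here is a rough sketch using the AWS Tools for PowerShell. The bucket name, share path and 180-day threshold are placeholders, and unlike a real tiering appliance this leaves no stub behind and cannot recall files on access, so treat it purely as an illustration of the concept:

    # Illustration only: push files not touched in ~6 months to S3,
    # then remove the local copy. A tiering product would also leave a
    # stub/pointer behind and recall the file on access; this does not.
    Import-Module AWSPowerShell

    $share     = 'D:\Data'            # local data disk on the backend server
    $bucket    = 'example-cold-data'  # placeholder bucket name
    $threshold = (Get-Date).AddDays(-180)

    Get-ChildItem -Path $share -Recurse -File |
        Where-Object { $_.LastAccessTime -lt $threshold } |
        ForEach-Object {
            # Object key mirrors the folder structure under the share.
            $key = $_.FullName.Substring($share.Length).TrimStart('\') -replace '\\', '/'
            Write-S3Object -BucketName $bucket -File $_.FullName -Key $key
            Remove-Item -Path $_.FullName
        }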


There are a lot of interesting solutions on the market. I haven't tried Nasuni, but it looks like a fit. You may also take a look at Aclouda, which acts as a hardware gateway, presenting cloud storage as a local drive and offloading data to the cloud automatically. It installs in a local server as a SATA/SAS drive with connectivity to either Amazon or Azure. http://aclouda.com/