We have our project files stored on a Windows Server 2008 R2 file server. The space on its 4TB drive is filling up rapidly, and we will soon be forced either to expand the drive or to move older files off it. The question (raised by someone on our IT team) is whether some applications will have problems if we expand the share to over 4TB. Our organization uses some older applications that are claimed to have problems with large volumes, but no one can say for sure whether they actually will.

So, does crossing the 4TB threshold cause problems for older applications on shared drives? 4TB has been a problem on local drives in older computers, but will it be a problem for client applications using a shared drive?

Technical information: the server is a virtual machine on VMware ESXi 5.1. The 4TB drive is a direct iSCSI LUN (not presented through VMware) from a Dell EqualLogic array.


The only concern that I have is that 4TB is, in my experience, a pretty big NTFS volume. CHKDSK has gotten a lot better in the last few Windows releases, but you will likely still face a multi-hour outage if a volume that big suffers filesystem corruption. (Fewer, larger files make for a faster CHKDSK run than a greater number of small files.)

If such an outage is acceptable, then I think you're fine to grow the volume. Windows can definitely handle it.

You might consider relocating critical files that need higher availability to another, smaller NTFS volume, and using a mount point or DFS-N to "glue" it to the larger volume (a sketch of the mount-point approach follows).
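For illustration, here is a minimal sketch of the mount-point approach using the Win32 SetVolumeMountPoint API via Python's ctypes. The directory path and volume GUID are placeholders to replace with your own (list volume GUID paths with `mountvol`), and it needs to run elevated on the file server:

```python
import ctypes
import os

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)

# Placeholder paths -- substitute your own. Both must end with a backslash.
mount_dir = "D:\\Projects\\Critical\\"  # directory on the large volume
volume = "\\\\?\\Volume{00000000-0000-0000-0000-000000000000}\\"  # from `mountvol`

# The mount point must be an empty directory on an NTFS volume.
os.makedirs(mount_dir, exist_ok=True)

# Graft the smaller NTFS volume into the big volume's directory tree.
# Clients still see one share, but CHKDSK runs per volume.
if not kernel32.SetVolumeMountPointW(mount_dir, volume):
    raise ctypes.WinError(ctypes.get_last_error())
```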

Having seen multi-hour CHKDSK runs, I am somewhat reluctant to use NTFS volumes that large in production. At the very least, I try to reserve them for "archival" data that can tolerate some loss of availability.


Edit: I don't get too concerned about applications. Microsoft's Application Compatibility Toolkit (ACT) contains a lot of functionality to "coerce" unwilling applications into working. As an example, the EmulateGetDiskFreeSpace fix makes Windows report a fabricated free-space figure, which lets legacy applications that hit integer overflows when more than 2GB of disk space is free keep working.
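To make the overflow concrete, here's a small illustration (the numbers are invented for the example) of what happens when a legacy app takes the cluster counts from the old GetDiskFreeSpace call and multiplies them in a signed 32-bit integer:

```python
# Invented example values: a big NTFS volume with 4KB clusters and ~3TB free,
# as the legacy GetDiskFreeSpace call would report them.
sectors_per_cluster = 8
bytes_per_sector = 512
free_clusters = 806_092_800  # ~3TB free at 4KB per cluster

free_bytes = sectors_per_cluster * bytes_per_sector * free_clusters
print(f"actual free bytes:     {free_bytes:,}")  # ~3.3 trillion

def to_int32(n):
    """Wrap an integer to a signed 32-bit value, like C int arithmetic."""
    n &= 0xFFFFFFFF
    return n - 0x1_0000_0000 if n >= 0x8000_0000 else n

# A legacy app doing the same multiplication in a 32-bit int sees garbage:
print(f"after 32-bit overflow: {to_int32(free_bytes):,}")  # negative!
```

The shim sidesteps exactly this failure mode by keeping the figure the application sees below 2GB.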

I've had a lot of success getting finicky applications to work using the ACT.


There might be issues. The question is how low in the stack your application goes when it accesses the filesystem. Normally there should be no problem: if Windows can handle the volume, your applications should too, since they ought to access the filesystem through the Windows API rather than at a lower level.

Of course, better safe than sorry, so put it to the test before moving to production; a quick check like the one below is a start.
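As a sanity check after growing the volume, something like this (the UNC path is an example) verifies that the 64-bit free-space API reports sensible numbers. Any application that goes through the same API path should be fine; anything doing its own 32-bit math is what you need to test individually:

```python
import shutil

# Example path -- point it at the grown share or local volume.
# On Windows, shutil.disk_usage calls GetDiskFreeSpaceExW, the 64-bit API.
usage = shutil.disk_usage(r"\\fileserver\projects")

TB = 1000 ** 4
print(f"total: {usage.total / TB:.2f} TB")
print(f"free:  {usage.free / TB:.2f} TB")

assert usage.total > 4 * TB, "volume does not report more than 4TB yet"
```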


I'd be more concerned about the direct iSCSI mapping of storage (versus something that's on the vSphere cluster and has those protections).

You can increase the LUN size on the Dell array, so there are obviously more storage resources available to you. But at this point, would it make more sense to create a new LUN and move files to it? If that's an option and your application(s) don't require a single contiguous partition, that's what I'd do. This is more of a management suggestion than a technical limitation: the disk is already GPT, and the magic barrier was the 2.2TB MBR limit on partition size, which you are already past. If you want to confirm the partition style, see the sketch below.
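Here is a rough sketch of that check via IOCTL_DISK_GET_PARTITION_INFO_EX; the `\\.\PhysicalDrive1` device name is a placeholder (pick the disk that backs the 4TB LUN), and it needs administrator rights:

```python
import ctypes
from ctypes import wintypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.CreateFileW.restype = wintypes.HANDLE

IOCTL_DISK_GET_PARTITION_INFO_EX = 0x00070048
OPEN_EXISTING = 3
FILE_SHARE_READ_WRITE = 0x1 | 0x2
INVALID_HANDLE_VALUE = wintypes.HANDLE(-1).value
STYLES = {0: "MBR", 1: "GPT", 2: "RAW"}

# Placeholder device name -- use the disk that backs the 4TB LUN.
handle = kernel32.CreateFileW(r"\\.\PhysicalDrive1", 0, FILE_SHARE_READ_WRITE,
                              None, OPEN_EXISTING, 0, None)
if handle == INVALID_HANDLE_VALUE:
    raise ctypes.WinError(ctypes.get_last_error())

try:
    # PARTITION_INFORMATION_EX begins with a DWORD PartitionStyle field.
    buf = ctypes.create_string_buffer(1024)
    returned = wintypes.DWORD(0)
    if not kernel32.DeviceIoControl(wintypes.HANDLE(handle),
                                    IOCTL_DISK_GET_PARTITION_INFO_EX,
                                    None, 0, buf, len(buf),
                                    ctypes.byref(returned), None):
        raise ctypes.WinError(ctypes.get_last_error())
finally:
    kernel32.CloseHandle(wintypes.HANDLE(handle))

style = int.from_bytes(buf.raw[:4], "little")
print("partition style:", STYLES.get(style, "unknown"))
```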