What tools and concepts are good for someone thinking about using HFS filesystem compression?

I have a client with about 60 TB of data on multiple HFS+ volumes, attached directly via Fibre Channel and shared using AFP. We're running at around 85% capacity at present, and the budget to expand storage won't be available for months; our data growth suggests we'll hit 90% capacity within four. I'm considering some sort of in-place filesystem compression that would transparently compress files at rest in storage, without changing the workflows of the desktop users. (That is, they should be able to work as usual without having to decompress files manually.)

I understand HFS+ filesystem compression can be applied with the ditto command; I've also successfully used the free afsctool to compress files. The latter hasn't been updated in quite some time, though, and I'm unsure of the developer's ongoing commitment. I'm not a programmer, so having the source code available means little to me.

Are there any commercial tools that will silently and automatically perform filesystem compression in the manner I describe, preferably with reliable enterprise support (say, telephone support)? Or would I be better off periodically scripting compression with ditto? Is HFS+ compression even the right path?


First, you should figure out whether compression is worth it at all. That depends largely on the type of content you're storing: if the content is already compressed (JPEG images, most video formats, ZIP archives, etc.), there's little to gain, and the added overhead of decompression may even cause a (minor) slowdown in file access.
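If you're unsure, you can spot-check a few representative files before committing to anything. HFS+ compression is zlib-based, so gzip's ratio is a rough proxy for what you'd get. A minimal sketch (the `estimate_ratio` name is my own):

```shell
# Rough feasibility check: gzip a sample file and compare sizes.
# gzip's zlib compression is only a proxy for HFS+ compression,
# but it gives a ballpark figure for your data.
estimate_ratio() {
    f="$1"
    orig=$(wc -c < "$f")
    comp=$(gzip -9 -c "$f" | wc -c)
    # Print the compressed size as a percentage of the original.
    awk -v o="$orig" -v c="$comp" 'BEGIN { printf "%.0f\n", 100 * c / o }'
}

# Usage: estimate_ratio /Volumes/Data/some-representative-file
```

Anything that comes back near (or above) 100% isn't worth compressing.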

HFS+ compression is most likely the wrong tool, for several reasons. First, only decompression is transparent, not compression. That is, a file that is stored compressed will be transparently decompressed when read, but a newly created file will not be compressed by default.
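You can see this for yourself: on macOS, `ls -lO` shows a `compressed` flag for files stored with HFS+ compression, and the flag disappears once the file is rewritten. A small helper to check for it (the function names are mine, and `ls -lO` itself is macOS-specific):

```shell
# Check whether a BSD flags string contains the "compressed" flag.
# Wrapping in commas avoids matching e.g. a hypothetical "uncompressed".
flags_say_compressed() {
    case ",$1," in
        *,compressed,*) return 0 ;;
        *) return 1 ;;
    esac
}

# macOS only: ls -ldO prints the file flags in the 5th column.
is_hfs_compressed() {
    flags=$(ls -ldO "$1" | awk '{ print $5 }')
    flags_say_compressed "$flags"
}
```

Run `is_hfs_compressed` on a file before and after appending to it and you'll see the flag vanish.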

Worse, when you overwrite or append to a compressed file, the result is once again stored without HFS+ compression. So if you wanted to use HFS+ compression for user data, you would first need to compress the entire volume, file by file, using ditto or afsctool; on 60 TB that could take a very long time. You would then have to run a recurring job that finds files that were recently added or modified (or were never compressed) and (re)compresses them.
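If you went down this road anyway, that recurring job could look something like the sketch below. The paths, the stamp-file approach, and the exact behavior of `afsctool -c` on your build are all assumptions to verify first:

```shell
#!/bin/sh
# Hypothetical nightly sweep: re-compress files changed since the last run.
# COMPRESS defaults to "afsctool -c" (compress in place); override it,
# e.g. COMPRESS=echo, for a dry run. Paths are examples only.
COMPRESS="afsctool -c"
STAMP="/var/db/compress-sweep.stamp"

sweep() {
    target="$1"
    if [ -f "$STAMP" ]; then
        # Incremental pass: only files modified since the previous sweep.
        find "$target" -type f -newer "$STAMP" -exec $COMPRESS {} \;
    else
        # First pass: everything. On 60 TB, expect this to take a long time.
        find "$target" -type f -exec $COMPRESS {} \;
    fi
    touch "$STAMP"
}

# Usage (e.g. from launchd or cron):
# sweep /Volumes/FilerVolume
```

Note this still leaves a window where freshly written files sit uncompressed until the next sweep, which is part of why I'd call the whole approach fragile.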

As the ditto man page states, HFS+ compression "is only intended to be used in installation and backup scenarios that involve system files". It's great for your /Applications folder, but not very suitable for your filer. I would only consider it if you're really desperate for capacity and have a lot of files that never get written to. Key word being desperate :)

I'm not aware of any transparent file-system-level compression packages for OS X. ZFS supports transparent filesystem compression, but switching your filer's OS and filesystem may not be an option (sadly, there's no complete ZFS implementation for Mac OS X).
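For comparison, this is what genuinely transparent compression looks like on a platform that has it: on a ZFS filer it's a one-time property change, and every block written afterwards is compressed and decompressed on the fly (the `tank/share` dataset name below is made up):

```shell
# Illustrative ZFS admin commands; tank/share is a hypothetical dataset.
# After this, all new writes to the dataset are compressed transparently.
zfs set compression=on tank/share

# Later, check the ratio actually achieved on the data.
zfs get compressratio tank/share
```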


There is a new HFS+ compression tool called MoreSpace Folder Compression in the Mac App Store:

http://itunes.apple.com/app/morespace-folder-compression/id521635253?mt=12