Why do operating systems have file size limits?

What limits a file to some maximum size, and why does that limit depend on the operating system?

From this page:

[image from the linked page, showing a table of maximum file size limits per filesystem]

I do not exactly understand this. If you have the storage space, what else could be the limitation? You should be able to store as much data as you want, any way you want (even in a single file), unless you run out of storage space.


Solution 1:

Filesystems need to store file sizes (either in bytes, or in some filesystem-dependent unit such as sectors or blocks). The number of bits allocated to the size is usually fixed in stone when the filesystem is designed.

If you allow too many bits for the size, you make every file take a little more room, and every operation a little slower. On the other hand, if you allow too few bits for the size, then one day people will complain because they're trying to store a 20EB file and your crap filesystem won't let them.
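To make that concrete, here is a minimal sketch in C (a made-up record, not any real filesystem's on-disk format): once the size field is fixed at 32 bits, no file on that filesystem can exceed 2^32 - 1 bytes, however much free space the disk has.

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical on-disk file record: the width of size_bytes is baked into
 * the format, so every volume using this format inherits the same cap. */
struct toy_inode {
    uint32_t first_block;  /* where the file's data starts           */
    uint32_t size_bytes;   /* 32 bits -> at most 4,294,967,295 bytes */
};

int main(void) {
    /* Largest file this record can describe, regardless of disk size. */
    printf("max file size: %" PRIu32 " bytes (~4 GiB)\n", (uint32_t)UINT32_MAX);
    return 0;
}
```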

At the time the filesystems you mention were designed, having a disk big enough to run into the limit sounded like science-fiction. (Except FAT32, but the company that promoted it intended it as an intermediate measure before everyone adopted their shiny new NTFS, plus they were never very good at anticipating growing requirements.)

Another thing is that until the end of the last century, most consumer (and even server) hardware could only do fast computation on 32-bit values, so operating systems tended to use 32-bit values for most things, including file sizes. 32 bits means 4GB, so operating systems tended to be limited to 4GB files regardless of the filesystem, often even 2GB because they used signed integers. Any serious desktop or server OS nowadays uses 64 bits for file sizes and offsets, which puts the limit at 8EB.
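The arithmetic behind those numbers is just the range of the integer types involved; a quick sketch (plain C integer limits, not any particular OS's file API):

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Signed 32-bit offsets top out just under 2 GiB, unsigned 32-bit
     * just under 4 GiB, and signed 64-bit offsets just under 8 EiB. */
    printf("signed 32-bit:   %lld bytes\n", (long long)INT32_MAX);
    printf("unsigned 32-bit: %llu bytes\n", (unsigned long long)UINT32_MAX);
    printf("signed 64-bit:   %lld bytes\n", (long long)INT64_MAX);
    return 0;
}
```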

Solution 2:

The on-disk data structures are usually the limit. Research how these operating systems format their disks and how they track the portions of files on the disk, and you'll understand why they have these limitations. The FAT filesystem is pretty well documented on-line (see Wikipedia, for instance) and you can see that their choice of integer sizes for some disk structure fields ends up limiting the overall size of the file that you can store with this disk format.
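As a concrete case, the commonly published FAT32 directory entry stores a file's length in a single 32-bit field, which is where the well-known 4 GiB - 1 per-file limit comes from. A simplified sketch of that 32-byte entry (field names follow the usual documentation):

```c
#include <stdint.h>

/* Simplified FAT32 directory entry (32 bytes on disk).  The single
 * 32-bit file_size field is what caps every file at 4 GiB - 1 bytes. */
#pragma pack(push, 1)
struct fat_dir_entry {
    uint8_t  name[11];          /* 8.3 short name                     */
    uint8_t  attr;              /* attribute flags                    */
    uint8_t  nt_reserved;
    uint8_t  create_time_tenth;
    uint16_t create_time;
    uint16_t create_date;
    uint16_t last_access_date;
    uint16_t first_cluster_hi;  /* high 16 bits of first data cluster */
    uint16_t write_time;
    uint16_t write_date;
    uint16_t first_cluster_lo;  /* low 16 bits of first data cluster  */
    uint32_t file_size;         /* file length in bytes: max 2^32 - 1 */
};
#pragma pack(pop)

_Static_assert(sizeof(struct fat_dir_entry) == 32, "entry must be 32 bytes");
```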

Solution 3:

The limitation is simply due to the fact that, when the specifications of these file systems were written, nobody thought hard drives would ever get that much bigger... or it came from other technical limitations encountered while designing the specifications.

I think that nowadays, the limits in new file systems are typically chosen according to what the expected use will be.

... It would be hard for any technical team to release a file system and claim that it supports 500-petabyte hard drives without ever having tested it.

My first laptop was a 286 with a 40MB hard drive... I could never have imagined needing (or hitting) the limits of FAT at the time!

I think the current NTFS limitation is around 16TB per volume, 2TB per file... quite frankly, that is (and should be) good for some time - anything capable of (or needing to) write files larger than 2TB usually has the ability to split files and/or similar administrative features (e.g. SQL Server).

Solution 4:

Simple answer: you need to be able to read the file, so you need to be able to address the file. That access goes through data structures that have limits. You'll be stuck with the lowest common denominator of: physical (disk, SD card, etc.) limits, file system limits, and OS limits.
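A small illustration of that "lowest limit wins" idea, with made-up numbers standing in for the three layers:

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Hypothetical limits for each layer (illustrative values only). */
    uint64_t physical_limit = (uint64_t)2 * 1024 * 1024 * 1024 * 1024; /* 2 TiB device           */
    uint64_t fs_limit       = UINT32_MAX;                              /* 4 GiB - 1 (FAT32-like) */
    uint64_t os_limit       = INT64_MAX;                               /* 64-bit offsets         */

    /* The file size you can actually use is the smallest of the three. */
    uint64_t effective = physical_limit;
    if (fs_limit < effective) effective = fs_limit;
    if (os_limit < effective) effective = os_limit;

    printf("effective max file size: %llu bytes\n", (unsigned long long)effective);
    return 0;
}
```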