Why don't Linux distributions default to mounting tmpfs with infinite inodes?
According to this answer, it is possible to mount at least tmpfs with "infinite" inodes.
Consider this specific (numbers chosen for example purposes, I know that they're not realistic) situation:
- The tmpfs partition is 50% used by volume
- 90% of that data is inodes (i.e. 45% of the disk is used by inodes, and 5% is used by "real" data)
- tmpfs was mounted with `nr_inodes=1000`
- all 1000 of those inodes are taken up by the files currently stored
This means that the tmpfs is 50% full, but also that any attempt to write to it will fail with an out-of-space error.
It seems to me that setting `nr_inodes=0` (aka infinite inodes) would make this situation go away.
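For concreteness, here is a minimal sketch of how such a situation can be reproduced (the mount point, size, and inode count below are made up for illustration, and the commands need root):

```
# Mount a tiny tmpfs with an artificially low inode limit.
mkdir -p /mnt/inodetest
mount -t tmpfs -o size=10M,nr_inodes=16 tmpfs /mnt/inodetest

# Exhaust the inodes with empty files; the later ones fail with
# "No space left on device" even though almost no space is used.
for i in $(seq 1 20); do touch /mnt/inodetest/f$i; done

df -h /mnt/inodetest   # Use% is near zero
df -i /mnt/inodetest   # IUse% is 100%
umount /mnt/inodetest
```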
- Is there a reason that infinite inodes is not the default?
- What reasons are there to limit the number of inodes on a filesystem?
Solution 1:
Usually (e.g. ext2, ext3, ext4, ufs), the number of inodes a file system can hold is set at creation time, so no mount option can work around it.
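For example, with ext4 the inode count is chosen by mkfs and cannot be raised afterwards (the device name and mount point below are placeholders):

```
# Fix the inode count at creation time; it cannot be changed later.
mkfs.ext4 -N 1000000 /dev/sdXN   # request roughly one million inodes
# Alternatively, set the bytes-per-inode ratio instead of a count:
mkfs.ext4 -i 16384 /dev/sdXN     # one inode per 16 KiB of capacity
df -i /mnt/point                 # inspect inode usage once mounted
```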
Some filesystems, like xfs, make the ratio of space used by inodes a tunable, so it can be increased at any time.
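With XFS, for instance, the ceiling on inode space is the maxpct setting, which can be raised even while the filesystem is mounted (the mount point is a placeholder):

```
# Allow inodes to occupy up to 25% of the filesystem's space.
xfs_growfs -m 25 /mnt/point
```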
Modern file systems like ZFS or btrfs have no hardcoded limit on the number of files they can store; inodes (or their equivalent) are created on demand.
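This is visible with df -i on a btrfs mount, which reports zero preallocated inodes (the path is a placeholder, output abridged):

```
df -i /mnt/btrfs
# Filesystem     Inodes  IUsed  IFree IUse% Mounted on
# /dev/sdXN           0      0      0     - /mnt/btrfs
```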
Edit: narrowing the answer to the updated question.
With `tmpfs`, the default number of inodes is computed to be large enough for most realistic use cases. The only situation where this default wouldn't be optimal is when a large number of empty files are created on `tmpfs`, since empty files consume inodes while using almost no space. If you are in that case, the best practice is to adjust the `nr_inodes` parameter to a value large enough for all the files to fit, but not to use `0` (= unlimited). The `tmpfs` documentation states this shouldn't be the default setting because of the risk of memory exhaustion by non-root users:
> if nr_inodes=0, inodes will not be limited. It is generally unwise to
> mount with such options, since it allows any user with write access to
> use up all the memory on the machine; but enhances the scalability of
> that instance in a system with many cpus making intensive use of it.
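One way to follow that advice is to mount with a generous but finite inode budget; the values and mount point below are only examples:

```
# Explicit, finite inode budget (the same options can go in /etc/fstab).
mount -t tmpfs -o size=2G,nr_inodes=1000000 tmpfs /tmp
```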
However, it is unclear how this memory exhaustion could happen, given that tmpfs RAM usage is limited by default to 50% of the RAM:
> size: The limit of allocated bytes for this tmpfs instance. The
> default is half of your physical RAM without swap. If you
> oversize your tmpfs instances the machine will deadlock
> since the OOM handler will not be able to free that memory.
Many people will be more concerned about adjusting the default amount of memory to an amount that matches their application's demands.
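For example, the cap can be adjusted on the fly, since tmpfs accepts size (and nr_inodes) changes on remount; the mount point and value below are placeholders:

```
# Raise the memory cap for an already-mounted tmpfs without unmounting.
mount -o remount,size=2G /tmp
```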