ext4 file-system max inode limit - can anyone please explain?

There is no default as such for ext4; it depends on the size of the device and the options chosen at creation. You can check the existing limits using

tune2fs -l /path/to/device

For example,

root@xwing:~# tune2fs -l /dev/sda1
tune2fs 1.42 (29-Nov-2011)
Filesystem volume name:   <none>
Last mounted on:          /

[lots of stuff snipped]

Inode count:              1277952
Free inodes:              1069532
Inodes per group:         8192
Inode blocks per group:   512

[lots of stuff snipped]
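If you just want the usage numbers rather than the full superblock dump, `df -i` reports total, used, and free inodes per mounted filesystem, and doesn't require root:

```shell
# Show inode usage (total / used / free / use%) for the root filesystem.
# Compare the IFree column against "Free inodes" from tune2fs -l.
df -i /
```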

As per man mkfs.ext4:

-i bytes-per-inode

Specify the bytes/inode ratio. mke2fs creates an inode for every bytes-per-inode bytes of space on the disk. The larger the bytes-per-inode ratio, the fewer inodes will be created. This value generally shouldn't be smaller than the blocksize of the filesystem, since in that case more inodes would be made than can ever be used. Be warned that it is not possible to expand the number of inodes on a filesystem after it is created, so be careful deciding the correct value for this parameter.
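The arithmetic behind that ratio is straightforward: the inode count is roughly the device size divided by bytes-per-inode (mke2fs then rounds to fit whole block groups). A quick sketch, using a hypothetical 5 GiB device and the common 16384 default:

```shell
# Estimate how many inodes mke2fs would create for a given device size
# and bytes-per-inode ratio. The size and ratio here are illustrative.
device_size=$((5 * 1024 * 1024 * 1024))   # 5 GiB, hypothetical device
bytes_per_inode=16384                     # common default from /etc/mke2fs.conf
inodes=$((device_size / bytes_per_inode))
echo "$inodes"                            # 327680
```

Halving the ratio to 8192 would double the inode count, at the cost of more disk space reserved for inode tables.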


It depends on how you formatted the filesystem. Using tune2fs -l <device> you can find the number of inodes your device has (probably around 6 million in your case). Every file or directory uses an inode.

As far as I know, the only way to increase the number of inodes is to re-format the filesystem. The -i parameter to mkfs.ext4 specifies the bytes/inode ratio; the default value is defined in /etc/mke2fs.conf (16384 on my system).

A larger bytes/inode ratio means fewer inodes; a smaller one means more. The default value works well in most cases, but if you store a large number of small files, you may run into the limit and need to re-format the filesystem with a smaller bytes/inode ratio.
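You can experiment with -i safely on a throwaway file-backed image before touching a real partition (no root or spare disk needed; mkfs.ext4 will format a regular file with -F). With one inode per 4 KiB, a 64 MiB image should end up with roughly 64 MiB / 4 KiB = 16384 inodes:

```shell
# Format a temporary file-backed image with a smaller bytes/inode ratio
# and verify the resulting inode count. Sizes here are illustrative.
img=$(mktemp)
truncate -s 64M "$img"                    # sparse 64 MiB image file
mkfs.ext4 -q -F -i 4096 "$img"            # one inode per 4096 bytes
count=$(tune2fs -l "$img" | awk '/^Inode count/ {print $3}')
echo "Inode count: $count"                # roughly 16384
rm -f "$img"
```

On a real device you would run the same mkfs.ext4 -i invocation against the partition instead of the image file, after backing up: re-formatting destroys all data.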