Incorrect # of Hugepages in `numastat`

Huge pages on Linux aren't the easiest thing to understand, especially when some tools show things others don't and everyone does their own unit conversions.

The system-wide `/proc/meminfo` shows the total across all huge page sizes as `Hugetlb`. From the kernel documentation:

> **Hugetlb** is the total amount of memory (in kB), consumed by huge pages of all sizes. If huge pages of different sizes are in use, this number will exceed `HugePages_Total * Hugepagesize`. To get more detailed information, please, refer to `/sys/kernel/mm/hugepages`.
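
For example, on a box whose default huge page size is 1 GB, with 4x 1 GB pages plus 2560x 2 MB pages reserved (all numbers here are illustrative, not your system's):

```
$ grep -E 'HugePages_Total|Hugepagesize|Hugetlb' /proc/meminfo
HugePages_Total:       4
Hugepagesize:    1048576 kB
Hugetlb:         9437184 kB
```

`HugePages_Total` and `Hugepagesize` describe only the default-size pool; `Hugetlb` is the all-sizes total (4 GB + 5 GB = 9437184 kB here).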

`numastat -m` will output a "meminfo-like" report built from the per-NUMA-node stats in /sys/devices/system/node/node?/meminfo, but it also converts units to MB. I don't know why it apparently lacks a sum across all sizes. Maybe the kernel punted on this and leaves user tools to do what they want with the per-node data. Presumably the output you got covers only the 4x 1 GB pages.
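
You can read the per-node, per-size counts straight out of sysfs instead of relying on `numastat`. A quick sketch; the two-node layout and counts are made up to match the example above:

```sh
# Per-node, per-size pool counts straight from sysfs:
for f in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
  printf '%s: %s\n' "$f" "$(cat "$f")"
done
# Example output (hypothetical two-node box):
# /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages: 2
# /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages: 1280
# /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages: 2
# /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages: 1280
```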

`hugeadm` (from libhugetlbfs) bases its recommended `shmmax` on the sum of every page size pool under /sys/kernel/mm/hugepages/, as in the sketch below. `hugeadm --explain` is also useful for checking the default page size and the size of each pool.
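
A minimal sketch of that same aggregation in plain shell (this assumes the standard sysfs layout; it is the same arithmetic, not how `hugeadm` itself is implemented):

```sh
# Sum every pool under /sys/kernel/mm/hugepages; the kB size is encoded
# in each directory name (e.g. hugepages-2048kB).
total_kb=0
for d in /sys/kernel/mm/hugepages/hugepages-*kB; do
  size_kb=${d##*hugepages-}   # "2048kB"
  size_kb=${size_kb%kB}       # "2048"
  pages=$(cat "$d/nr_hugepages")
  total_kb=$((total_kb + size_kb * pages))
done
echo "huge page memory, all sizes: ${total_kb} kB"
```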


Using only one huge page size might be simpler to operate. Less than 5 GB of 2 MB pages is relatively small, so these could all be 2 MB pages. A 1 GB page size works too, but it can be an inefficient use of space for small allocations.
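
If you do standardize on 2 MB pages, 5 GB is 2560 of them, and writing to the size-specific sysfs knob reserves them explicitly regardless of which size is the system default (the count here is just this example's):

```sh
# Reserve 2560 x 2 MB pages (~5 GB); the size-specific path avoids any
# ambiguity with the default huge page size.
echo 2560 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
```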