Linux: find out what process is using all the RAM?
Solution 1:
On Linux, in the top program you can press the < key to shift the sort column one field to the left. By default the output is sorted by %CPU, so pressing the key four times sorts it by VIRT, the virtual memory size, which gives you your answer.
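If you want the same view without the interactive keystrokes, top's batch mode can sort directly; a minimal sketch, assuming a procps-ng top (roughly 3.3 or newer) that supports the -o sort-field override:
top -b -n 1 -o VIRT | head -n 17    # summary block plus roughly the 10 largest by VIRT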
Another way to do this is:
ps -e -o pid,vsz,comm= | sort -n -k 2
which should give you output sorted by each process's virtual size.
Here's the long version (note that ps has no long-option equivalent of -e, so only the remaining options can be spelled out):
ps -e --format=pid,vsz,comm= | sort --numeric-sort --key=2
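With procps ps you can also let ps do the ordering itself instead of piping through sort; a minimal sketch using its --sort option:
ps -e -o pid,vsz,comm --sort=-vsz | head -n 10    # the ten largest by virtual size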
Solution 2:
Show each process's memory in megabytes along with the process path. Column 6 of ps aux is the resident set size (RSS) in KiB; NR>1 skips the header line:
ps aux | awk 'NR>1 {print $6/1024 " MB\t\t" $11}' | sort -n
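Note that $11 is only the first word of the command, so arguments get cut off. If you want the full command line, a small variation (a sketch; any POSIX awk should handle it):
ps aux | awk 'NR>1 {printf "%.1f MB\t", $6/1024; for (i=11; i<=NF; i++) printf "%s ", $i; print ""}' | sort -n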
Solution 3:
Just a side note on a server that showed the same symptoms: memory was exhausted, yet no process appeared to be using it. What I ended up finding was a sysctl.conf taken from a box with 32 GB of RAM and set up for a DB, with huge pages configured to 12000. This box only had 2 GB of RAM, so it was assigning all free RAM to the huge pages (only 960 of them fit). Setting huge pages to 10, as none were used anyway, freed up all of the memory.
A quick check of /proc/meminfo to look for the HugePages_ settings can be a good start to troubleshooting at least one unexpected memory hog.
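For example (HugePages_* in /proc/meminfo and the vm.nr_hugepages sysctl are standard kernel interfaces; the value 10 is just the one from the fix above):
grep HugePages /proc/meminfo        # HugePages_Total/_Free show how many pages are reserved
sysctl vm.nr_hugepages              # the current pool size
sudo sysctl -w vm.nr_hugepages=10   # shrink the pool; persist it in /etc/sysctl.conf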
Solution 4:
Make a script called show-memory-usage.sh
with content:
#!/bin/sh
# Usage: ./show-memory-usage.sh <lines> -- list the <lines> biggest processes by RSS.
ps -eo rss,pid,user,command | sort -rn | head -n "$1" | awk '
  # RSS from ps is in KiB; scale it to a readable unit.
  { hr[1024^2]="GB"; hr[1024]="MB"; hr[1]="KB"
    for (x=1024^2; x>=1; x/=1024)
      if ($1>=x) { printf("%-6.2f %s ", $1/x, hr[x]); break } }
  { printf("%-6s %-10s ", $2, $3) }                     # PID and user
  { for (x=4; x<=NF; x++) printf("%s ", $x); print "" } # full command line
'
Make it executable with chmod +x show-memory-usage.sh
and call it like this: ./show-memory-usage.sh 10
(10 => show at most 10 lines)
Output Example:
5.54 GB 12783 mysql /usr/sbin/mysqld
1.02 GB 27582 root /usr/local/cpanel/3rdparty/bin/clamd
131.82 MB 1128 company+ /opt/cpanel/ea-php73/root/usr/bin/php /home/companyde/redesign.company.de/bin/magento queue:consumers:start inventory.mass.update --single-thread --max-messages=10000
131.21 MB 1095 company+ /opt/cpanel/ea-php73/root/usr/bin/php /home/companyde/redesign.company.de/bin/magento queue:consumers:start product_action_attribute.update --single-thread --max-messages=10000
131.19 MB 1102 company+ /opt/cpanel/ea-php73/root/usr/bin/php /home/companyde/redesign.company.de/bin/magento queue:consumers:start product_action_attribute.website.update --single-thread --max-messages=10000
130.80 MB 1115 company+ /opt/cpanel/ea-php73/root/usr/bin/php /home/companyde/redesign.company.de/bin/magento queue:consumers:start exportProcessor --single-thread --max-messages=10000
130.69 MB 1134 company+ /opt/cpanel/ea-php73/root/usr/bin/php /home/companyde/redesign.company.de/bin/magento queue:consumers:start inventory.reservations.update --single-thread --max-messages=10000
130.69 MB 1131 company+ /opt/cpanel/ea-php73/root/usr/bin/php /home/companyde/redesign.company.de/bin/magento queue:consumers:start inventory.reservations.cleanup --single-thread --max-messages=10000
130.69 MB 1107 company+ /opt/cpanel/ea-php73/root/usr/bin/php /home/companyde/redesign.company.de/bin/magento queue:consumers:start codegeneratorProcessor --single-thread --max-messages=10000
130.58 MB 1120 company+ /opt/cpanel/ea-php73/root/usr/bin/php /home/companyde/redesign.company.de/bin/magento queue:consumers:start inventory.source.items.cleanup --single-thread --max-messages=10000
Solution 5:
In my case the issue was that the server was a VMware virtual machine with the vmw_balloon
module loaded:
$ lsmod | grep vmw_balloon
vmw_balloon 20480 0
vmw_vmci 65536 2 vmw_vsock_vmci_transport,vmw_balloon
Running:
$ vmware-toolbox-cmd stat balloon
5189 MB
showed that around 5 GB of memory had in fact been reclaimed by the host. Ballooned memory is held by the kernel driver, so it never shows up against any process in ps or top. So despite my VM "officially" having 8 GB, in practice much less was available:
$ free
               total        used        free      shared  buff/cache   available
Mem:         8174716     5609592       53200       27480     2511924     2458432
Swap:        8386556        6740     8379816
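If vmware-toolbox-cmd isn't installed in the guest, a rough sketch of the same check, assuming debugfs is mounted at the usual place and your kernel's vmw_balloon exposes its stats there (mainline kernels create a vmmemctl entry):
sudo cat /sys/kernel/debug/vmmemctl    # current and target balloon size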